This article is from Telecoms.com. The full article was published on 24 February 2020 and is available here.
Telecoms.com periodically invites expert third parties to share their views on the industry’s most pressing issues. In this piece Hannes Gredler, CTO at RtBrick, has a look at why people aren’t talking about NFV as much as they used to.
Network Functions Virtualisation (NFV) was poised to bring virtualisation into the realm of network gateways and other functions, breaking the hard linkage between the hardware and software provided in integrated monolithic systems. It was positioned as the alternative to running networks on traditional equipment, delivering scalability, elasticity and adaptability, and allowing operators to select software from any vendor as they wished.
But the discussion around NFV seems to have died down, and many in the industry are wondering: where did all the hype go? Has NFV proved more difficult to implement than anyone thought? Were the benefits less than we hoped? Or has NFV just been quietly getting on with it?
The virtualisation challenge
Many operators want to move applications to the cloud, so it’s no surprise that they’re seeking nimble, cost-effective and agile infrastructures which can be used across multiple applications. Yet they often find themselves still bogged down by legacy architectures and traditional telecom systems, which are hard to migrate to open systems.
And it’s not just the specialist functions within the telco networks which have proved hard to virtualise. What about the network itself – remember Software Defined Networks (SDN)? In theory, NFV and SDN should have complemented each other, with SDN bringing flexibility to the network and NFV bringing speed and agility for new functions. But, as we now know, it hasn’t quite worked out that way.
Like NFV, SDN was supposed to bring about innovation, but the ‘classical SDN’ model lacked the scalability required by the large carriers. Several disadvantages emerged. A highly centralised control system made it vulnerable to catastrophic failure, and it was hard to contain any ‘blast radius’. It was restricted by the I/O limits of a single controller. And migration was hard, because centrally controlled network elements had to work side-by-side with legacy routers.
Ensuring success
However, virtualisation can be deployed effectively in a carrier network! And a good example of this is the Broadband Network Gateway (BNG) that terminates residential Internet subscriber traffic in the access network.
Traditional BNGs were based on monolithic routing systems. They often left carriers in a perpetual hardware replacement cycle, as each element of the chassis-based system needed upgrading in turn. Carriers couldn’t mix and match the best hardware with the best software – so equipment selection was always a compromise. Multiservice systems have to provide every feature that any service might require, whether or not it is being used. This is fundamentally bad economics, as well as being a testing nightmare!
But operators are now virtualising successfully by applying a web-scale approach to their carrier networks. For example, Deutsche Telekom’s Access 4.0 project makes use of merchant silicon-based bare-metal switches and container-based routing software that has more in common with cloud computing than with traditional telecoms systems.
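To make that concrete, here is a minimal sketch of what the cloud-style deployment model looks like in practice, using the Docker SDK for Python: the routing function ships as an ordinary container image and is launched on the bare-metal switch with a standard container runtime. The image name and all settings below are illustrative placeholders, not any vendor’s actual software.

# Minimal sketch: launching a containerised routing function on a
# bare-metal switch host, via the Docker SDK for Python.
# The image "example/bng-routing" and all settings are hypothetical.
import docker

client = docker.from_env()  # connect to the local container runtime

container = client.containers.run(
    "example/bng-routing:latest",       # placeholder image name
    name="bng0",
    detach=True,
    network_mode="host",                # share the host network stack so the
                                        # routing process can see the switch ports
    cap_add=["NET_ADMIN"],              # allow the container to manage interfaces
                                        # and kernel routing tables
    restart_policy={"Name": "always"},  # restart automatically, like any other
                                        # cloud workload
)
print(container.name, container.status)

The tooling matters less than the operational model: the routing software becomes just another workload that can be rolled out, upgraded and rolled back independently of the hardware underneath it.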
With Internet traffic growing relentlessly, it’s clear that things aren’t going to slow down any time soon. Providers are going to have to find ways to upgrade their infrastructure and remain competitive. Telco operators need to make sure they’re picking the best functions for virtualisation, but starting now and learning as they go will be essential if they are to develop an agile network and become more ‘internet-native’.
So, whatever happened to NFV? Well, whether we hear the term used as much or not, disaggregation of network software from hardware is happening for real and happening at scale. Expect to see a lot more of it.