The communications industry is abuzz with vendor and service provider activity in the NFV space. NFV has the potential to completely transform the way communication networks are designed, built and operated. NFV implements network functions as software entities and takes advantage of cloud technologies such as virtualization and cloud operations tools and techniques. As such, NFV promises a number of benefits, such as CAPEX and OPEX savings, reduced time for service turn-up, and a shift to a “fail fast” model for new services. Perhaps even more notably, NFV sets the stage for further innovation in the communications industry by lowering the barrier to entry for newer software-only vendors.
Among the notable activities in the NFV space is the important work being done by the ETSI NFV Industry Specification Group (ISG), which was founded by the world’s leading communications operators. The ISG is defining specifications for the secure and performance-assured management of the virtual network function (VNF) lifecycle, from definition and deployment through to operations. Most of this management is presumed to be largely agnostic of the VNF’s internal logic; i.e., the VNF is largely viewed as a black-box function. This is the correct model in the near term. VNFs vary widely in features and function and typically come with their own element management tools. As such, there is no immediate need for a general-purpose NFV management platform with a detailed view of, and interaction with, each VNF.
The vendor community has also been active in the NFV space. Early vendor activity has focused on virtualizing current network functions such as firewalls, DPI, VPN and IMS appliances that were traditionally implemented on dedicated hardware. The efforts of many of these vendors, with a few exceptions, are based on virtualizing their dedicated appliance implementations without much change. So, in a sense, these are transplanted functions rather than fundamentally re-architected ones.
The above are important industry activities driving the adoption of NFV, but there are two further areas where innovation must take place for NFV to be successful, and these have not received as much attention thus far: first, the business models for VNFs, and second, how VNFs are authored for the cloud. I’ll cover the first in this article and the second in a subsequent post.
Both the ETSI ISG and the vendor community bear some responsibility here, but for different reasons. In the ETSI ISG’s case, the standardization activities are agnostic to the specifics of each VNF, as noted above, so the ISG is not directly driving innovation in individual VNFs. In the VNF vendor community, the innovation challenges stem from business constraints and time-to-market pressures.
Business Model Innovation
As noted above, NFV was conceived as a dynamic cloud service. The US National Institute of Standards and Technology (NIST) definition of a cloud service describes it as on-demand and self-service, drawing on a common resource pool, rapidly elastic, and with resource usage controlled via metering capabilities that enable models such as pay per use.
Let us look at a specific example – a virtualized test head function – to see what this means. A “test head” is a centralized piece of test equipment designed to verify the performance of a network connection or a network service such as a Carrier Ethernet virtual private line (EVPL). The physical test head is typically built on a custom hardware appliance that consumes rack space and power whether or not it is in use. In addition, the operator of the test head must select a model that supports the performance and capacity required under worst-case conditions.
In the new virtual world, the operator would ideally like to spin up – on demand – an instance of the virtual tester to test each new Carrier Ethernet service as it comes online. The virtual tester is allocated capacity from a common cloud resource pool. The test is then run for minutes or hours as needed, and once the test is complete the virtual tester instance is torn down and the allocated capacity is released back to the common resource pool for use by other virtual functions. The virtual tester instance can also be scaled up or down to test multiple EVPL connections simultaneously, or to test multiple features or aspects of a connection, as appropriate.
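The spin-up/test/tear-down cycle described above can be sketched in a few lines of Python. Everything here – the class names, the vCPU-based resource pool, the method signatures – is a hypothetical illustration of the cloud usage model, not any vendor’s actual API:

```python
# Illustrative sketch of the on-demand virtual test head lifecycle.
# All names and units are assumptions made for this example.
import time


class ResourcePool:
    """A shared cloud resource pool, measured here in vCPU units."""

    def __init__(self, capacity_vcpus):
        self.capacity = capacity_vcpus
        self.allocated = 0

    def allocate(self, vcpus):
        if self.allocated + vcpus > self.capacity:
            raise RuntimeError("insufficient pool capacity")
        self.allocated += vcpus
        return vcpus

    def release(self, vcpus):
        self.allocated -= vcpus


class VirtualTestHead:
    """On-demand tester: spin up, run, scale, tear down, pay per use."""

    def __init__(self, pool, vcpus=2):
        self.pool = pool
        self.vcpus = pool.allocate(vcpus)   # draw from the common pool
        self.started = time.monotonic()

    def run_test(self, evpl_id):
        # Placeholder for an actual EVPL performance test.
        return {"service": evpl_id, "result": "pass"}

    def scale(self, extra_vcpus):
        # Elastically grow to test more connections in parallel.
        self.vcpus += self.pool.allocate(extra_vcpus)

    def teardown(self):
        # Release capacity back to the pool; bill only for what was held.
        self.pool.release(self.vcpus)
        return (time.monotonic() - self.started) * self.vcpus


pool = ResourcePool(capacity_vcpus=16)
tester = VirtualTestHead(pool, vcpus=2)
report = tester.run_test("EVPL-1001")
tester.scale(2)                 # scale up for simultaneous tests
cost_units = tester.teardown()  # capacity returns to the pool
```

The key property the sketch captures is that the pool’s allocation returns to zero after teardown: the operator pays only for resources held while the test actually runs.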
Implicit in the above flexible usage model is correspondingly flexible commercial licensing and product packaging from the VNF test head vendor. However, in my discussions with a number of virtualized appliance vendors, I have identified a trend towards restrictive licensing and product offerings.
This indicates that, even when the deployment and operational model for NFV is automated and brought in line with the cloud model, these licensing and product restrictions may preclude operators from taking full advantage of NFV’s cloud properties.
This is likely because vendors are afraid of cannibalizing their existing, sizeable physical-appliance revenue streams. In fact, on a price/performance basis, the physical appliances are often priced lower than their virtual brethren!
There are many implications for vendors. Virtual appliances do not have the traditional hardware refresh cycles that vendors have naturally benefited from, and they are more easily replaced by a competitor’s offering. In addition, flexible, usage-based pricing models require a number of internal transformations in vendors’ accounting, reporting and financial planning systems, and may also require changes to sales compensation structures. Finally, in fairness to the early VNF vendors, telecom service providers must still evolve their billing and operations systems to support these more dynamic, cloud-friendly business models and fulfill the promise of NFV.
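To make the pricing contrast concrete, here is a minimal, purely illustrative comparison of usage-metered charging against a fixed up-front licence. All rates and figures are invented for this sketch and do not reflect any vendor’s actual pricing:

```python
# Hypothetical comparison of appliance-style licensing vs. pay-per-use
# metering for a VNF. All numbers are made-up assumptions.

PERPETUAL_LICENSE = 50_000.0   # one-off fee; capacity fixed up front
RATE_PER_VCPU_HOUR = 0.75      # hypothetical metered rate


def metered_charge(usage_records):
    """Sum pay-per-use charges from (vcpus, hours) metering records."""
    return sum(vcpus * hours * RATE_PER_VCPU_HOUR
               for vcpus, hours in usage_records)


# A month of sporadic, on-demand test-head usage: short bursts only.
usage = [(2, 1.5), (4, 0.5), (2, 3.0)]   # (vCPUs, hours) per test run
monthly_charge = metered_charge(usage)    # → 8.25: 11 vCPU-hours at 0.75
```

The point of the sketch is simply that metered charges accrue only while tests are running, whereas the perpetual licence is paid regardless of utilization – which is exactly why a vendor’s accounting, reporting and compensation systems must change to support the metered model.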
So while some may ask “what’s in a price?”, our ability to reap the full benefits of NFV may well hinge on the significant innovations noted above in VNF vendor business models, together with billing-system transformations within both vendors and operators. In the next article, we will look at the architecture of VNFs and the innovation happening there.
About the Author
Ramesh Nagarajan is the head of Ensemble OSA™ product strategy and management at Overture Networks. He has nearly 20 years of experience in the telecom and cloud computing arenas in a variety of innovation-focused technology and business roles at IBM, Bell Labs, AT&T and Alcatel-Lucent. As a technology consultant and technologist, his work has shaped the strategies of various leading products and services, as well as the creation of new product and business segments. As a business leader, he has launched several new products and services, innovated business models, developed alliances and partnerships, and managed development teams and finances. Ramesh holds a Ph.D. in Electrical and Computer Engineering from the University of Massachusetts, Amherst and has completed the core MBA requirements at Rutgers University.
Contact Overture to learn how you can increase profitability by transforming service delivery.
Copyright © 2016 | All Rights Reserved | Overture Networks