NFV Performance and Security

Sep 3, 2014 by Prayson Pate


There are a number of obvious cost and efficiency benefits of Network Functions Virtualization (NFV), but there are also open questions on potential pitfalls.  Can an NFV system meet Communication Service Provider (CSP) requirements for performance and security?  As it turns out, the answer can be yes.  Furthermore, the design requirements for assuring performance and security are closely related.  What are those design requirements?

This blog includes contributions from Ian Miller.

 

Three Factors Affecting Predictable Performance

In our experience in the trenches of the NFV insurgency – specifically in creating our 65vSE platform for hosting virtual functions on the customer premises – we have discovered that the following factors can have a dramatic impact on the performance of a given NFV-based service:

1) Multi-core Processors

The first step to assuring performance is to dedicate cores of multi-core processors to hosting virtualized network functions (VNFs).  For maximum performance, VNFs should run on dedicated cores.  For lower-priority or less resource-intensive functions, fractional CPU allocation and/or oversubscription may be used in order to minimize cost.  However, both security and overall system performance benefit from the guaranteed minimum resource allocation and/or exclusivity that comes from core assignment.  We have found the availability of processors with multiple cores to be much more important than sky-high clock rates for both performance and security.
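As an illustrative sketch only – the VNF names and the "high"/"low" priority scheme are hypothetical, and real pinning is done at the hypervisor layer (e.g. libvirt vCPU pinning) – the core-assignment policy described above might look like this:

```python
# Illustrative core-assignment policy: high-priority VNFs get dedicated
# cores; lower-priority VNFs share an oversubscribed pool of the leftovers.
def assign_cores(vnfs, total_cores):
    dedicated, shared, next_core = {}, [], 0
    for name, priority in vnfs:
        # Always reserve at least one core for the shared pool.
        if priority == "high" and next_core < total_cores - 1:
            dedicated[name] = next_core  # exclusive core for this VNF
            next_core += 1
        else:
            shared.append(name)          # oversubscribed on remaining cores
    pool = list(range(next_core, total_cores))
    return dedicated, pool, shared

dedicated, pool, shared = assign_cores(
    [("vRouter", "high"), ("vFirewall", "high"), ("vProbe", "low")],
    total_cores=4)
print(dedicated, pool, shared)  # {'vRouter': 0, 'vFirewall': 1} [2, 3] ['vProbe']
```

The point of the sketch is the shape of the policy, not the mechanism: exclusivity for the functions that need guarantees, and a bounded shared pool for everything else.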

2) Acceleration Techniques

The next way to assure performance is to use tools such as Intel’s Data Plane Development Kit (DPDK) to accelerate packet processing in a virtualized environment.  DPDK uses dedicated cores to remove the inefficiencies of standard implementations of virtualization, and it helps in two main areas:

  1. Polled Mode. Moving from an interrupt-driven mode to a polled mode of operation eliminates context switching for the interrupt handler, which is a major bottleneck for performance.  A polled mode of operation also allows better scaling of performance with the number of cores because interfaces and queues can be better distributed across cores.
  2. Copy Reduction.  Copying packets is the traditional way that communication stacks handle transfers across components, but these copies are expensive in terms of performance.  With DPDK, incoming packets are transferred via a single copy into a DPDK memory buffer or ring.  There are no subsequent copies until the packet leaves DPDK.

    One drawback to the current implementation of DPDK is that shared rings are a security risk.  Any Virtual Machine (VM) that is given access to a DPDK ring can see all packets in that ring, which is a violation of multi-tenancy if rings are allocated on a per-interface or per-application basis.  The DPDK community is addressing these concerns with the user-space vhost, MEMNIC-pmd and other mechanisms that isolate rings among VMs.  This isolation does introduce a cost, because a packet copy is required when there is a transfer between pools or isolated rings.
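A rough analogy in plain Python (not DPDK – the ring and packet contents are stand-ins) shows both ideas at once: a busy-poll loop that drains the receive ring in bursts instead of taking an interrupt per packet, and zero-copy handles (memoryview) that let later stages read a packet without duplicating its bytes:

```python
import collections

# Stand-in RX ring: 8 "packets" that arrived via a single DMA-style copy.
rx_ring = collections.deque(bytearray(b"\x11" * 64) for _ in range(8))

def poll_burst(ring, burst=4):
    """Drain up to `burst` packets per poll; batching amortizes per-packet cost."""
    batch = []
    while ring and len(batch) < burst:
        batch.append(memoryview(ring.popleft()))  # zero-copy handle, no byte copy
    return batch

received = []
while rx_ring:                 # the poll loop: spin until the ring is drained
    received.extend(poll_burst(rx_ring))

assert len(received) == 8
assert received[0][0] == 0x11  # read through the view; the buffer was never copied
```

In a real poll-mode driver the loop spins on a dedicated core and the "handles" are mbuf pointers, but the performance argument is the same: no context switch per packet, and no copy per pipeline stage.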

The DPDK-enabled version of the vSwitch (virtual switch) is a great example of this optimization, and it delivers much higher performance than the stock version.  One important thing to note: in order to benefit fully from DPDK, the hypervisor framework and each VNF should be constructed to take advantage of DPDK.

3) Forwarding Architecture

Achieving high performance in a virtualized environment requires addressing potential bottlenecks at every layer of the networking stack:

  • Host network interface card driver
  • Host network stack
  • Layer 2 Ethernet functionality (switching, tag manipulation, shaping, etc.)
  • Virtual plane switching fabric (i.e. vSwitch)
  • Hypervisor
  • Guest network driver
  • Guest application (or network stack)

In order to maximize forwarding performance, the system design must take advantage of connections to the network interfaces that are high-bandwidth and low-latency, using processor offload techniques such as DMA for data movement and hardware assist for CRC calculation.  Furthermore, these attributes should apply to all network interfaces, whether they are Ethernet or some other technology accessed via PCIe interfaces.  Any bottleneck in network access (e.g. a low-bandwidth bus) will drastically reduce performance.  Without these “fat pipes” to all network PHYs (whether Ethernet, wireless, xDSL and/or TDM/SONET), performance will be severely limited.  This is one of the challenges for systems that “bolt on” processor engines to existing designs.

By leveraging direct connections, acceleration techniques and multi-core processors, it is now possible to create multi-VNF service graphs or chains that meet demanding performance requirements and deliver rich services with the flexibility delivered by the virtual environment.

Three Factors Affecting Security for the CSP and Subscribers

Following on the heels of the performance concerns is the question of security in a virtualized environment.  How can the CSP ensure that the subscribers’ service functions are isolated from each other, and that the CSP management network is isolated from the delivered services?

1) Protect the Host Kernel from the VNFs

In a virtualized environment it is essential to assure the integrity of the host kernel in order to guarantee proper system operation.  Hosting VNFs in VMs is a simple way to achieve this guarantee.  However, using VMs does incur some overhead in terms of processing and memory.  Some have proposed using containers rather than VMs as a way to lighten the load.  The drawbacks to containers are a) closer coupling of the VNF to the underlying system and b) the introduction of a possible security hole.  At Overture we believe that, in general, VMs are a safer and cleaner approach than containers.  However, there are some applications where containers make sense.  For certain performance-sensitive functions that require direct processor and memory access and are certified by Overture, there are exceptions to the VM guideline.  Finally, using Security-Enhanced Linux (SELinux) offers additional protection for many of the possible exploit vectors exposed by containers, further reducing the risk.

2) Protect the Subscribers from Each Other

Just as with performance optimization, multi-core processors provide an important tool for improving security in a virtual environment. Placing each subscriber’s VNFs on different cores helps provide isolation and an additional level of security. 

In addition, there are a number of tools built into OpenStack, libvirt, SELinux, and custom software that can be used to provide additional security. These techniques provide ways to:

  • Enforce resource allocation constraints on memory, disk utilization and host CPU utilization
  • Ensure minimum operational guarantees for VNFs and the platform, including VM management and the base Ethernet functions
  • Enforce resource access restrictions
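As one hedged illustration of the resource-allocation idea – a POSIX rlimit on a single process, not the per-VM cgroup/libvirt limits a real platform would apply, and with an arbitrary 1 GiB figure – the same "cap what a workload may consume" pattern looks like this on a Unix host:

```python
import resource

# Cap this process's address space: a single-process analogue of the per-VM
# memory limits a platform would enforce via libvirt/cgroups.
LIMIT = 1 * 1024**3  # 1 GiB; the value is illustrative, not a recommendation
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (LIMIT, hard))  # lower only the soft limit

new_soft, _ = resource.getrlimit(resource.RLIMIT_AS)
assert new_soft == LIMIT  # allocations beyond the cap will now fail
```

The operative point is the same as in the bullets above: the limit is enforced by the host below the workload, so a misbehaving (or compromised) function cannot starve its neighbors.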

3) Protect the CSP Management Network

The CSP must manage the deployed VNFs, but must also ensure that such management does not create an additional vulnerability.  A straightforward way to secure the management network is to deploy internal firewalls within the virtual environment.  Virtual firewalls included in the management service chain provide a mechanism that allows for management access to the VNFs without letting malicious traffic from the customer networks into the CSP management network.
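The policy that such a firewall enforces can be sketched as a toy packet filter – the subnets and the management-destination flag here are hypothetical, and a production deployment would use a real virtual firewall in the management service chain:

```python
import ipaddress

# Hypothetical address plan, for illustration only.
MGMT_NET = ipaddress.ip_network("10.0.0.0/24")         # CSP management network
CUSTOMER_NET = ipaddress.ip_network("192.168.0.0/16")  # subscriber traffic

def allow(src_ip, dst_is_mgmt):
    """Permit packets to the management plane only from the management subnet."""
    src = ipaddress.ip_address(src_ip)
    if dst_is_mgmt:
        return src in MGMT_NET  # drop customer-sourced management traffic
    return True                 # non-management traffic follows normal policy

assert allow("10.0.0.5", dst_is_mgmt=True)         # orchestrator reaching a VNF
assert not allow("192.168.1.9", dst_is_mgmt=True)  # customer host is blocked
assert allow("192.168.1.9", dst_is_mgmt=False)     # ordinary service traffic
```

The asymmetry is the whole design: management reaches into the service chain, but nothing in the customer path can originate traffic toward the management plane.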

The way we build services may have changed, but the expectations haven’t

The use of NFV to deliver services opens some great opportunities for cutting cost and accelerating innovation.  However, today’s expectations for predictable and secure services remain.  Successful deployments of NFV-based services must address these requirements. The good news is that there are ways to achieve this goal.  The challenge is to properly apply the required design principles, while taking the necessary steps to improve performance and security.

About the Authors

Prayson Pate is Chief Technology Officer and SVP of R&D at Overture, where he is also a co-founder. Prayson is a technology leader and evangelist with a proven track record leading teams and delivering products. Since 1983 he has been building Carrier Ethernet and telecom products for service providers and network operators around the world - both as an individual developer and as a leader of development teams. Prayson spends much of his time driving adoption of Overture's new Ensemble Open Service Architecture, which includes aspects of automation, virtualization, SDN and NFV. He has a BSEE from Duke, an MSECE from NC State and is the holder of nine US patents.

Ian D. Miller is a staff engineer at Overture Networks working on a carrier-class virtualization platform.  For over 18 years he has pursued the development of innovative technologies and solutions to real-world problems, with a solid track record of successfully fielded products and technologies.  He has worked at market-leading companies including Overture Networks and Xilinx, as well as several startups.  Ian has a BSEE from Virginia Polytechnic Institute and State University and holds multiple US patents.

 
