Host-based networking implements networking functions such as network virtualization, security, load balancing and telemetry in software on the server. Software-defined networking (SDN) and network functions virtualization (NFV) build on this foundation and are intended to operate on commercial off-the-shelf (COTS) hardware equipped with general-purpose x86 CPUs and network interface cards (NICs). But will such a configuration result in an efficient implementation and peak price/performance in the servers being used as compute or service nodes? Extensive testing provides a definitive answer: No.
General-purpose CPUs are simply not designed for high-performance host-based networking, resulting in an extraordinarily high percentage of CPU cycles (and cores) being consumed by networking tasks that could be offloaded to an intelligent server adapter. In critical use cases, the use of such an intelligent server adapter can deliver more than a 20x improvement in overall price/performance.
This three-part series takes an in-depth look at the promise and pitfalls of using general-purpose CPUs in host-based networking applications, and explores the different technologies being used in intelligent server adapters.
The Proliferation of Host-Based Networking
Host-based networking has garnered considerable industry interest. It has been widely deployed in large cloud data centers to enable significant efficiency gains and to increase the pace of innovation. Sophisticated networking functions such as virtualization, security, load balancing and telemetry have been moved away from networking infrastructure equipment, such as switches and purpose-built appliances, and have been implemented using software on COTS servers. The virtual-switch data path, virtual network functions (VNFs) and other operating-system data-path elements such as iptables are examples of these software applications. The advantages of this approach are twofold. First, it simplifies the networking infrastructure, which now handles simple plumbing between servers, while more-complex functions are implemented at the end points, leading to more-efficient scaling. Second, use of software-based data paths is analogous to SDN, enabling rapid innovation and control over new feature rollouts.
Indeed, the current bellwethers of host-based networking, namely the large data center operators, have long endeavored to evolve networking where it is easiest and most natural: in the server, where operators have full control of the networking stack. It makes perfect sense, of course, to evolve networking close to where the applications that use it actually run.
With the increased use of virtual machines (VMs) in cloud infrastructures, the virtual switch that sits close to the VMs in the server has become a focus for innovation. More specifically, the virtual-switch data path that processes packets destined for applications in VMs has become the area where host-based networking has seen most of its success to date. Figure 1 depicts where such a virtual-switch data path is located in x86-based servers. The opportunities that exist here get to the essence of SDN and NFV and their promise to afford enormous savings in both operational and capital expenditures in data centers worldwide.
Figure 1: Host-based networking using the virtual-switch data path.
Because the virtual-switch data path is implemented in software and runs in the familiar and ubiquitous x86-based CPU environment, data center operators have been able to freely add new features and quickly roll them out in production deployments. One such feature is tunneling for multitenant network virtualization, which also evolved initially inside large data centers in various forms and has only recently been standardized across the industry with specifications like Virtual Extensible LAN (VXLAN) and Network Virtualization Using Generic Routing Encapsulation (NVGRE).
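To make the tunneling workload concrete, the following minimal Python sketch builds the 8-byte VXLAN header defined in RFC 7348 that a tunnel end point in the virtual-switch data path must prepend to every tenant frame before wrapping it in outer UDP/IP headers. The function names and the sample frame are illustrative, not taken from any particular virtual-switch implementation.

```python
import struct

VXLAN_UDP_PORT = 4789        # IANA-assigned destination UDP port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the 24-bit VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Layout: flags (1 byte), reserved (3 bytes), VNI (3 bytes), reserved (1 byte).
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3s3sB",
                       VXLAN_FLAG_VNI_VALID,
                       b"\x00\x00\x00",
                       vni.to_bytes(3, "big"),
                       0)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to an inner Ethernet frame.

    A real tunnel end point would also add outer UDP/IP/Ethernet headers
    addressed to the remote VTEP (UDP destination port 4789); that step
    is omitted here for brevity.
    """
    return vxlan_header(vni) + inner_frame

# Tenant 42's 64-byte frame, ready to be carried toward the remote VTEP.
payload = encapsulate(b"\xff" * 64, vni=42)
```

Even in this stripped-down form, the per-packet work of validation, header construction and copying hints at why tunnel processing in software consumes CPU cycles.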
Other recent innovations in both proprietary and open-source virtual-switch domains (e.g., Open vSwitch) involve new tunneling technologies for network virtualization and traffic engineering, as well as faster, more scalable processing of match-action rules used to implement the policies that govern access for applications and users. Future enhancements may encompass stateful security using Linux connection tracking, sophisticated load balancing or flow tracking to provide cut-through or accelerated processing of the data path, as Figure 2 shows.
Figure 2: Evolving the virtual-switch data path with new features.
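As an illustration of what match-action rules look like in practice, here is a small, hypothetical Python sketch of a priority-ordered flow table. A production virtual switch such as Open vSwitch supports far richer match fields, wildcarding and flow caching, but the per-packet lookup work is similar in spirit.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Headers = Dict[str, object]  # illustrative parsed header fields for one packet

@dataclass
class Rule:
    priority: int
    match: Headers                    # field -> required value (exact match)
    action: Callable[[Headers], str]  # e.g. forward to a port, tunnel, or drop

def lookup(rules: List[Rule], pkt: Headers) -> str:
    """Return the action of the highest-priority matching rule; drop otherwise."""
    for rule in sorted(rules, key=lambda r: r.priority, reverse=True):
        if all(pkt.get(field) == value for field, value in rule.match.items()):
            return rule.action(pkt)
    return "drop"

# A tiny policy: block SSH to a tenant VM, deliver everything else addressed to it.
rules = [
    Rule(200, {"ip_dst": "10.0.0.5", "tcp_dst": 22}, lambda p: "drop"),
    Rule(100, {"ip_dst": "10.0.0.5"},                lambda p: "output:vm3"),
]

print(lookup(rules, {"ip_dst": "10.0.0.5", "tcp_dst": 443}))  # output:vm3
print(lookup(rules, {"ip_dst": "10.0.0.5", "tcp_dst": 22}))   # drop
```

Every packet entering the data path must be parsed and pushed through some variant of this lookup, which is why rule processing features so prominently in the CPU-cycle discussion that follows.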
Besides the virtual switch, other forms of host-based networking are evolving—for example, Layer –based approaches such as virtual routers with Juniper’s open-source Contrail solution, or the open-source Project Calico solution.
Implications of Host-Based Networking
Implementing sophisticated network-related processing in software on the host consumes a significant number of x86 CPU cycles. This demand, of course, reduces the CPU cycles that are available for VMs, VNFs and other applications. The CPU cycles required for network processing are increasing dramatically with the relentless rise in data rates, such as the transition from 10GbE connectivity in servers today to 25, 40 and 50GbE, as well as with the growing numbers of tenants and VMs per server, which require processing of many more tunnels, services and security policies. The situation will only be exacerbated with the deployment of containers instead of VMs.
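A rough back-of-envelope calculation illustrates the squeeze. Assuming a 2.5GHz core and minimum-size (64-byte) Ethernet frames at line rate, the snippet below estimates the per-core cycle budget available for each packet; the clock speed and frame size are illustrative assumptions rather than figures from this article.

```python
def cycles_per_packet(link_gbps: float, frame_bytes: int = 64,
                      core_ghz: float = 2.5) -> float:
    """Per-core CPU cycle budget per packet at line rate.

    Each frame occupies frame_bytes plus 20 bytes of preamble and
    inter-frame gap on the wire.
    """
    wire_bits = (frame_bytes + 20) * 8
    packets_per_second = link_gbps * 1e9 / wire_bits
    return core_ghz * 1e9 / packets_per_second

for speed in (10, 25, 40, 50):
    print(f"{speed:>2} GbE: {cycles_per_packet(speed):5.0f} cycles per packet per core")
# 10 GbE: ~168 cycles, 25 GbE: ~67, 40 GbE: ~42, 50 GbE: ~34
```

A budget of a few tens of cycles per packet leaves little room for tunnel encapsulation, policy lookups and connection tracking, which is why multiple cores end up dedicated to the data path.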
Consider the relatively simple example of processing a network-virtualization tunnel end point with a basic match-action security policy, which can consume as many as four Intel Xeon-class CPU cores while achieving only 50 percent of the data rate in a 10GbE network. The demand on the CPU is far higher for more-sophisticated host-based networking functions, and the problem will become significantly worse as server connections move to 25, 40 and 50GbE in the next few years. Figure 3 shows an example of such efficiency degradation in terms of throughput per dollar and throughput per watt (power) for a typical data-path processing scenario.
Figure 3: Throughput-per-dollar and throughput-per-watt (power) degradation resulting from data-path processing in software.
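As a purely illustrative companion to Figure 3, the sketch below attributes hypothetical per-core cost and power figures to the cores consumed by data-path processing and computes the resulting throughput per dollar and per watt; the $150-per-core and 10-watt-per-core values are placeholders, not measurements from this article.

```python
def effective_metrics(delivered_gbps: float, cores_used: int,
                      cost_per_core_usd: float = 150.0,
                      watts_per_core: float = 10.0):
    """Throughput per dollar and per watt of the cores spent on networking.

    The per-core cost and power figures are hypothetical placeholders used
    only to show how the ratios degrade as more cores are consumed.
    """
    dollars = cores_used * cost_per_core_usd
    watts = cores_used * watts_per_core
    return delivered_gbps / dollars, delivered_gbps / watts

# The example above: four Xeon-class cores delivering 5Gbps (50 percent of 10GbE).
per_dollar, per_watt = effective_metrics(delivered_gbps=5.0, cores_used=4)
print(f"{per_dollar:.4f} Gbps per dollar, {per_watt:.3f} Gbps per watt")
```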
There are two additional implications of the proliferation of host-based networking deployments. First, the need for intelligence in the network switching infrastructure to implement sophisticated security and congestion-management policies at scale, on a per-application and/or per-user basis, is reduced substantially. This effect is especially helpful because the physical limits of silicon technology make it difficult to implement sophisticated policy management efficiently in switching silicon while also scaling bandwidth to multiple terabits per second of packet processing and delivering the lowest port-to-port latency. This simplification of the switching infrastructure in the data center has facilitated the adoption of so-called “white box” switches with disaggregated networking operating systems, resulting in substantial capital- and operational-cost savings.
The second implication is how networking appliances, such as load balancers and firewalls, are implemented. With the simplification of the network infrastructure and the desire of data center operators to control the development and deployment of new features, the appeal of COTS server-based applications is growing. These “super servers” that implement the most demanding network functions are sometimes called service nodes or network nodes, particularly by carriers. Of course, many of these same network functions can also be implemented in compute nodes in a distributed fashion using elements of host-based networking such as the kernel network stack and virtual-switch data paths.
The Emergence of Hardware-Accelerated Host-Based Networking
Hardware-based acceleration of networking functions in servers is not new. New classes of server adapters have evolved periodically, driven by increasing data rates and the growing sophistication of networking functions in servers. For example, when stateless offloads became important, a new breed of 1GbE server adapters emerged to provide them. Support for simple functions like checksum offload evolved into support for more-advanced ones, such as large send offload, receive-side scaling and others, driven by the practical need to conserve dwindling server CPU cycles.
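As a concrete reminder of what even the simplest of these offloads replaces, the following Python sketch computes the 16-bit ones'-complement Internet checksum (RFC 1071) that checksum offload moves from the host CPU onto the NIC; the sample header bytes are a commonly used IPv4 worked example, not data from this article.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum (RFC 1071) used by IPv4, TCP and UDP.

    This per-packet arithmetic is exactly what "checksum offload" relieves
    the host CPU of performing.
    """
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data to a 16-bit boundary
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF

# Sample IPv4 header with its checksum field zeroed; prints 0xb1e6.
header = bytes.fromhex("4500003c1c4640004006" "0000" "ac100a63ac100a0c")
print(hex(internet_checksum(header)))
```

Multiplied across millions of packets per second, even this small routine becomes a measurable drain on server CPU cycles, which is why it migrated to the adapter early on.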
Such application-specific accelerations and offloads became much more prevalent with the adoption of 10GbE, and application-optimized server adapters saw wider deployment as a result. Examples include storage-area networking (SAN) optimized adapters using Internet Small Computer System Interface (iSCSI) and Fibre Channel over Ethernet (FCoE), as well as high-performance computing and clustered storage-optimized adapters using kernel bypass and Remote Direct Memory Access (RDMA) offload technologies, such as iWARP and RDMA over Converged Ethernet (RoCE).
More recently, with the rapid emergence of host-based networking at 10, 25, 40 and 50GbE speeds, server adapters designed for networking data-path acceleration, called intelligent server adapters or SmartNICs, are becoming critical. Such adapters are called intelligent or smart because they bring the best of both worlds: they enable hardware-based acceleration and server efficiencies, but they also have the ability to evolve rapidly, a critical attribute for keeping the pace of software innovation intact.
The second article in this series will explore the three fundamental technologies being used in intelligent server adapters designed for host-based networking applications, and it will provide guidance on solutions that deliver the best price/performance.
About the Author
Nick Tausanovitch has over 25 years of experience in the electronics and networking industries in positions ranging from FPGA and silicon design to system architecture to product marketing. Nick is currently senior director of solutions architecture at Netronome, where he is responsible for data center applications of the company’s intelligent server adapter products. Previously, he was responsible for the high-end network-processor product line at Broadcom. Before that, Nick was Director of Electronic Design at IDT, where he developed TCAMs and algorithmic search engines, and a system architect at Nortel, where he developed switches, routers and network processors. Nick holds a Bachelor of Science degree in electrical engineering from the University of Rochester and a Master of Science in electrical engineering from New York Polytechnic University.