Implementing QoS in Virtualized Environments

July 23, 2012

The adoption of virtualization technology has had a dramatic effect on the infrastructure of today’s data center, necessitating the emergence of 10Gb Ethernet connectivity to handle the increasing demands for I/O capacity. Servers with multiple virtual machine instances that merge traffic flows from multiple applications over a single 10GbE physical interface benefit from an effective I/O quality of service (QoS) implementation to ensure that the connection delivers the required service levels for business-critical applications. This article describes the benefits of and requirements for an effective I/O QoS solution in a virtualized environment.

In a traditional data center server implementation, application servers are “stove-piped,” meaning there is one application per physical server. In this environment, server resources are typically underutilized and the setup requires many physical servers to run an organization’s applications. Virtualization offers a solution for server utilization issues by allowing multiple logical containers for OS and application instances, known as virtual machines (VMs), to exist on a single host server. With virtualization, you can quickly provision and deploy multiple workloads and support faster application deployment times while simultaneously lowering power, space and cooling requirements.

But when multiple VMs use the same resource, such as a network interface card (NIC), administrators need the ability to set usage parameters for each VM to guarantee appropriate levels of service for each application. QoS implemented across multiple virtual NIC (vNIC) instances provides this ability. QoS subdivides each 10GbE connection into several virtual pipes, to which specific applications may then be assigned. Those virtual pipes are guaranteed never to have less capacity than they were individually assigned, but they may be configured to use any capacity that is currently unused, up to 100% of the 10GbE pipe.

For example, you might have Exchange, SQL, SharePoint, and web applications running in your virtualized environment. You can set your QoS so that 40% of the pipe is reserved for the web applications. If web traffic is not currently using that 40%, the unused portion can be used by the other applications. But if the pipe reaches its traffic-handling capacity, you can rest assured that web applications will always get the 40% of the pipe that you allocated to them. And if you find that 40% of the capacity isn’t quite enough and you need to change the allocations, raising web traffic to 50%, you can do that “on the fly” without rebooting the system.
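
To make the mechanics concrete, here is a minimal Python sketch of this kind of allocator. It is an illustrative model only, not adapter firmware or any vendor's implementation; the application names and percentages follow the example above, and the proportional burst policy is one plausible choice among several.

```python
# Illustrative model of minimum-guarantee QoS with bursting.
# Not vendor code: a simplified, single-interval allocator.

def allocate(capacity_gbps, guarantees, demands):
    """Give each app its guaranteed share first, then redistribute
    unused capacity to apps whose demand is still unmet."""
    # Phase 1: each app gets the smaller of its demand and its guarantee.
    alloc = {app: min(demands[app], guarantees[app] * capacity_gbps)
             for app in guarantees}
    spare = capacity_gbps - sum(alloc.values())
    # Phase 2: hand spare capacity to apps that still want more,
    # proportionally to their guarantees (one possible burst policy).
    while spare > 1e-9:
        hungry = {a for a in alloc if demands[a] - alloc[a] > 1e-9}
        if not hungry:
            break
        total_w = sum(guarantees[a] for a in hungry)
        granted = 0.0
        for a in hungry:
            extra = min(spare * guarantees[a] / total_w, demands[a] - alloc[a])
            alloc[a] += extra
            granted += extra
        if granted < 1e-9:
            break
        spare -= granted
    return alloc

guarantees = {"web": 0.40, "exchange": 0.20, "sql": 0.20, "sharepoint": 0.20}
# Web is quiet tonight, so the other applications can burst past
# their guarantees into web's unused share.
demands = {"web": 1.0, "exchange": 5.0, "sql": 4.0, "sharepoint": 2.0}
print(allocate(10.0, guarantees, demands))
```

Note that raising the web guarantee from 0.40 to 0.50 is a one-line change to the guarantees table; on an adapter that supports it, the equivalent reallocation can be applied on the fly without a reboot.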

With QoS you can even tailor bandwidth utilization for optimum productivity as the day goes on. In the middle of the day, you might need higher bandwidth for Exchange, while at night your backup window kicks in and your backup application needs significant bandwidth. You can use QoS to assign a minimum bandwidth level to the backup operation and let it burst up to its maximum level (100%) when other applications such as Exchange are not using their allotted bandwidth. With this oversubscription paradigm, currently unused bandwidth is automatically made available to other applications that need it; without it, as with QoS solutions that provide only rate limiting, bandwidth allocation is fixed and any unused bandwidth is wasted.

QoS Implementations

As with most data center functions, there is more than one way to provide QoS.

  • Hypervisor QoS—QoS functions are available in the virtual switching component that handles LAN traffic within most hypervisors, such as the vSwitch in VMware’s ESX. Hypervisor traffic handling is software based and therefore consumes server RAM and CPU, because the host must spend part of its time performing switching functions, including Layer 2 forwarding, traffic scheduling, and QoS.
  • Adapter-based QoS with hardware protocol offload—Moving the switching functions from the hypervisor to the adapter offers not only QoS with minimum bandwidth guarantees, but also hardware offload capabilities, which lower server RAM and CPU utilization and maximize overall system performance. QLogic technology for switch-agnostic NIC Partitioning (NPAR) dedicates bandwidth and QoS to individual traffic types, such as VM mobility, hypervisor management, or storage traffic, along with the various applications running in the VMs being serviced, as recommended by hypervisor vendors such as VMware (Source: VMware vSphere 4.1 Hardening Guide, April 2011, vNetworking section, pages 54–59). One of the primary use cases for 10GbE is as a replacement for multiple GbE NICs to support exactly these types of services. All switching-related processing, including VM-to-VM communication without an external switch, is handled on the adapter rather than on the server, dedicating server resources (RAM and CPU) to applications instead of network protocol processing and QoS. NPAR operates independently of the network infrastructure: it works with any 10GbE switch, and no additional switch or management tools are needed to deploy virtual partitions. NPAR may be enabled on QLogic 10GbE Intelligent Ethernet Adapters or on QLogic 10GbE Converged Network Adapters, which can simultaneously support TCP/IP LAN traffic and iSCSI and FCoE SAN traffic, maximizing flexibility and future-proofing the infrastructure while minimizing TCO.
  • Adapter-based QoS with software initiators—In this case the NIC handles the QoS, but protocol processing is still performed in software on the CPU, which cuts into the cycles available for applications.
  • Rate limiting—Some QoS implementations support only a simple rate-limiting paradigm rather than intelligent QoS with a minimum bandwidth guarantee. With rate limiting, each application is assigned a fixed amount of bandwidth that it cannot exceed, regardless of whether the bandwidth assigned to other applications is in use. This approach limits flexibility and strands bandwidth throughout the day that could otherwise improve the performance of the applications, as the sketch after this list illustrates.
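
The difference between the two paradigms is easy to see in a small sketch. The following models a hypothetical night-time backup scenario (all numbers are illustrative, not measurements), comparing a hard rate limit against a minimum guarantee with bursting:

```python
# Hypothetical night-time scenario: backup wants the whole pipe,
# Exchange is nearly idle. Numbers are illustrative only.
CAPACITY = 10.0  # Gb/s
backup_demand, exchange_demand = 10.0, 0.5

# Rate limiting: each app is capped at its fixed share, full stop.
rate_limited_backup = min(backup_demand, 0.6 * CAPACITY)   # capped at 6 Gb/s

# Minimum guarantee: backup keeps at least its 6 Gb/s and may also
# burst into whatever Exchange is not using.
unused = (0.4 * CAPACITY) - exchange_demand                # 3.5 Gb/s idle
guaranteed_backup = min(backup_demand, 0.6 * CAPACITY + unused)

print(f"rate limiting: backup gets {rate_limited_backup} Gb/s, "
      f"{CAPACITY - rate_limited_backup - exchange_demand} Gb/s stranded")
print(f"min guarantee: backup gets {guaranteed_backup} Gb/s")
```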

Benefits of Full Protocol Offload

When deciding on a 10GbE NIC solution, there are two implementation choices: software initiators and offload engines. Both can deliver the desired functions, but only the offload engine approach conserves critical CPU resources; software initiators offer savings only in initial deployment cost.

Cost matters in the adoption of 10GbE, but an implementation cannot succeed if performance and reliability suffer materially because the server CPU is burdened with protocol processing and hypervisor-based switching. Even at a lower price, 10GbE software initiators and partial offload solutions will have a difficult time competing with full offload technology.

Virtualization, a target application driving the adoption of 10GbE, requires considerable CPU horsepower to effectively scale VMs. The aggregation of several applications on the same physical server multiplies the I/O capacity requirement, stressing the I/O subsystem and creating a potential performance bottleneck when the server CPU and RAM are required for protocol processing. This limits virtualization scalability. With a network adapter that employs a hardware offload strategy, CPU and RAM resources are protected, enabling users to obtain the full benefits of their virtualized environment, including the QoS capabilities described previously. Offloading 10GbE processing provides greater cost savings and ROI for virtualized environments. A NIC that employs hardware offload removes performance barriers for the virtualization of transactional enterprise applications.
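
A back-of-envelope calculation suggests the scale of the problem. It uses the old rule of thumb that software TCP/IP processing costs roughly 1 Hz of CPU per 1 bit/s of throughput; this is an approximation only, and actual offload savings vary by workload and implementation.

```python
# Back-of-envelope only: the classic ~1 Hz of CPU per 1 bit/s rule of
# thumb for software TCP/IP processing. Real savings vary by workload.
line_rate_bps = 10e9   # a 10GbE link at line rate
hz_per_bps = 1.0       # rule-of-thumb software TCP/IP cost
core_hz = 2.5e9        # one 2.5 GHz core (illustrative)

cpu_hz_needed = line_rate_bps * hz_per_bps
cores_consumed = cpu_hz_needed / core_hz
print(f"Software initiators at line rate: ~{cores_consumed:.0f} cores "
      "busy with protocol processing instead of running VMs")
```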

Flexibility is another important factor in a 10GbE solution. The right NIC will support both iSCSI with full TCP offload and 10Gb Ethernet NIC features with stateless offload, further freeing the host CPU for application and virtualization scaling. Along with the scalability, performance, and offload advantages over software initiators, a NIC using hardware offload delivers additional operational efficiencies, such as eliminating PCI bus bottlenecks and ensuring data integrity with enhanced data delivery and error recovery capabilities.

10GbE applications that move significant amounts of data, and information systems that process large files, require solutions that free up CPU resources while increasing bandwidth and performance. Software initiators lack maturity for enterprise applications, and it will take years of qualification testing for them to reach the levels of performance and reliability already established by a properly designed network interface that incorporates hardware offload.

QoS Requirements

The best QoS capabilities come from systems that deliver maximum flexibility in bandwidth usage across the available applications. The requirements include the following (a configuration sketch follows the list):

  • Adapter-based QoS with hardware protocol offloads to avoid using server CPU and RAM resources
  • The ability to assign applications to virtual connections with a guaranteed amount of bandwidth
  • The ability to allow applications to burst above their guaranteed amount if extra bandwidth is available
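
These requirements map naturally onto configuration constraints. The sketch below uses hypothetical field names (not QLogic's actual NPAR configuration format) to show the two invariants a valid setup must satisfy: minimum guarantees that sum to no more than the pipe, and burst ceilings at or above each guarantee.

```python
# Hypothetical partition configuration; field names are illustrative,
# not QLogic's actual NPAR settings.
partitions = [
    {"name": "vm_traffic", "min_pct": 40, "max_pct": 100},
    {"name": "vmotion",    "min_pct": 20, "max_pct": 100},
    {"name": "storage",    "min_pct": 30, "max_pct": 100},
    {"name": "management", "min_pct": 10, "max_pct": 50},
]

def validate(parts):
    # Guarantees must be fundable even when every partition is busy.
    assert sum(p["min_pct"] for p in parts) <= 100, "guarantees oversubscribed"
    for p in parts:
        # Each partition may burst above its guarantee, never below it.
        assert p["min_pct"] <= p["max_pct"] <= 100, f"bad range: {p['name']}"

validate(partitions)
print("QoS configuration satisfies the requirements above")
```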

Companies using virtualized environments require QoS to guarantee the performance of mission-critical applications. The QoS solution should provide the flexibility to use the entire 10GbE pipe in order to maximize productivity and resource utilization. Only adapter-based QoS solutions that incorporate hardware protocol offloads can deliver the highest performance in virtualized environments.

About the Authors

Bill Henderson is Senior Principal Solution Architect in OEM Systems Engineering at QLogic. Troy Dunavan is a Solutions Engineer at QLogic.

