Central processing units (CPUs) in enterprise servers are the brains of the data center, yet CPU inefficiency is one of the most overlooked issues in IT today. Just as high unemployment creates an idle economy, CPU underutilization creates idle data centers, putting unnecessary strain on modern businesses. CPU computing power goes underutilized when processors must wait for data. As a result, companies overspend on servers, storage and software trying to meet application performance demands. Luckily, enterprises have options to harness the full potential of their CPUs and put them back to work. Let’s explore the details.
Roots of the Data Supply Problem
Success metrics for data center and application efficiency largely relate to the speed at which data can be delivered from storage to applications, and ultimately on to end users. But today’s ultra-fast CPUs are frequently left underutilized because of the infrastructure complications of getting the right data to the right application at the right time. As CPU performance has continued to double approximately every 18 months, storage has not kept up, leading to infrastructure sprawl such as the following:
- Single-purpose servers: Many applications are still tied to physical servers, as architects worry that virtualization cannot deliver the performance necessary for data-intensive tasks.
- Excessive memory footprints: When servers cannot get data from disk drives quickly, they often rely on excessive DRAM spending to compensate. This DRAM is expensive, relatively small in capacity, power hungry, and not persistent. Often, excessively scaled-out, memory-dependent architectures result in even lower CPU utilization than traditional scale-up enterprise deployments.
- High network spending: Aggregating enough storage subsystems, each with large numbers of disk drives, across a large number of servers requires heavy network spending. This includes network switches, host bus adapters (HBAs) and target channel adapters.
- Complex and expensive storage configurations: Trying to deliver storage performance from the far side of the network means compensating for excessive disk drive latency with more systems, RAID controllers, storage processors and memory.
Figure 1 illustrates the impact of this infrastructure sprawl.
Figure 1: The data supply problem.
Economic Implications of the Data Supply Problem
The economic impact of the data supply problem is visible in the spending on data center infrastructure and application build-out, which is estimated at approximately $1.25 trillion across the following categories, also shown in Figure 2.
- Servers: Of the ~$51 billion in server spending each year, approximately $20 billion is for memory-rich servers because of I/O bottlenecks.
- Software: Of the ~$250 billion enterprise software market, as much as ~$120 billion is estimated to be underutilized software: for example, licenses on servers that are doing only a fraction of the work they are capable of.
- Networking: In the ~$24 billion networking market, as much as half is estimated to be for high-performance networking.
- Storage: Across the ~$32 billion enterprise storage market, approximately $20 billion is performance-optimized.
- Services: The largest spending area, professional services, accounts for approximately $832 billion, of which approximately $60 billion is related to optimizing the data supply chain.
- Power and Cooling: All of this equipment needs power and cooling, an approximately $61 billion market, of which half is attributed to power and cooling for underutilized servers.
Figure 2: Estimated traditional data center spend. (Source: third-party forecasts and company estimates)
It is important to remember that adding more CPUs, software, networking or storage has a dramatic effect on overall spending and infrastructure. A real-world analogy would be building more and more factory capacity while the factory workers sit idle most of the time because they do not have the parts needed to do their work.
Putting CPUs Back to Work
Fortunately, new solutions based on the effective placement of flash memory within data center servers are enabling IT architects to put their CPUs back to work and, in turn, reduce excessive data center spending.
Much of the data supply problem results from CPUs having to wait for data, both because that data comes from slow, mechanical disk drives and because those drives sit on the far side of a latency-inducing storage network. By placing process-critical data in flash memory, and placing that flash memory close to the CPU, the data supply problem begins to evaporate.
Now, CPUs can access important data in microseconds rather than milliseconds, which relieves latency bottlenecks and removes the impediments that keep CPUs from reaching their full potential, or more specifically, the real threshold of their maximum transactions per second.
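To see why the microsecond-versus-millisecond difference matters, consider a simple back-of-the-envelope queuing calculation. The sketch below uses illustrative, assumed latencies and an assumed queue depth (not measured figures from any particular system): with a fixed number of outstanding requests, the I/O rate an application can sustain is bounded by queue depth divided by per-request latency.

```python
# Back-of-the-envelope I/O throughput bound (Little's Law: throughput = queue depth / latency).
# The latency and queue-depth figures below are illustrative assumptions, not measurements.

def max_iops(queue_depth: int, latency_seconds: float) -> float:
    """Upper bound on I/O operations per second for a given queue depth and per-request latency."""
    return queue_depth / latency_seconds

DISK_LATENCY = 5e-3    # ~5 ms random read from a mechanical disk (assumed)
FLASH_LATENCY = 50e-6  # ~50 microseconds from flash placed close to the CPU (assumed)
QUEUE_DEPTH = 32       # outstanding requests the application keeps in flight (assumed)

print(f"Disk:  {max_iops(QUEUE_DEPTH, DISK_LATENCY):,.0f} IOPS")   # ~6,400 IOPS
print(f"Flash: {max_iops(QUEUE_DEPTH, FLASH_LATENCY):,.0f} IOPS")  # ~640,000 IOPS
```

Under these assumptions the gap is roughly a hundredfold, which is why waiting on disk, rather than raw compute, so often caps transactions per second.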
Figure 3 shows a sample configuration where process-critical data is placed on flash memory near the CPU within the server. Latency is further reduced by using the PCIe bus natively rather than relying on slower storage protocols like SAS and SATA, which are well suited to rotating disks but not to silicon-based flash memory. Existing hard-disk-drive-based storage can still be retained for archive data, providing a higher-capacity complement to flash for non-process-critical information.
Software technologies tailored to virtualization, such as caching, enable multiple applications to run on a single physical machine, improving the economics significantly; a simplified sketch of the caching idea follows Figure 3.
Figure 3: Placing active data near the CPU eliminates the data supply problem.
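The caching mentioned above can be illustrated with a minimal sketch. The code below is a generic, simplified host-side block cache, with hypothetical class and function names chosen for illustration rather than taken from any specific product: frequently read blocks are kept in a fast tier (the flash near the CPU), and the least recently used block is evicted when the cache fills.

```python
from collections import OrderedDict

# Minimal least-recently-used (LRU) block cache: a simplified illustration of
# host-side flash caching. All names here are hypothetical, for illustration only.
class BlockCache:
    def __init__(self, capacity_blocks, backing_read):
        self.capacity = capacity_blocks      # how many blocks fit in the fast tier
        self.backing_read = backing_read     # function that reads a block from slow storage
        self.cache = OrderedDict()           # block_id -> data, ordered by recency of use

    def read(self, block_id):
        if block_id in self.cache:           # cache hit: serve from the fast tier
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing_read(block_id)   # cache miss: fetch from slow storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:  # evict the least recently used block
            self.cache.popitem(last=False)
        return data

# Usage sketch: 1,024 cached blocks in front of a (stubbed) slow read path.
cache = BlockCache(1024, backing_read=lambda block_id: b"\x00" * 4096)
hot_block = cache.read(42)
```

Production caching software adds write handling, persistence and per-virtual-machine policies, but the basic economics follow from the same principle: repeated reads are served from fast, local flash instead of crossing the network to disk.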
Data Center Efficiency with Flash Close to the CPU
When flash memory and process-critical data are close to the CPUs, the CPUs can regain their role as fully utilized data processing engines. More importantly, giving CPUs the data they need to remain effective cuts a significant portion of traditional data center spending.
As previously outlined in this article, a large portion of the roughly $1.25 trillion spent on data center infrastructure is due to underutilized CPUs. Putting those CPUs back to work means that data center architects can achieve the same results with less infrastructure. Tallying the estimates of performance-oriented spending, as much as $260 billion of the $1.25 trillion could be saved across the industry, easily justifying the acquisition costs of new flash memory solutions.
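As a rough cross-check, that figure is simply the sum of the performance-oriented portions cited earlier. The short calculation below adds them up; all values are the approximations quoted in this article, with the power-and-cooling share rounded to about half of its market.

```python
# Rough cross-check of the ~$260 billion savings estimate: summing the
# performance-oriented spending portions cited earlier (figures in $ billions).
performance_oriented_spend = {
    "memory-rich servers":           20,
    "underutilized software":        120,
    "high-performance networking":   12,   # about half of the ~$24B networking market
    "performance-optimized storage": 20,
    "data supply chain services":    60,
    "power/cooling, idle servers":   30,   # about half of the ~$61B power and cooling market
}
print(f"Total: ~${sum(performance_oriented_spend.values())} billion")  # ~$262 billion
```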
Reducing the cost of a given configuration does not mean organizations will spend less on IT overall, however. More typically, when the cost of a certain type of processing falls, IT teams find additional ways to exploit the new potential of their systems. So even though the transition may involve some consolidation, overall opportunities for further technology improvements are expected to increase. Figure 4 shows the potential savings, estimated at approximately $260 billion.
Figure 4: Savings from eliminating the data supply problem.
Start Saving and Achieving Today
In addition to dramatic cost savings, the direct impact of using server-centric flash memory solutions is seen in the performance achievements of customers. A few examples include the following:
- SQL Server: 18x improvement in average throughput going from 200 rows per second to 5,000 rows per second.
- Virtual Desktop Infrastructure: 14x improvement by reducing the system footprint from 44U to 8U and 2.6x performance improvement for common desktop operations.
- Oracle: 40x improvement in query time from 2 hours to 3 minutes.
Customers looking to reduce the cost of data center infrastructure and increase performance for critical applications now have readily available options with server-centric flash memory solutions. Given the huge savings opportunities, it’s easy to find reasons to get started today.
About the Author
Gary Orenstein is VP Products for Fusion-io.