Data Center Energy: Past, Present and Future (Part One)

September 12, 2012

An Audit of Data Center Power Efficiency—or Inefficiency

This article is part one of a three-part series on energy management in the data center. (See parts two and three.)

Today IT organizations face extreme pressure to cut costs and drive efficiency at every level. Approximately $24.7 billion is wasted each year on server management, power, and cooling for unused systems in data centers. Behind that figure is a sobering average: roughly 15 percent of a typical data center's server population sits completely idle, yet it continues to draw power and consume cooling capacity.

On average, annual energy costs run $800 or more for a single 400W server. Do the math across thousands of servers and the scale of data center energy inefficiency becomes impossible to ignore, especially since idle servers are only one dimension of the problem.
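
As a back-of-the-envelope sketch of where a number like that comes from (the utility rate and facility overhead below are assumptions for illustration, not figures from the article), the annual cost of one 400W server can be estimated as follows:

```python
# Back-of-the-envelope annual energy cost for one 400W server.
# The utility rate and PUE below are assumptions for this sketch,
# not figures from the article.

SERVER_WATTS = 400           # average draw of the server
HOURS_PER_YEAR = 24 * 365    # running continuously
RATE_PER_KWH = 0.12          # assumed utility rate, USD per kWh
PUE = 1.9                    # assumed facility overhead (cooling, distribution)

it_kwh = SERVER_WATTS / 1000 * HOURS_PER_YEAR   # ~3,504 kWh of IT load
total_kwh = it_kwh * PUE                        # ~6,658 kWh delivered to the site
annual_cost = total_kwh * RATE_PER_KWH          # ~$799 per year

print(f"Estimated annual energy cost: ${annual_cost:,.0f}")
```

Different rates and cooling efficiencies move the total up or down, but the result lands in the same neighborhood as the $800 figure, and none of that energy does useful work if the server is idle.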

Crude Assessments

Before you can optimize energy efficiency, you have to understand your nominal power and thermal conditions under the range of workloads supported by your data center resources.

In the past, most data center managers had only minimal energy-monitoring capabilities. They often assessed power consumption and cooling efficiency by relying solely on return air temperature at the air-conditioning units. When that didn't provide enough insight, they manually collected additional power data on a per-rack basis. To estimate future growth, predictive models were then used to translate these static measurements into projections of future energy consumption. Such models can deviate from reality by as much as 40 percent, however.

That margin of error makes such modeling ineffective for predicting power spikes and other problematic events that lead to equipment failures and service disruptions. Clearly, a more accurate method is needed to assess and predict energy use and, ultimately, to avoid overprovisioning power and cooling.

Calculating Power vs. Measuring Power

The most common shortcut is to calculate power requirements from the specifications published by equipment vendors. Although expedient, this approach falls short of most efficiency goals because of the conservative nature of the vendor data. Manufacturers notoriously lean toward worst-case estimates, and when planning is based on this data alone, overprovisioning is the understandable result. Opinions also vary widely on how to de-rate the vendor figures, with data center managers citing reductions of 20 to 50 percent as common practice.
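
As a minimal sketch of that practice (the rack inventory, nameplate values, and 35 percent de-rate below are hypothetical), a rack power budget built from vendor specifications might look like this:

```python
# Hypothetical rack power budget built from vendor-published (nameplate)
# figures, then de-rated. The inventory and the 35% de-rate are assumptions
# for illustration; the article notes de-rates of 20-50% are common practice.

rack_inventory = {
    # device type: (nameplate watts, quantity) -- hypothetical values
    "1U web server": (450, 20),
    "2U database server": (750, 8),
    "top-of-rack switch": (150, 2),
}

DERATE = 0.35  # assume measured draw runs ~35% below nameplate

nameplate_total = sum(watts * qty for watts, qty in rack_inventory.values())
derated_total = nameplate_total * (1 - DERATE)

print(f"Nameplate budget: {nameplate_total:,} W")
print(f"De-rated budget:  {derated_total:,.0f} W")
```

The spread between those two numbers is exactly the guesswork the article describes: without measured data, there is no way to know whether 35 percent, 20 percent, or 50 percent is the right correction.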

Some skeptical data center managers get out power meters to verify the specifications and adjust the data to their specific configurations and workloads; those who can afford another layer of hardware deploy intelligent power distribution units (PDUs). The PDUs can supply a steady stream of power data, but that data still must be collected and analyzed. Even then, the data is focused entirely on power and says nothing about airflow or temperature patterns throughout the data center.
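
To make that collect-and-analyze step concrete, the workflow amounts to sampling each PDU on a schedule and logging the readings for later analysis. The sketch below is hypothetical: the query function is a stand-in, not any particular vendor's interface.

```python
import csv
import random
import time
from datetime import datetime, timezone

POLL_INTERVAL_S = 60  # assumed one-minute sampling interval


def read_pdu_watts(pdu_name):
    """Stand-in for the PDU's real query interface (SNMP, Modbus, REST, ...).

    The actual call depends entirely on the hardware deployed; a random value
    is returned here so the sketch runs on its own.
    """
    return random.uniform(2000.0, 4000.0)


def log_pdu_readings(pdu_names, outfile="pdu_power_log.csv"):
    """Append one timestamped power reading per PDU each polling interval."""
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            stamp = datetime.now(timezone.utc).isoformat()
            for name in pdu_names:
                writer.writerow([stamp, name, round(read_pdu_watts(name), 1)])
            f.flush()
            time.sleep(POLL_INTERVAL_S)


# Example: log_pdu_readings(["pdu-row1-a", "pdu-row1-b"])
```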

An Expanded Continuous Data Set

In the quest for a more accurate energy map, many data center teams acknowledge the need to continuously aggregate more data points and to take advantage of automation to minimize the time required for assessments. In response to these customer demands and energy trends, data center equipment vendors began integrating capabilities that provide power and thermal data without the need to employ meters or absorb the cost of intelligent PDUs. Today, data center managers can collect server inlet temperatures and power-consumption readings for rack servers, blade servers, and any deployed PDUs, as well as the uninterruptible power supplies (UPSs) that serve those servers.

Energy-monitoring solutions are now available to aggregate this data and can give data center managers a detailed view of the conditions at the individual server or rack level. The data can drive a picture of the entire data center and also support drilling down to understand the energy requirements and usage patterns for groups of users or physical resources.

By continuously collecting data in real time, it is possible to generate thermal maps of the data center and uncover hot spots and airflow inefficiencies before they lead to failures. Data logs can also be analyzed to identify trends and fine-tune power and cooling systems accordingly. The primary benefit of a fine-grained, real-time data collection and aggregation solution is the ability to avoid designing data centers on the basis of worst-case situations.
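
A minimal sketch of that kind of analysis is shown below. The rack names, readings, and 27 °C inlet threshold are hypothetical (loosely in line with commonly cited inlet-temperature guidance); real solutions run this continuously across thousands of sensors.

```python
# Hypothetical aggregation of per-rack server inlet temperatures into a
# simple "thermal map" and hot-spot report. Rack names, readings, and the
# 27 C threshold are illustrative assumptions, not data from the article.
from statistics import mean

INLET_HOT_SPOT_C = 27.0  # assumed alert threshold for server inlet temperature

# rack -> recent inlet-temperature samples (Celsius) from its servers
inlet_samples = {
    "rack-A1": [22.1, 22.4, 23.0],
    "rack-A2": [24.8, 25.1, 25.5],
    "rack-B1": [27.9, 28.3, 28.6],   # likely hot spot
}

for rack, temps in sorted(inlet_samples.items()):
    avg = mean(temps)
    status = "HOT SPOT" if avg > INLET_HOT_SPOT_C else "ok"
    print(f"{rack}: avg inlet {avg:.1f} C  [{status}]")
```

Flagging rack-B1 before its servers start throttling or failing is the kind of early warning described above, and the same logs can be trended over weeks to fine-tune cooling set points.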

Solid Solutions for Holistic Data Center Energy Management

Armed with the ability to automatically collect and aggregate power and thermal measurements, data center managers and facility teams quickly recognized the value of emerging middleware solutions and tools that go further than passive monitoring. Energy-management solutions have evolved to enable the following:

  • Proactive threshold detection to identify and correct problems and adjust conditions, extending the life of data center assets
  • Introduction of controls that enforce policies and allocations designed for optimized service and energy efficiency
  • Dynamic adjustment of server configurations during outages
  • Billing for services with a chargeback model that takes total energy consumption into account (see the sketch after this list)
  • Integration of energy management with systems- and facilities-management consoles and methodologies
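
As a minimal sketch of that energy-aware chargeback idea (the tenants, consumption figures, overhead factor, and rate below are hypothetical), the core of such a billing model is simply attributing measured energy to each group and pricing it:

```python
# Hypothetical energy-based chargeback: attribute each group's measured IT
# energy plus its share of facility overhead, then price it. Tenants, kWh
# figures, PUE, and rate are assumptions for illustration.

BLENDED_RATE_PER_KWH = 0.12   # assumed cost of delivered energy, USD
PUE = 1.8                     # assumed facility overhead multiplier

monthly_it_kwh = {            # measured IT energy per business unit
    "web team": 12_400,
    "analytics": 8_900,
    "internal IT": 4_300,
}

for tenant, it_kwh in monthly_it_kwh.items():
    billed_kwh = it_kwh * PUE
    charge = billed_kwh * BLENDED_RATE_PER_KWH
    print(f"{tenant}: {billed_kwh:,.0f} kWh -> ${charge:,.2f}")
```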

Conclusions

Data center teams face energy challenges in every direction. Users want 100 percent uptime. Management wants lower costs and sustainable practices. Facilities teams need to divert power to other site needs. And utility companies are saying they can't meet the service levels required for expanding data centers, even if companies can pay the escalating prices for energy.

The days of overprovisioning are over, and holistic energy-management solutions have arrived on the market. Today’s challenge is to accurately define the main requirements for an energy management solution and to choose a solution that puts a data center on the lowest-risk path, considering the current trends and energy outlooks.

About the Author

Jeffrey S. Klaus is the director of Data Center Manager (DCM) at Intel Corporation.

Next week, in part two of this series, Jeff surveys the trends and industry changes that are affecting today's data centers and accelerating the need for energy efficiency. The series will conclude with part three, which outlines best practices built on the latest approaches, along with examples of the benefits being gained by enterprises pioneering next-generation energy management.

Leading article photo courtesy of bandarji
