New Categories for Data Center Best Practices (Part 3)

April 15, 2013

A Warm Front is Hitting Data Centers

Parts One and Two of this series explained how new energy-management methodologies have changed best practices for disaster recovery and power capping. In this final installment, the focus shifts to high-temperature ambient (HTA) data center operation. Once again, advances in energy management are accelerating this trend by giving IT and facilities teams the oversight and controls needed to minimize the associated risks.

Rising Temperatures

Heat—or dissipated power—has always been a battle in the data center. On one side, compute requirements continue to skyrocket, and server sprawl is generating more heat. On the other side, budgets are flat or shrinking for most organizations these days. Data center managers have to increase compute power without pushing the utility bill beyond budget constraints.

Being skilled problem solvers, IT and facilities teams have adopted creative approaches to satisfy both users and management. To meet compute demands, they add servers sparingly by virtualizing and optimizing server utilization. They boost rack densities to get the most out of the space and equipment.

As is often the case, these cures come with side effects. More compute power means more dissipated heat, and therefore the need for more airflow and cooling. These measures eat away at the energy budget that IT would like to allocate to compute power.

It is no surprise that data center managers are now experimenting with high-temperature operations.

Is HTA as Easy as Turning Up the Thermostat?

Historically, data centers have operated at 18°C to 20°C (64°F to 68°F), and as a result, data center equipment warranties have limited coverage to these relatively cool temperatures. Data center customers often include temperature clauses in service-level agreements (SLAs). The logic is sound: temperature can affect equipment reliability, and therefore appropriate management and limitation of temperature appear justified to maintain business continuity.

It probably started gradually. Picture a data center manager eyeing the thermostat. Simply pushing up the set point a couple of degrees couldn’t harm anything, right? And then, when the utility bill reflected savings, perhaps the temperature was pushed up another couple of degrees.

Today, several well-known companies including Facebook, Yahoo and Google are making headlines with their HTA operations at upwards of 27°C (80°F). Published proof-of-concept results have shown that HTA operation can slash cooling costs by up to 50 percent, which provides a huge impetus behind this trend.

Data center managers can start with simple and small thermostat-only changes. A single-degree (Celsius) increase in temperature has been shown to yield a 4 percent decrease in cooling costs for the data center. And since cooling can account for 44 percent of the power consumed in an unoptimized data center, the rewards make a small change a very worthwhile first step.
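
To make that arithmetic concrete, the short Python sketch below estimates the annual savings from a set-point increase. Only the 4 percent-per-degree and 44 percent figures come from the discussion above; the facility size, electricity rate and the choice to compound the per-degree reduction are illustrative assumptions.

```python
# Back-of-the-envelope savings from raising the cooling set point.
# Figures taken from the text: ~4% cooling-cost reduction per +1 degree C,
# cooling at ~44% of total power in an unoptimized facility.
# Facility size and electricity rate are illustrative assumptions.

TOTAL_FACILITY_KW = 1_000      # assumed total facility power draw
COOLING_FRACTION = 0.44        # share of power spent on cooling (figure from the text)
SAVINGS_PER_DEG_C = 0.04       # cooling-cost reduction per degree C (figure from the text)
PRICE_PER_KWH = 0.10           # assumed electricity rate, USD
HOURS_PER_YEAR = 8760

def annual_cooling_savings(delta_c: float) -> float:
    """Estimated annual savings (USD) from raising the set point by delta_c degrees C."""
    cooling_kw = TOTAL_FACILITY_KW * COOLING_FRACTION
    # Compound the per-degree reduction rather than adding it linearly.
    reduction = 1 - (1 - SAVINGS_PER_DEG_C) ** delta_c
    return cooling_kw * reduction * HOURS_PER_YEAR * PRICE_PER_KWH

for delta in (1, 2, 5):
    print(f"+{delta} C -> about ${annual_cooling_savings(delta):,.0f} per year")
```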

Taking It Further

Beyond a change of a degree or two, data center managers must take some precautionary measures. First, equipment warranties should be carefully reviewed, along with SLAs and compliance requirements. Some legacy systems require lower operating temperatures, and the potential power savings of HTA provide one more reason to phase out or relocate those systems. Apart from these rare cases, and in the absence of extremely stringent compliance requirements, most organizations can move toward HTA operation and capture the associated cost savings.

In fact, much of today's data center equipment is designed and warranted for reliable operation at 40°C (104°F)!

Beyond a thermostat-only change, data center managers should also consider other temperature-related measures. Hot- and cold-aisle separation, for example, can drive further savings, and replacing chillers with economizers yields dramatic results. Combined with an increase in operating temperature, these practices compound the savings. In one of Intel's larger data centers (900 servers), these retrofits contributed to a 67 percent annual power savings when the temperature was raised to 33°C. In our case, that meant saving $2.87M a year for the 10MW facility.
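
A quick sketch of why these measures compound rather than simply add: each one cuts the cooling energy that remains after the previous measures. The individual percentages in the example below are illustrative assumptions, not figures from the Intel retrofit described above.

```python
# Why combined measures "multiply": each one reduces the cooling energy that
# remains after the previous measures, so the reductions compound rather than add.
# The individual percentages below are illustrative assumptions only.

def combined_reduction(*fractions: float) -> float:
    """Overall fractional reduction when each measure cuts the remaining load."""
    remaining = 1.0
    for f in fractions:
        remaining *= (1.0 - f)
    return 1.0 - remaining

# Example: aisle containment (20%), economizers (40%), higher set point (25%).
print(f"{combined_reduction(0.20, 0.40, 0.25):.0%}")   # -> 64%, not the 85% a simple sum suggests
```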

HTA Combined With Holistic Energy Management

Simply monitoring utility bills is insufficient for achieving truly optimized energy efficiency on a daily basis. This is especially true for any organization that wants to push HTA to the limits without increasing risk to the business. A holistic energy-management solution can provide real-time visibility of the impacts of HTA on energy consumption, and it also introduces many other power-saving capabilities.

Best-in-class energy-management solutions start with non-invasive, fine-grained monitoring of temperature and power consumption throughout the data center. The data is logged, and it can drive reports as well as graphical, at-a-glance thermal and power maps of the facility. These maps are particularly useful for monitoring HTA initiatives, clearly showing correlations between temperature and power consumption.
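
As a rough illustration of what such a thermal map boils down to, the sketch below aggregates per-server readings into per-rack temperature and power summaries. The reading format, rack names and 28°C flag threshold are hypothetical; a real platform would pull this data through its own monitoring APIs.

```python
# Aggregating raw sensor readings into a simple per-rack "thermal map".
# The reading format, rack names and 28 C flag threshold are hypothetical;
# a real DCIM platform would expose this data through its own monitoring APIs.
from collections import defaultdict
from statistics import mean

readings = [
    # (rack, inlet_temp_c, power_w) -- illustrative sample data
    ("rack-01", 27.5, 310), ("rack-01", 29.1, 355),
    ("rack-02", 24.8, 280), ("rack-02", 25.2, 295),
]

by_rack = defaultdict(list)
for rack, temp, power in readings:
    by_rack[rack].append((temp, power))

for rack, samples in sorted(by_rack.items()):
    temps = [t for t, _ in samples]
    total_power = sum(p for _, p in samples)
    flag = "HOT" if max(temps) > 28.0 else "ok"   # assumed inlet-temperature threshold
    print(f"{rack}: avg {mean(temps):.1f} C, max {max(temps):.1f} C, {total_power} W [{flag}]")
```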

Holistic energy-management practices are essential for advancing sustainability, lowering carbon emissions (which are increasingly regulated) and, more generally, keeping data center operations aligned with the business without exceeding power limits.

The daily oversight and threshold controls enabled by an energy-management platform allow IT to protect its investments in equipment. Efficiently balancing services and workloads and avoiding power spikes are particularly crucial capabilities in an HTA data center, where servers operate with smaller safety margins.
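
The following is a minimal sketch of the kind of threshold control involved: when a rack's draw approaches its budget, a per-server cap is recommended before a spike erodes the reduced safety margin. The budget, headroom margin and even-split policy are all assumptions for illustration; in practice the cap would be applied through the energy-management platform's own power-capping controls.

```python
# A minimal threshold control: when a rack's draw approaches its budget,
# recommend a per-server power cap before a spike erodes the reduced safety margin.
# The budget, headroom margin and even-split policy are illustrative assumptions.
from typing import Optional

RACK_BUDGET_W = 8000
HEADROOM = 0.90          # start capping at 90% of the budget (assumed margin)

def recommended_cap(current_draw_w: float, server_count: int) -> Optional[float]:
    """Return a per-server cap in watts, or None if no action is needed."""
    ceiling = RACK_BUDGET_W * HEADROOM
    if current_draw_w < ceiling:
        return None
    # Spread the capped budget evenly across the rack's servers.
    return ceiling / server_count

print(recommended_cap(7600, 40))   # -> 180.0 W per server
print(recommended_cap(5000, 40))   # -> None (well under budget)
```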

Consider the case of the Department of Information and Communications, DaNang, Vietnam.[1] This organization wanted to increase data center compute capabilities for new cloud services in the most energy-efficient manner possible.

Team members carried out a proof of concept that combined an energy-management solution with HTA operation. The solution allowed them to evaluate and choose the best cooling approach, including validating “free cooling” during the appropriate months of the year. They also used its power-monitoring and control capabilities to limit power per server without affecting performance (thereby protecting SLAs), to balance workloads more efficiently, and to minimize idle server power draw.
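
One piece of that evaluation, validating free cooling, comes down to counting the hours when outside air is cool enough to use directly. The sketch below shows the idea with stand-in temperature data and an assumed 24°C limit; a real assessment would use local hourly weather records and the facility's actual supply-air target.

```python
# Counting the hours when outside air is cool enough for "free cooling".
# The temperature series is stand-in data and the 24 C limit is an assumed
# threshold tied to the facility's target supply temperature.

hourly_outdoor_c = [21.0, 22.5, 26.0, 28.5, 23.0, 19.5] * 120   # ~one month of stand-in data
FREE_COOLING_LIMIT_C = 24.0

eligible = sum(1 for t in hourly_outdoor_c if t <= FREE_COOLING_LIMIT_C)
print(f"Free cooling possible for {eligible} of {len(hourly_outdoor_c)} hours "
      f"({eligible / len(hourly_outdoor_c):.0%})")
```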

The combination of an increase in operating temperature and more-effective energy controls resulted in a 30W power savings per server, or an overall energy cost reduction of 50 percent for the DaNang government organization’s data center.

Conclusions

With potential savings of this magnitude, HTA best practices will soon be commonplace. The extent of the benefits that each data center can gain from the trend depends on the integrated energy-management capabilities paired with HTA. Fortunately, this typically represents only a gradual shift in current practices, and even small initial steps can make a positive difference in the energy budget.

Like the other new energy-management practices chronicled in this series, the HTA trend is a response to the energy crisis in the data center. Data centers are already estimated to consume 1.5 percent of the world's power, and the cost of server energy exceeds $27 billion and is expected to double by 2014. Pushing up the temperature to drive down cooling costs makes sense in every data center that can make the change without risk to the business.

Risk management is key, and it is a major reason why data centers are adopting energy-management solutions and practices. But it is the potential for cost savings that provides not only impetus but also business justification. These investments in technology and the practices they enable offer attractive ROI, with benefits that extend over many years. The previous two articles in this series point out many other gains that come with a holistic energy-management solution, most of which translate to attractive bottom-line savings.

With a business case this strong, the data center is definitely going to heat up.

Leading article photo courtesy of Hugovanmeijeren

About the Author

Jeff Klaus is the general manager of Data Center Manager (DCM) Solutions at Intel Corporation, where he has managed various groups for over 13 years. Klaus's team is pioneering power- and thermal-management middleware, which is sold through an ecosystem of data center infrastructure management (DCIM) software companies and OEMs. A graduate of Boston College, Klaus also holds an MBA from Boston University. He can be reached at Jeffrey.S.Klaus@intel.com.
