Trends in Data Center Cooling

February 5, 2013

As data centers consume more energy to meet rising demand from customers (both businesses and consumers), efficiency is becoming a greater focus. The prospect of rising energy prices and more-stringent government regulations is also driving data center operators to reduce consumption (or at least the growth thereof). Although the industry has made great strides, work still remains to reduce the energy demands of cooling.

Don’t Ignore the Basics of Cooling

Since plenty has been written on standard approaches to more-efficient cooling—such as the use of outside air (“free” cooling), raising the thermostat in line with ASHRAE’s updated guidelines and manufacturer recommendations, implementation of hot and cold aisles to limit mixing of cool air and exhaust, plugging of unused cable knockouts, and so on—they will not be emphasized here. Nevertheless, many data centers can still reap tremendous returns on little or no capital investment in these approaches. In some cases, depending on the particular company’s budget, standard efficient cooling practices may be enough—for instance, if a data center is located in a relatively cool climate that can use air-side economization (free cooling) year round.

In some cases, however, the local climate, manufacturer-recommended operating temperatures for equipment, power density in racks or other factors may necessitate a more sophisticated cooling strategy—yet one that still focuses on efficiency. Or a company may simply be seeking even more gains by cutting power spent on cooling. Here’s a look at some of the approaches that are making headway in the industry.

  • Equipment updates. This strategy may not be the first that comes to mind when you think of improving your facility’s cooling, but the IT equipment itself is the primary source of data center heat. In other words, cooling is a solution to a problem: reduce the problem and you reduce the need for the solution. ARM’s efforts to enter the server market, which is currently dominated by the x86 architecture, are a response to the industry’s desire for greater efficiency. The IP company’s strategy is to adapt its low-power mobile-device technology for use in servers. But x86 vendors are also improving power efficiency through finer silicon process technologies and a greater emphasis on efficiency in the processor design phase. Data center operators can exploit these efforts by replacing older servers with newer, more power-efficient models. These upgrades can deliver reduced power for the same performance, greater performance for the same power or something in between. In any case, greater efficiency at the server level means less cooling is needed, reducing the data center’s overall power consumption (the first sketch after this list puts rough numbers on this effect).
  • Reduced floor space. This may sound counterintuitive, as demand for IT services is rising steadily, but not every company is expanding its data center—some are cutting in-house services. If the footprint of your equipment is shrinking, maybe owing to increasing use of cloud-based services, you may have a growing “blank” area in your data center. If so, you could also be cooling space that doesn’t need it, reducing efficiency. By walling off this excess space (and possibly using it for other purposes), you can reduce the area you must cool and thereby save cost.
  • Chimneys over racks. Chimneys help remove hot exhaust from servers more efficiently by reducing mixing with chilled air. According to Ramzi Namek, president of engineering at Total Site Solutions, “Chimney racks can handle up to 30kW per rack.” This capability, along with the lack of complex infrastructure such as that required by liquid cooling solutions, makes the chimney approach an attractive option. Furthermore, chimneys can be fashioned by facilities personnel—they do not require specialized technology.
  • Liquid cooling. The use of liquid instead of air to cool IT equipment is nothing new, but it has yet to see broad use. In most cases, air cooling is sufficient, and it generally involves less infrastructure than liquid-based cooling systems. But momentum is building for liquid cooling owing to its efficiency benefits; Intel, for instance, has successfully concluded a year-long test of immersed servers, demonstrating both the power benefits and the absence of damage to components. The benefits of liquid cooling accrue particularly for high-performance computing applications, where racks can reach power densities exceeding 30kW. Liquid cooling will likely receive a boost if the government imposes efficiency regulations on data centers, whether directly or indirectly.
  • Variable-speed drives for CRACs/CRAHs. The ability to tailor the cooling infrastructure’s output to conditions is critical to efficiency. David J. Cappuccio, research VP at Gartner, notes that “On many older [CRACs and CRAHs] the fans spin at a continuous rate and the airflow remains continuous, regardless of how much is actually needed.” By implementing variable-speed drives, newer units can respond to changing conditions in the data center and provide a more regulated cooling output, increasing efficiency. Cappuccio cites the potential for up to 15% lower energy consumption owing to improved CRAC/CRAH technology (the fan-power sketch after this list shows why modest speed reductions yield outsized savings).
  • Data center infrastructure monitoring. How high can you raise your thermostat without causing hotspots that will harm your equipment? You may never know unless you either take measurements or wait for a failure to occur. But a measurement taken now doesn’t account for a rack over here running at high capacity while personnel performing maintenance over there obstruct the airflow in just the right way to cause a problem. You can’t plan for every contingency, but by monitoring your facility, you can not only keep a virtual eye out for problems but also identify areas for improvement. All-out data center infrastructure management (DCIM) deployments may not fit the budget of every data center operator, but monitoring solutions aimed at cooling (even if they’re cobbled together) can be a great benefit; a minimal monitoring sketch appears after this list.
  • Computational fluid dynamics modeling. CFD has also been around for some time, but as the need for efficiency grows (and as the costs of CFD fall), its use as a tool for improving data center design will grow. As with other specialized technologies, however, each company must assess whether the returns on CFD analysis merit the costs.
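
To put rough numbers on the equipment-update point above, the short Python sketch below estimates how a server refresh reduces both IT and cooling power, assuming cooling power scales as a fixed fraction of IT load. The 0.5 cooling fraction and the kilowatt figures are illustrative assumptions, not data from this article or any particular facility.

    # Rough, illustrative estimate of how a server refresh affects cooling load.
    # Assumption (hypothetical, not from the article): cooling power is roughly
    # a fixed fraction of IT power.

    def cooling_power_kw(it_load_kw: float, cooling_fraction: float = 0.5) -> float:
        """Estimate cooling power as a fixed fraction of IT load (simplified model)."""
        return it_load_kw * cooling_fraction

    # Hypothetical example: a refresh cuts IT load from 400 kW to 320 kW
    # while delivering the same performance.
    old_it, new_it = 400.0, 320.0
    saved_cooling = cooling_power_kw(old_it) - cooling_power_kw(new_it)
    saved_total = (old_it - new_it) + saved_cooling

    print(f"Cooling power saved: {saved_cooling:.0f} kW")   # 40 kW
    print(f"Total power saved:   {saved_total:.0f} kW")     # 120 kW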
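
The savings Cappuccio cites stem largely from the fan affinity laws: fan power falls roughly with the cube of fan speed, so even modest reductions in airflow yield large power savings. The sketch below is a minimal illustration of that relationship; the 10 kW full-speed figure is an assumed value for illustration only.

    # Illustration of the fan affinity laws for a variable-speed CRAC/CRAH fan.
    # Fan power scales roughly with the cube of fan speed, so modest reductions
    # in airflow produce disproportionate power savings. The baseline figure
    # below is a hypothetical example, not a measurement.

    FULL_SPEED_POWER_KW = 10.0  # assumed power draw of one fan at 100% speed

    def fan_power_kw(speed_fraction: float) -> float:
        """Approximate fan power at a given fraction of full speed (affinity law)."""
        return FULL_SPEED_POWER_KW * speed_fraction ** 3

    for pct in (100, 90, 80, 70):
        p = fan_power_kw(pct / 100)
        print(f"{pct:3d}% speed -> {p:4.1f} kW ({p / FULL_SPEED_POWER_KW:.0%} of full power)")
    # Running at 80% speed needs only about half the power of full speed.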
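
A monitoring setup need not be elaborate to be useful. Below is a minimal, hypothetical sketch of checking rack-inlet temperatures against an alert threshold; the read_inlet_temps stub, the sample readings and the 27 °C limit are all assumptions for illustration, not part of any particular DCIM product.

    # Minimal, hypothetical hotspot check for rack-inlet temperatures.
    # In a real deployment, readings would come from networked sensors or
    # IPMI/SNMP; here they are stubbed out with static example values.

    INLET_LIMIT_C = 27.0  # assumed alert threshold for rack-inlet temperature

    def read_inlet_temps() -> dict[str, float]:
        """Stub for whatever sensor source a facility actually uses."""
        return {"rack-A01": 24.5, "rack-A02": 28.3, "rack-B07": 25.1}

    def find_hotspots(readings: dict[str, float], limit: float = INLET_LIMIT_C) -> list[str]:
        """Return the racks whose inlet temperature exceeds the limit."""
        return [rack for rack, temp in readings.items() if temp > limit]

    hot = find_hotspots(read_inlet_temps())
    if hot:
        print("Inlet temperature above limit:", ", ".join(hot))  # rack-A02
    else:
        print("All rack inlets within limits.")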

Data center cooling today isn’t focusing on revolutionary new technologies; rather, it is employing existing technologies in better ways to improve efficiency. Free cooling and higher operating temperatures do much to reduce data center power consumption—assuming companies implement these measures. But where mechanical cooling is needed, whether in the form of air conditioning or movement of liquid, efficiency is a priority as energy costs rise and regulations loom. The above list highlights several industry trends relating to cooling beyond the most common approaches.

About Jeff Clark

Jeff Clark is editor for the Data Center Journal. He holds a bachelor’s degree in physics from the University of Richmond, as well as master’s and doctorate degrees in electrical engineering from Virginia Tech. An author and aspiring renaissance man, his interests range from quantum mechanics and processor technology to drawing and philosophy.
