Cooling is a critical part of a data center’s infrastructure, and fortunately (or unfortunately), a number of approaches are available to maintain the necessary temperatures to keep your facility’s electronic equipment running. Although a complete discussion of all the design concepts associated with cooling infrastructure could easily constitute a large tome, a brief overview of the options can help you determine which route is best for your data center.
Heat: Waste Product
IT equipment consumes electricity to operate and leaves behind heat as its “waste product.” Because the data center is an enclosed facility, this heat—if not given an outlet—can build to the point that it damages or destroys servers and other electronic devices. Thus the need for cooling: you need to somehow remove the heat from your facility. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) provides temperature guidelines for data centers; today, ASHRAE allowable temperatures are as high as 104°F for certain classes of electronic equipment.
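Because essentially all the electricity a server draws ends up as heat, the cooling system must move that same amount of energy out of the room. A rough sizing sketch uses the standard sensible-heat relation for air; the 5 kW rack and 20°F temperature rise below are illustrative assumptions, not figures from this article.

```python
def required_airflow_cfm(it_load_watts, delta_t_f=20.0):
    """Estimate the airflow (in CFM) needed to carry away a given IT
    heat load, using the common sensible-heat rule of thumb for air
    at sea level:

        CFM ≈ BTU/hr / (1.08 × ΔT°F)

    where 1 W ≈ 3.412 BTU/hr and ΔT is the temperature rise from
    server inlet to server exhaust.
    """
    btu_per_hr = it_load_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

# A hypothetical 5 kW rack with a 20°F inlet-to-exhaust rise:
print(round(required_airflow_cfm(5000)))  # roughly 790 CFM
```

The inverse relationship with ΔT is why better hot/cold separation pays off: a larger allowable temperature rise means less air needs to be moved for the same heat load.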
The problem, of course, is that moving something (even heat) generally requires energy in addition to infrastructure, meaning it costs money in both capital and operational expenses. As companies seek to tame the power appetites of their facilities, cooling is a major target—thus, the industry is moving heavily toward so-called free cooling. Although free cooling is not as free as the name might indicate, it does tend to require less energy and infrastructure than more-traditional cooling methods.
Removing the Heat
So, what are the options for removing waste heat? The goal of cooling is to move heat from the indoor environment (the data center) to the outside, thereby maintaining a temperature conducive to proper functioning—and, preferably, long life—of servers and other equipment. The two essential options are to move heat using either air or a liquid (typically water or some form of refrigerant).
Air cooling offers some obvious benefits: air is everywhere, it doesn’t hurt IT equipment and it’s relatively easy to move. In addition, a handy characteristic of warm air is that it rises relative to colder air, providing some degree of separation between them; most air-based cooling designs use this property in their operation. Because this separation is not absolute, however, inefficiencies arise, particularly when warm air and cool air mix.
Liquid cooling can provide better and more targeted cooling, increasing both effectiveness and efficiency. Chilled water, for instance, can be delivered directly to a server rack, focusing the cooling effort right where it’s needed (at the rack or cabinet, instead of trying to maintain a certain temperature throughout the entire room, for instance). But liquid-based systems also pose some difficulties: leaks are a threat to IT equipment (particularly if the liquid is water), transport of chilled liquid can lead to condensation and more infrastructure is required, since the liquid must be contained (as opposed to air). Owing to these concerns, liquid cooling is more expensive than air cooling. But for certain high-density implementations, liquid cooling may be the only practical option.
Air Cooling Designs
Air cooling—in the traditional sense—involves the use of computer room air conditioners (CRACs) to convert warm air to cool air by removing heat to the outside. CRACs can be used in a number of basic configurations that focus on cooling the entire room, just a row or just a rack. Whole-room air conditioning situates CRACs such that a certain temperature is maintained fairly evenly throughout the room (in some sense, the same way you might want to cool a room of your house). Owing to the inefficiency of mixing warm and cool air, whole-room cooling designs have been refined to isolate warm air from cool air.
One commonly used approach is the raised floor: the CRACs supply cool air beneath the floor (taking advantage of cool air’s greater “weight”), and it is drawn upward by fans to cool servers and other equipment. Warm air bearing the waste heat from servers then rises, and the CRACs collect it from higher up in the room, cool it and return it under the floor to repeat the cycle. Heat is then transferred out of the facility.
To provide even greater efficiency, some designs implement hot aisles and cold aisles to further isolate warm air from cool air. Server air inlets all face the cold aisle, and exhaust is directed to a hot aisle. This type of design attempts to minimize mixing of hot air and cool air by blocking unused server slots, plugging cable holes and so forth. More-sophisticated designs may also involve walls between the racks and the ceiling to further isolate warm and cool air.
CRACs may also be located in a manner that focuses cooling on particular aisles (rather than the entire room), or even particular racks. Such designs seek to provide greater warm air/cool air isolation and more-targeted cooling, thereby increasing efficiency. These designs also tend to involve greater forethought and typically cost more than whole-room approaches.
Liquid Cooling Designs
Liquid cooling poses greater technical and budgetary challenges than air cooling, but its effectiveness makes it a virtual necessity for high-density applications. A liquid-based cooling design involves chillers that remove heat to the outside environment, often with the assistance of a cooling tower, to provide cool water or refrigerant. This liquid is then transported to the data center, whether directly to the racks or to a computer room air handler (CRAH). As mentioned above, liquid cooling requires more infrastructure—particularly, the lines that carry the liquid into the facility and possibly directly to the racks.
Free cooling, often implemented as air-side or water-side economization, limits the expense of running the chillers and compressors associated with traditional cooling approaches. Free cooling doesn’t do away with this infrastructure, but it does minimize its use.
Air-side economization, at its most basic level, can involve what amounts to “opening the windows” of the data center, using fresh air from the outside to cool the equipment. This approach poses some difficulties, such as the presence of contaminants and humidity variations, however. More-sophisticated designs use heat wheels or fixed-plate heat exchangers to transfer the heat outdoors without the risks of contamination associated with directly drawing in outside air. Likewise, water-side economization uses outside air, typically combined with evaporation effects, to cool the liquid without the need for chillers.
Because of ASHRAE’s recent increase in allowable temperatures for data centers, the ability to use free cooling has expanded. Operating a facility at a higher temperature generally means less energy spent on cooling and a greater temperature differential between outside and inside air—particularly during cooler months. For a good high-level discussion of free cooling and its relationship to traditional cooling infrastructure, see the APC whitepaper “Economizer Modes of Data Center Cooling Systems”.
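The effect of a higher allowable temperature can be sketched as a simple economizer check: outside air is usable whenever it sits comfortably below the desired supply temperature, so raising the setpoint directly increases the number of hours that qualify. The setpoint, margin and hourly readings below are illustrative assumptions; real economizer controllers also weigh humidity and contaminants, which this sketch ignores.

```python
def can_free_cool(outdoor_temp_f, supply_setpoint_f=75.0, margin_f=5.0):
    """Return True when outside air is cool enough to use directly:
    at least `margin_f` degrees below the desired supply temperature."""
    return outdoor_temp_f <= supply_setpoint_f - margin_f

# Count free-cooling hours over a set of hypothetical hourly readings:
hourly_temps = [55, 62, 68, 74, 81, 77, 66, 58]
free_hours = sum(can_free_cool(t) for t in hourly_temps)
print(free_hours)  # 5 of 8 hours qualify at a 75°F setpoint

# Raising the setpoint (per the higher ASHRAE allowable range)
# expands the qualifying hours:
print(sum(can_free_cool(t, supply_setpoint_f=85.0) for t in hourly_temps))
```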
Choosing the Right Design
Making the right design decision for your facility depends on a variety of factors, such as power density, room size, budget and so on. For low-budget implementations, air cooling with air-side economization is likely the best option, whereas high-density implementations may require liquid cooling, despite its higher costs. In between these extremes is a grey area that requires balancing a variety of considerations. Regardless of budget, however, air cooling can almost invariably benefit from some form of hot aisle/cold aisle setup to minimize mixing of exhaust with cool air.
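The low-density/high-density extremes described above can be captured in a toy decision rule. The kilowatt thresholds here are illustrative assumptions for the sake of the sketch, not industry standards; a real design decision would also weigh room size, budget and growth plans, as noted above.

```python
def suggest_cooling(rack_kw):
    """Toy mapping from rack power density to a cooling approach,
    following the article's extremes: air cooling at low density,
    liquid cooling at high density, targeted air in between.
    Thresholds (5 kW, 15 kW) are illustrative assumptions."""
    if rack_kw <= 5:
        return "air cooling (whole-room, with hot/cold aisles)"
    elif rack_kw <= 15:
        return "targeted air cooling (row- or rack-level CRACs)"
    else:
        return "liquid cooling"

print(suggest_cooling(3))   # low density: whole-room air
print(suggest_cooling(20))  # high density: liquid cooling
```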
Photo courtesy of Tom Raftery