Rising energy costs, power densities and heat loads have made keeping data centers at safe operating temperatures while meeting efficiency goals more demanding than ever. To address this challenge, companies have explored many different data center cooling strategies in recent years.
Until relatively recently, most data center cooling schemes relied on so-called “chaos” air distribution methodologies, in which computer room air conditioning (CRAC) units around the perimeter of the server room pumped out massive volumes of chilled air that both cooled IT equipment and helped push hot server exhaust air towards the facility’s return air ducts.
However, chaos air distribution commonly results in a wide range of significant inefficiencies, including:
- Re-circulation: Hot exhaust air finding its way back into server air intakes, heating IT equipment to potentially harmful temperatures
- Air stratification: The natural tendency of air to settle into temperature-based layers can force set points on precision cooling equipment lower than recommended
- Bypass air: Cool supply air can join the return air stream without ever passing through the servers, wasting cooling capacity
To address these inefficiencies, businesses soon began adopting hot aisle/cold aisle rack arrangements, in which rows of racks are oriented so that hot air exhausts face only other exhausts and cool air intakes face only other intakes.
Although a step up from chaos air distribution, hot aisle/cold aisle strategies have proven only marginally more capable of cooling today’s increasingly dense data centers, largely because both approaches share a common fatal flaw: they allow air to move freely throughout the data center.
This flaw eventually led to the introduction of containment cooling strategies. Designed to organize and control air streams, containment solutions enclose server racks in sealed structures that capture hot exhaust air, vent it to the CRAC units and then deliver chilled air directly to the server equipment’s air intakes. The result is several important benefits:
- Improved cooling efficiency: By preventing the cold supply and hot return air streams from mixing, well-designed containment solutions eliminate wasteful re-circulation, air stratification and bypass airflow.
- Increased reliability: Eliminating re-circulation spares servers from exposure to potentially harmful warm air that can result in thermal shutdown.
- Lower energy spending: To counteract the effects of re-circulated exhaust air, legacy cooling schemes typically chill return air to 55ºF/12.78ºC. Containment-based cooling systems completely isolate return air and can safely deliver supply air at 65ºF/18.33ºC. As a result, containment cooling strategies typically reduce CRAC unit power consumption by an average of 16 percent (a rough arithmetic sketch of this effect follows this list).
- Greater floor plan flexibility: To generate the cooling convection currents that make hot aisle/cold aisle strategies work, companies must place their server racks in rigidly aligned, uniformly arranged rows. Containment strategies empower data center designers to position enclosures in any configuration that best fits their needs.
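To make the energy claim above concrete, here is a minimal, illustrative model of why raising the supply-air set point cuts CRAC power. The baseline efficiency (COP) and the roughly 2-percent-per-degree efficiency gain are assumed rule-of-thumb figures for the sketch, not vendor or Eaton data.

```python
# Illustrative only: rough effect of a higher supply-air set point on
# CRAC power. Baseline COP and the per-degree efficiency gain are
# assumed rule-of-thumb values, not measured or vendor figures.

IT_LOAD_KW = 500.0           # assumed IT heat load to be removed
BASELINE_COP = 3.0           # assumed cooling efficiency at a 55 F set point
COP_GAIN_PER_DEG_F = 0.02    # assumed fractional COP gain per degree F raised

def crac_power_kw(set_point_f: float, baseline_f: float = 55.0) -> float:
    """Electrical power needed to remove IT_LOAD_KW at a given set point."""
    cop = BASELINE_COP * (1.0 + COP_GAIN_PER_DEG_F * (set_point_f - baseline_f))
    return IT_LOAD_KW / cop

legacy = crac_power_kw(55.0)      # legacy scheme: chill return air to 55 F
contained = crac_power_kw(65.0)   # containment: deliver supply air at 65 F
print(f"Estimated CRAC power reduction: {1 - contained / legacy:.0%}")
# With these assumptions the model lands near the ~16% average cited
# above; actual savings vary with equipment, climate and load.
```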
Despite the revolutionary impact of containment strategies on data center cooling, most organizations continue to plan new computing facilities using old methodologies. First they design a building and devote some of it to the data hall, or white space. Then they fill the white space with as many server racks as it can hold.
Designing data centers in this traditional manner can create a wide range of problems. For example, an undersized or oversized power and cooling infrastructure can limit operating capacity or increase capital expenses unnecessarily. Additionally, inconveniently located structural elements can force containment ducts to bend and detour in ways that reduce their efficiency, and excessively narrow or insufficiently long room dimensions can complicate server rack placement and produce wasted floor space.
As a result, companies are increasingly recognizing the wisdom of designing data centers not from the walls in but from the server rack out.
Instead of building a room and then filling it with racks, they’re selecting the ideal racks for their applications and designing the room around them. Instead of under- or over-provisioning their new facility’s power and cooling resources, they’re installing the optimal infrastructure for the precise array of hardware and enclosures that they’ll be using. Instead of improvising solutions to efficiency-sapping structural defects, they’re preventing those defects from occurring in the first place. The end result is a data center that’s not only less costly to cool and maintain, but also more reliable and better suited to business requirements.
Although planning data centers from the server rack out runs counter to traditional design approaches, it consistently delivers better results. Here are a few essential steps if you’re interested in using this approach:
- 1. Collect requirements
The core advantage of designing a data center around the server rack is that it allows you to tailor the facility to your exact technical and business needs. Identifying those needs, then, should always be the first step in your planning process. In particular, be sure to evaluate your requirements in these vital areas:
Power density: This is the single most important requirement to estimate, because it drives several important decisions later in the design process. Collaborate with key IT vendors to understand how densities may evolve over the life cycle of the data center (a simple capacity-planning sketch follows this list).
Budget: A realistic understanding of both the capital and operating funds available to your new data center, and the level of cooling redundancy needed, will help you appropriately balance efficiency with cost.
Location: Where you construct your data center is also likely to influence critical design decisions. For example, organizations building in all but the hottest climates may wish to include an air-side economizer in their new facility.
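As a starting point for the power density estimate, the sketch below totals the IT load that the power and cooling infrastructure must support. All rack counts, per-rack densities and the growth factor are invented example values; real figures come from your requirements gathering and IT vendors.

```python
# A minimal capacity-planning sketch for step 1 (all numbers are invented
# example values): estimate the design IT load, with headroom for the
# density growth identified with IT vendors.

RACK_PLAN = {
    # rack type: (count, average kW per rack)
    "standard compute": (60, 8.0),
    "high-density": (12, 20.0),
}
DENSITY_GROWTH = 1.25    # assumed 25% density growth over the life cycle

day_one_kw = sum(count * kw for count, kw in RACK_PLAN.values())
design_kw = day_one_kw * DENSITY_GROWTH

print(f"Day-one IT load: {day_one_kw:,.0f} kW")
print(f"Design IT load:  {design_kw:,.0f} kW (before redundancy margin)")
```

The design load, together with the redundancy level identified under budget, is what the cooling and power infrastructure should be sized against.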
- 2. Decide between row- and rack-level cooling
Though either option is capable of handling high power densities, rack-level designs tend to be both a little more powerful and a little more costly. That generally makes them a better choice for companies anticipating extreme power densities, provided they can afford the greater upfront investment.
- 3. Decide between passive and active containment
Most containment strategies rely mainly on passive exhaust systems, in which the server hardware’s built-in exhaust fans do most of the work of pulling supply air in and driving return air out. Sometimes, however, a phenomenon called backflow, or back pressure, inhibits airflow in ways that render server fans alone incapable of keeping the return and supply air streams moving properly. In such cases, an active exhaust system equipped with stronger, supplemental fans must be used. When deciding, always keep these points in mind (a simple screening sketch follows this list):
- Most row-based cooling solutions employ passive exhaust designs exclusively
- It is rare for every enclosure in a rack-level cooling scheme to require active containment. Companies using rack-level cooling are generally better off making passive containment their default choice, and then installing active exhaust products in specific spots prone to backflow.
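As a rough illustration of that default-plus-exceptions approach, the hypothetical helper below flags individual enclosures as candidates for active exhaust. The pressure threshold and the survey readings are illustrative assumptions; real limits come from the containment manufacturer and the server vendors’ fan specifications.

```python
# Hypothetical screening helper: flag enclosures whose measured exhaust
# back pressure exceeds what server fans alone can overcome. The 0.01
# inch-of-water limit and the survey readings are illustrative, not a
# manufacturer specification.

BACK_PRESSURE_LIMIT_IN_H2O = 0.01   # assumed allowable back pressure

def needs_active_exhaust(back_pressure_in_h2o: float) -> bool:
    """True if this enclosure is a candidate for supplemental exhaust fans."""
    return back_pressure_in_h2o > BACK_PRESSURE_LIMIT_IN_H2O

# Example survey of rack-level enclosures (illustrative readings)
survey = {"rack-A1": 0.004, "rack-A2": 0.018, "rack-B1": 0.006}
for rack, reading in survey.items():
    mode = "active" if needs_active_exhaust(reading) else "passive"
    print(f"{rack}: {reading:.3f} in. H2O -> {mode} exhaust")
```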
- 4. Decide between cold aisle and hot aisle containment
Though companies must weigh a number of issues when choosing between cold aisle and hot aisle containment, the following two considerations can help simplify the decision in some cases:
- Since cold aisle containment allows hot return air to circulate freely, temperatures in the server room can quickly become uncomfortable for data center employees and visitors. Hot aisle containment fills the server room with cool supply air, making for better work and observation conditions. As a result, organizations that expect people to be present in the server room on a regular basis often prefer hot aisle containment systems.
- Air-side economization is generally an option only for data centers that utilize hot aisle containment, making it the row-level containment strategy of choice for organizations eager to capitalize on the savings that air-side economizers deliver.
- 5. Perform a computational fluid dynamics study
Once you've determined what kinds of racks, cooling and containment solutions you will be using and designed a server room around them, your next step should be conducting a computational fluid dynamics (CFD) analysis. Such studies use sophisticated software to model the flow of hot and cold air in potential data center layouts. CFD assessments can help you eliminate flaws and inefficiencies in a data center design before construction begins, when you can far more easily and affordably fix them. It is also a good idea to perform a CFD analysis whenever significant changes are made within the data center.
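A CFD study models local airflow in detail, but a first-order mass-balance check can catch gross under-provisioning even earlier. The sketch below uses the standard sensible-heat relation for air (Q in BTU/hr ≈ 1.08 × CFM × ΔT°F, with 1 kW ≈ 3,412 BTU/hr); the per-rack load and temperature rise are assumed example values.

```python
# First-order airflow check to complement (not replace) a CFD study.
# Standard sensible-heat relation for air: Q (BTU/hr) = 1.08 * CFM * dT(F),
# with 1 kW = 3,412 BTU/hr. The example inputs are assumptions.

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow needed to carry away a given IT load at a given temp rise."""
    return it_load_kw * 3412.0 / (1.08 * delta_t_f)

rack_kw = 12.0     # assumed per-rack IT load
delta_t = 20.0     # assumed intake-to-exhaust temperature rise, degrees F
print(f"Per-rack airflow: {required_cfm(rack_kw, delta_t):,.0f} CFM")
# About 1,900 CFM per 12 kW rack at a 20 F rise; supply air and containment
# duct capacity should meet or exceed this total. CFD is still needed to
# find local hot spots and re-circulation paths.
```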
Designing a building from the inside out violates centuries of received architectural wisdom. But by making the selection of server racks the starting point of the data center planning process instead of the end point, organizations can improve reliability and save money by ensuring that their power, cooling and IT infrastructure are all optimally suited to their requirements.
For example, Eaton recently designed a rack-based containment system for a major telecommunications company, with aisle and rack containment, under-floor partitioning, rack hygiene accessories and CRAC collars. As a result, the company was able to shut down a total of 137 CRAC units and reduce energy consumption by 12.9 million kilowatt hours, for overall savings of $1.39 million annually and a complete return on investment in approximately one year.
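As a sanity check on those figures, the arithmetic below derives the energy rate they imply; the project cost is inferred from the roughly one-year payback, since neither number is stated directly in the case study.

```python
# Back-of-the-envelope check of the case-study figures above. The energy
# rate and the project cost are derived/inferred, not stated in the source.

energy_saved_kwh = 12_900_000    # kilowatt hours saved, from the case study
annual_savings = 1_390_000       # dollars per year, from the case study

implied_rate = annual_savings / energy_saved_kwh
print(f"Implied energy rate: ${implied_rate:.3f}/kWh")   # roughly $0.108/kWh

# A complete ROI in ~1 year implies a project cost near the annual savings.
print(f"Inferred project cost: about ${annual_savings:,.0f}")
```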
Every organization, then, should begin the road to a new data center by carefully analyzing its current and future needs and rigorously evaluating potential cooling and containment strategies. Embracing that approach to data center design will empower organizations to improve the efficiency and cost-effectiveness of their computing facilities.