With companies relying on IT more heavily than ever, data center capacity requirements are steadily rising. Unfortunately, so are the costs associated with data center construction and operation. As a result, organizations are increasingly searching for ways to reduce the size of their data centers without compromising their ability to meet business requirements.
The good news is that companies interested in following the trend toward smaller data centers can employ a wide range of techniques to shrink their computing footprint. IT consolidation is best approached through a combination of three primary strategies: design-level, white space and grey space.
Parts one and two of this series discussed design-level and white space tactics for reducing the size of your data center without shrinking its operating capacity, availability or efficiency. The following recommends grey space consolidation strategies and reflects on the three most important consolidation principles to ensure your data center operates with maximum cost effectiveness and reliability.
1. Deploy modular, scalable UPSs and power distribution: Today, data centers often rely on large, inflexible UPS products that provide more capacity than required while operating less efficiently and occupying more space than necessary. Modular UPSs eliminate that problem: they allow a UPS deployment to be scaled to meet current needs and expanded incrementally as those needs increase.
For example, some modular UPSs provide up to 50 or 60 kW of capacity in 12 kW building blocks that fit in standard equipment racks. As requirements increase, another 12 kW unit can simply be plugged in. Even the largest UPS systems can be made modular in 200 to 300 kW increments. That’s a scalable and efficient approach to keeping up with escalating power needs that lowers upfront capital spending and saves room in the data center.
Modular power distribution components offer similar economic and space-saving benefits. Sometimes called rack power modules, such products offer scalable “plug and power” distribution from a UPS or panel board to an enclosure-based PDU, eliminating wasted excess capacity and significantly reducing cabling requirements.
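The capacity-planning math behind the modular approach is straightforward. The sketch below uses the 12 kW building-block size from the example above; the five-year load profile and the 60 kW frame limit are invented figures for illustration only:

```python
# Sketch: modular vs. monolithic UPS sizing.
# The 12 kW block size comes from the example above; the load
# profile and frame limit are assumed for demonstration.
import math

BLOCK_KW = 12          # capacity of one modular UPS building block
FRAME_LIMIT_KW = 60    # capacity of a monolithic UPS sized for peak (assumed)

def blocks_needed(load_kw: float) -> int:
    """Smallest number of 12 kW blocks that covers the load."""
    return math.ceil(load_kw / BLOCK_KW)

# Assumed IT load growth over five years (kW)
yearly_load = [18, 25, 33, 41, 55]

for year, load in enumerate(yearly_load, start=1):
    n = blocks_needed(load)
    modular_kw = n * BLOCK_KW
    # Utilization: how much of the installed capacity is actually used
    print(f"Year {year}: load {load} kW -> {n} blocks "
          f"({modular_kw} kW installed, "
          f"{load / modular_kw:.0%} utilized vs. "
          f"{load / FRAME_LIMIT_KW:.0%} monolithic)")
```

In the early years the modular deployment runs at far higher utilization than a day-one monolithic unit, which is where the efficiency and capital savings come from.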
2. Employ alternative energy storage technologies: The lead-acid batteries most UPSs use are bulky.
Employing alternative energy storage systems that require less space, such as flywheels, can help data centers reduce their physical footprint. A flywheel is a mechanical device typically built around a large rotating disk. During normal operation, electrical power spins the disk rapidly. When a power outage occurs, the disk continues to spin on its own, generating DC power that a UPS can use as an emergency energy source. As the UPS consumes that power, the disk gradually loses momentum, producing less and less energy until eventually it stops moving altogether.
Flywheels typically deliver only 30 seconds of standby energy, versus the five to 15 minutes of power generally provided by a lead-acid battery. However, research shows that more than 95 percent of utility outages last just a few seconds, so using a flywheel as a complement to batteries during brief power interruptions can save data center floor space by reducing the number of lead-acid batteries needed to protect server infrastructure.
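As a rough sanity check on that claim, the sketch below computes what fraction of an outage log a flywheel alone could bridge. The 30-second ride-through comes from the figure above; the outage durations are an invented sample for illustration:

```python
# Sketch: fraction of outages a flywheel alone can ride through.
# The 30 s ride-through is taken from the text; the outage log
# is a made-up sample for illustration.
FLYWHEEL_RIDE_THROUGH_S = 30

# Hypothetical outage durations in seconds: mostly momentary events,
# plus one long outage that requires a generator start
outage_log_s = [1, 2, 2, 3, 1, 5, 4, 2, 1, 3,
                2, 1, 2, 4, 1, 2, 3, 1, 2, 600]

covered = sum(1 for d in outage_log_s if d <= FLYWHEEL_RIDE_THROUGH_S)
print(f"Flywheel alone bridges {covered}/{len(outage_log_s)} outages "
      f"({covered / len(outage_log_s):.0%})")
# The remaining long outages are handed off to batteries or a generator.
```

With a distribution like this, the flywheel absorbs the frequent short interruptions, so the battery string only needs to be sized for the rare extended outage.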
3. Utilize air-side and water-side economization: Most data centers collect hot exhaust air and return water, chill it, and then re-circulate it. Facilities that utilize “air-side economization,” however, simply pump hot internal air out of the building and pipe in cool external air. “Water-side economization” is a similar process in which return water is pumped through an external radiator or cooling tower rather than a chiller. Both techniques can significantly lower cooling infrastructure needs, reducing the amount of space required for CRAC units and other cooling resources. Moreover, studies have shown them to be viable options for at least part of the day even in mild or warm climates.
4. Raise server inlet temperatures: Data center operators typically keep internal temperatures at roughly 70°F (21°C). According to recent studies by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), however, most data centers can safely operate at temperatures as high as 80°F (27°C) and up to 60 percent relative humidity. Raising data center temperatures and humidity even a small amount can dramatically reduce the need for space-consuming cooling infrastructure.
Every business should examine its specific needs and constraints before deciding which space-saving strategies to employ. Broadly speaking, however, most organizations seeking to shrink the scale of their data centers will benefit from following these basic principles:
1. Make as much use of server and storage virtualization as possible, to fit more computing power and data into less space.
2. Look for additional ways to increase power and processing densities through the use of technologies such as blade servers, passive cooling and 400V power.
3. Follow modular design principles when building or retrofitting a data center to avoid implementing more infrastructure than is likely to be needed over the near or medium term. Look in particular for UPS, cooling and power distribution products that allow adding capacity incrementally as requirements grow.
Today’s IT and facilities managers face a difficult bind: Though computing needs are constantly escalating, so is the cost per square foot of data center space and the price of critical supporting resources such as electricity and water. Consequently, more and more businesses are attempting to shrink the size of their data center without also shrinking operating capacity.
Fortunately, there are many ways to compact a data center while still meeting business requirements. Most of them, moreover, are proven and cost-effective options for companies of almost any size. By studying the techniques described in this series and following the recommendations outlined above, you can get more done in less space and position your company to meet its IT requirements with maximum cost effectiveness.