For companies resisting the temptation to outsource to the cloud, data center design and operation decisions can be extremely difficult in the face of mounting budget pressures, rising demand and limited resources, like floor space. Colocation providers face similar challenges in maximizing both the value they deliver to customers and their own returns on services. A critical trend in data center design is increasing power density. Given limitations on space and the need for more-efficient operation to counter the effects of rising power demand and cost, packing more resources into each rack is an obvious solution. But in addition to its benefits, increasing power density carries a number of challenges that data center operators must address.
Power Density Trends
Low power densities often equate to poor efficiency: more precious floor space is consumed per unit of IT capability (measured in operations per second, or some similar metric), and more equipment must generally be maintained to deliver the same capacity. Ben Coughlin, CFO and cofounder of Colovore, a Santa Clara–based high-density colocation provider, notes that “typical data center customer deployments today are in the range of 8–12 kW per rack, although density requirements in the mid-teens and approaching 20 kW+ are not uncommon for processing-intensive applications like big data analytics. But in the Bay Area and a lot of the U.S., the typical data center features 4–5 kW per rack (this was the build standard 10 years ago).”
In the colocation space, support for only low power densities can be costly: the full leased rack space may go unused because insufficient power is available to it. A rack may physically hold plenty of equipment, but the supporting infrastructure can’t keep pace. As Coughlin explains, “Customers can easily deploy 8–10 kW per rack with their existing server infrastructure, but because most data centers were built to support only 4–5 kW, they have to spread their gear across half-full racks. It can’t be cooled otherwise. But they pay for each full rack despite only being able to fill half of it. It’s wasteful.”
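To see the economics, consider a rough back-of-the-envelope sketch in Python. Every price and density below is invented for illustration, not drawn from Colovore’s pricing:

```python
# Back-of-the-envelope: stranded capacity in a power-limited facility.
# All prices and densities are invented for illustration.

rack_power_limit_kw = 5.0      # what the facility can deliver per rack
gear_density_kw = 10.0         # what the customer's servers would draw per rack
monthly_cost_per_rack = 1500   # hypothetical all-in price per cabinet ($/month)

# To stay under the power limit, each full rack's worth of gear must be
# spread across multiple half-empty cabinets.
cabinets_needed = gear_density_kw / rack_power_limit_kw          # 2.0

effective_cost_per_kw = monthly_cost_per_rack * cabinets_needed / gear_density_kw
full_rack_cost_per_kw = monthly_cost_per_rack / gear_density_kw

print(f"effective:  ${effective_cost_per_kw:.0f}/kW/month")     # $300
print(f"full racks: ${full_rack_cost_per_kw:.0f}/kW/month")     # $150
```

With these toy numbers, the customer pays twice as much per kilowatt as a full-density rack would cost, which is exactly the waste Coughlin describes.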
One constraint on power density is obviously the power-distribution infrastructure, both the utility-provided feed and the backup facilities. For each watt supplied by the utility, the data center must have sufficient UPS and diesel-generator capacity to continue operations in the event of a power outage. And that, of course, is on top of the cabling, power-distribution units (PDUs) and so on dedicated to delivering the power to the racks. Coughlin notes that “most data centers don’t have much new power available for their facilities, so they likely have to get more power from the utility and spend a lot of money on core data center infrastructure (electrical and mechanical infrastructure, generators, power distribution and so on) just to be able to provide it. So access to more power and cost are two important variables.”
But the other and perhaps more pressing need is cooling: every watt consumed by the facility is a watt of waste heat that must be removed to maintain the desired operating temperature. Herein lies what may be the biggest challenge facing higher density—particularly for facilities not originally intended to handle it. “When you increase density considerably at the rack level, much more heat is generated by the servers and a lot more cooling is required,” said Coughlin. “Cooling infrastructure is very expensive, but the biggest challenge may be trying to retrofit an old data center. Most of these older data centers were built with low ceilings and there is no easy way to improve density in many cases other than ripping up the data center—which is incredibly difficult to do, especially with live customers.”
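A simple sizing sketch shows how these costs compound from the IT load. The rack count and PUE (power usage effectiveness) figure below are assumptions chosen only for illustration:

```python
# Sizing sketch: every IT watt needs matching backup capacity and must be
# removed again as heat. Rack count and PUE are illustrative assumptions.

racks = 100
kw_per_rack = 8.0
pue = 1.5                              # assumed power usage effectiveness

it_load_kw = racks * kw_per_rack       # 800 kW of servers
facility_load_kw = it_load_kw * pue    # 1,200 kW total utility feed
overhead_kw = facility_load_kw - it_load_kw   # 400 kW of cooling and losses

# Generators must carry the whole facility (cooling included) through an
# outage; UPS capacity must cover at least the IT load, plus headroom.
generator_kw = facility_load_kw
ups_kw = it_load_kw

print(f"IT load:      {it_load_kw:,.0f} kW")
print(f"utility feed: {facility_load_kw:,.0f} kW")
print(f"overhead:     {overhead_kw:,.0f} kW")
```

Doubling kw_per_rack doubles every line of output, which is why a density upgrade ripples through generators, PDUs and chillers alike.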
Unfortunately for companies with legacy data centers, the costs of retrofitting for higher density mean that their facilities have little practical potential to expand their compute capabilities apart from awaiting semiconductor improvements afforded by Moore’s Law. But that approach requires purchasing new IT equipment to exploit the greater efficiency of finer process technologies, and the continual progress of Moore’s Law may be coming to an end—perhaps within a decade or so. Coughlin observes that in such cases, “Colo providers simply end up ‘spreading the load,’ or forcing customers to spread their infrastructure across half-full racks. But that’s not sustainable—they eventually run out of space, power, cooling or all three as their customers refresh their servers.”
Converged Infrastructure Driving Greater Power Density
The push toward higher density can be summarized in the term converged infrastructure: essentially, packing more computing resources into a smaller volume, which can be achieved through existing data center trends such as virtualization and the use of blade servers and microservers. Converged infrastructure “has a very positive effect for a company in terms of operating efficiency,” said Coughlin, “because the physical size of the IT deployment is smaller, the IT manager has fewer boxes to manage, and there can be meaningful savings in total power required when server counts are reduced by 30–50%.”
This approach aims for lower total data center power (a twofold benefit, because it also reduces cooling requirements) by raising power density at the rack level. “On a per-server basis, in fact, power requirements increase considerably; but overall, total power can drop because there are so many fewer servers required. And this is exactly where higher-density data centers become so important—they are the key to enabling all this converged infrastructure. Servers today can easily draw 500 watts to 1 kW per rack unit!”
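A toy consolidation example (server counts and wattages invented, not Coughlin’s figures) makes the arithmetic concrete:

```python
# Consolidation math: per-server power rises, total power falls.
# Server counts and wattages are invented for illustration.

legacy_servers, legacy_watts = 100, 400   # older 1U machines
dense_servers, dense_watts = 55, 600      # virtualized blades: ~45% fewer boxes

legacy_total_kw = legacy_servers * legacy_watts / 1000   # 40 kW
dense_total_kw = dense_servers * dense_watts / 1000      # 33 kW

# Rack counts at each facility's supportable density.
legacy_racks = legacy_total_kw / 5.0      # 8 racks at 5 kW each
dense_racks = dense_total_kw / 11.0       # 3 racks at 11 kW each

print(f"total power: {legacy_total_kw:.0f} kW -> {dense_total_kw:.0f} kW")
print(f"racks:       {legacy_racks:.0f} -> {dense_racks:.0f}")
```

Total power falls by roughly a sixth and the footprint shrinks by more than half, but only if the facility can actually feed and cool 11 kW in a single rack.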
Higher Density = Cooling Problem
Naturally, were greater efficiency through higher density the end of the story, everyone would be cramming their racks full of as much equipment as possible, saving money and floor space while avoiding management headaches. But like every good thing, there’s a tradeoff: in this case, it’s the aforementioned cooling issue. Low-density deployments can typically be air cooled, and in most areas, free cooling is a possibility for a good portion of the year (depending in part on the required operating conditions of the equipment). But as power densities rise, air cooling becomes prohibitively expensive—if not impossible.
When heat production more closely resembles point sources than an even distribution, the solution is to deliver cooling directly to the source, whether at the rack, the server or even the processor level. “The processing capabilities of the servers at the chip level seem to be increasing consistently, but at some point they will need to be cooled inside or immediately near the server if they increase too much.” Air cooling may still suffice up to a point, but water (or another liquid) offers greater cooling capacity—at the expense, however, of implementation difficulties like delivery infrastructure and the need for tight isolation (water + electronics = bad).
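The capacity gap between air and water follows from the basic heat-transfer relation Q = ṁ·c_p·ΔT. The sketch below compares the flows needed to carry away 10 kW at a 10 °C temperature rise; the fluid properties are standard, while the load and temperature rise are arbitrary choices for the example:

```python
# Flow required to remove heat: Q = mdot * c_p * dT.
# Fluid properties are standard; the 10 kW load and 10 C rise are arbitrary.

q_watts = 10_000                     # heat to remove (one dense rack's worth)
delta_t = 10.0                       # allowed coolant temperature rise (K)

cp_air, rho_air = 1005.0, 1.2        # J/(kg*K), kg/m^3 near room conditions
cp_water, rho_water = 4186.0, 998.0

mdot_air = q_watts / (cp_air * delta_t)        # ~1.0 kg/s
mdot_water = q_watts / (cp_water * delta_t)    # ~0.24 kg/s

air_flow = mdot_air / rho_air                  # ~0.83 m^3/s (~1,760 CFM)
water_flow = 1000 * mdot_water / rho_water     # ~0.24 L/s (~3.8 gal/min)

print(f"air:   {air_flow:.2f} m^3/s")
print(f"water: {water_flow:.2f} L/s")
```

A quarter of a liter of water per second does the same job as nearly 1,800 cubic feet of air per minute, which is why liquid becomes attractive as densities climb.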
Deploying a water-based cooling solution can be problematic, particularly for older data centers that must retrofit existing infrastructure. But for those that can—along with newer facilities aiming to support growing power densities—water allows delivery of cooling capacity right where it’s needed, rather than trying to keep an entire room at a sufficiently low temperature to ensure that the space in and around the servers is cool enough. Practices such as hot-aisle/cold-aisle containment can buy air cooling some wiggle room, but they have limits. And eventually, should densities rise high enough, immersion techniques may become necessary. Already, some companies offer products that involve immersing servers in nonconducting fluids or that pipe liquid into server enclosures.
Payback on High Density
For customers—whether they be colocation clients or simply the company operating the data center—high density offers important and profitable returns in total cost of ownership (TCO). Citing the benefits for colocation customers, Coughlin said, “When a company can consolidate its IT infrastructure into virtualized blade servers, it can save 20–30%+ in operating costs immediately versus a legacy deployment with 4–5 kW per rack. This is driven mostly by savings in monthly cabinets required to house the servers (which after power charges are typically the next-highest monthly expense for retail colocation customers), as well as reduced cross-connect costs and top-of-rack switches.” For both colo customers and data center operators, that means more available space for further expansion in existing racks, and given the hassles and expenses of constructing a new data center, the returns go beyond just the immediate measurable cost savings.
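As a hypothetical illustration of that math, here is a toy monthly bill before and after consolidation. Every price is invented; real colocation pricing varies widely:

```python
# Hypothetical monthly colo bill before and after consolidating onto
# high-density racks. All prices are invented for illustration.

cabinet = 1200        # $/cabinet/month (space)
power_per_kw = 250    # $/kW/month (metered power)
cross_connect = 250   # $/cross-connect/month
tor_switch = 150      # amortized top-of-rack switch, per rack per month

def monthly_bill(racks, kw, xconnects):
    return racks * (cabinet + tor_switch) + kw * power_per_kw + xconnects * cross_connect

legacy = monthly_bill(racks=8, kw=40.0, xconnects=4)   # gear spread thin at 5 kW/rack
dense = monthly_bill(racks=3, kw=33.0, xconnects=2)    # consolidated blades

savings = 100 * (legacy - dense) / legacy
print(f"legacy: ${legacy:,.0f}/mo  dense: ${dense:,.0f}/mo  savings: {savings:.0f}%")
```

How much a given customer saves depends heavily on how the bill splits between space and metered power; the point is that cabinet, cross-connect and switch costs all scale down with rack count.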
Whether companies keep IT in house or go the colocation route, the need to improve efficiency and save floor and rack space is growing as both energy prices and demand for IT services rise. Higher density is thus the trend, but it’s also the challenge: packing more capacity into a rack requires building the power-distribution and backup infrastructure to support the deployment, as well as the cooling capabilities to keep operating temperatures manageable. Although liquid cooling may not yet be pervasive among data centers, it will become more prevalent as power density rises and air cooling becomes less practical and affordable. Regardless of the timing of a widespread switch from air to liquid cooling, however, power densities will continue to climb as companies try to maximize the use of their resources.
Image courtesy of Jemimus