Hedging Your Data Center Power

October 5, 2011

It’s the data center’s biggest operational expense, and its price keeps going up: power. With ever-growing demand for IT services, the data center’s power needs are increasing concomitantly, thus placing facility operators in a difficult position: how much power infrastructure should be installed to both provide room for growth and avoid the inefficiencies of unused infrastructure?

Two simple approaches lie at the extremes. One is to build a power infrastructure that provides just enough to meet current demand. Another is to plan far into the future, designing the infrastructure to leave enough headroom for many years of growth. Each of these has its own problems. For instance, infrastructure that only meets current demand is fine if that demand remains unchanged over time; if demand for services increases—which it is doing throughout the IT world—then the data center operator will quickly be forced to choose between providing inadequate service and adding more infrastructure. On the other hand, building more infrastructure than is needed avoids the potential problems of growing demand at the cost of higher inefficiency through power loss, poorly used space and higher capital expenses in the short term.
Nevertheless, these two extremes are not the only options. A "middle of the road" approach combines modularity, right-sizing and some planning ahead to avoid the problems associated with less subtle strategies. But taking this route requires careful planning and nerves of steel.

Power Isn’t Everything—But It’s Close

Power is the difference between an operational data center that supplies a variety of IT services and a pile of useless silicon and other machinery—it’s the lifeblood of the data center. To be sure, power isn’t the only cost: cooling and power infrastructure, capital IT, personnel and maintenance are just a few of the other areas of expenditure for the data center. Power, however, is increasingly taking center stage. Dr. Mickey Zandi, managing director at SunGard Availability Services, notes that data centers “are faced with the ever-growing demand for more services from the servers and applications, which are power hungry and require more horsepower. Today, more and more applications are being deployed in a production environment where power and cooling are required for keeping chips, hardware and software more resilient.”

Power is the greatest operational expense for the data center. It is even beginning to eclipse capital IT costs by some metrics: the lifetime cost of powering a server is greater than the capital cost of that server, by some estimates. In addition, the cost of energy is steadily rising—whether as a result of decreasing supply relative to demand, increased government regulation of its production and use, inflationary pressure or some combination of these factors.
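The claim that lifetime power cost can overtake a server's purchase price is easy to check with back-of-the-envelope arithmetic. The wattage, electricity rate, PUE multiplier, service life and server price below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope comparison of a server's lifetime power cost
# to its capital cost. All inputs are illustrative assumptions.

def lifetime_power_cost(avg_watts, pue, years, rate_per_kwh):
    """Energy cost of running one server for its service life.

    avg_watts    -- average power draw of the server itself
    pue          -- power usage effectiveness (facility overhead multiplier)
    years        -- assumed service life in years
    rate_per_kwh -- electricity price in dollars per kWh
    """
    hours = years * 365 * 24
    kwh = (avg_watts / 1000.0) * pue * hours
    return kwh * rate_per_kwh

server_capital_cost = 2500.0  # hypothetical purchase price

energy_cost = lifetime_power_cost(400, 2.0, 4, 0.10)
print(round(energy_cost, 2))              # roughly $2,800 over four years
print(energy_cost > server_capital_cost)  # power bill exceeds capital cost
```

Under these assumptions, a 400 W server in a facility with a PUE of 2.0 costs more to power over four years than it cost to buy—which is the point of the estimates cited above.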

And if that weren’t enough, data centers are power hogs that are eating a growing portion of the energy produced every year; as a result, they are gaining visibility in the eyes of governmental organizations around the world. Unfortunately, visibility to a government agency usually means one of two things: higher taxes and/or more regulations. Because data centers generally do not create emissions like power plants (except when running diesel backup generators, for instance), they do not fall under the purview of regulations in this area, but they do face the results of such regulations—even higher energy costs. Thus, for instance, a carbon cap and trade system, depending on its scope and implementation, could raise energy prices and therefore the cost of IT services.

Thus, data center operators are highly motivated to keep down the costs associated with their facilities—both capital and operational. Balancing this motivation, however, is the need to keep pace with growing demand for service: this situation squeezes operators from both sides, making the job of meeting demand and doing so in a cost-effective manner difficult.

But increasing capacity (or providing more capacity than is needed) is not the only way to deal with growing demand. More-efficient operations can provide some leeway. According to Dr. John Busch, Founder and CTO of Schooner, “Successful data centers need to handle an ever increasing load of user transactions with great response time. But this does not mean that they need to continually increase their power budget. Technology is advancing along with the demand for data center services, and data centers can take advantage of these technology innovations to meet service demands without increasing power consumption.” Meeting demand is thus a balancing act of both providing sufficient capacity and making more efficient use of what’s already available.

The Danger of Right-Sizing for Power

To avoid the inefficiencies that arise from unused infrastructure, one approach to meeting demand is right-sizing. But this approach runs into problems in an environment where demand for service is steadily increasing. Constantly adding new infrastructure (in the traditional sense) is an expensive proposition that likely involves some disruption of operations, and a series of incremental additions tends to cost more than a single large project, which can take advantage of volume discounts on materials and equipment.

Right-sizing in the context of power distribution is a matter of ensuring that enough power can reach the IT equipment and associated infrastructure, such as cooling, lighting and so on. Cables have a limited capacity, so right-sizing reduces the data center operator’s ability to expand by adding more server racks or more cooling equipment. If demand increases, then the operator must add new cabling to support the added infrastructure needed to meet this new demand. Naive right-sizing is tempting, because it avoids wasted capital costs and inefficiencies that arise from unused or excessive power infrastructure.

The Danger of Not Right-Sizing for Power

If naive right-sizing is the left ditch, then building now to accommodate projected demand 15 years in the future is the right ditch. When money is put into equipment that goes unused—even assuming that equipment has no effect on efficiency of power and space usage—that money becomes unavailable for use elsewhere in the business. It provides no return, but instead loses value as the equipment ages. And indeed, inefficiencies often are introduced when excess equipment is present. For instance, too much uninterruptible power supply (UPS) capacity can decrease the efficiency of the power system, raising the cost of operating the data center. Furthermore, more equipment generally means more space consumed in the data center, and because this equipment is not contributing, it represents wasted space.

Like naive right-sizing, however, the “plan way ahead” approach has its own apparent benefits. For instance, as demand for service increases, the company can simply add server racks without having to worry about whether sufficient power and cooling infrastructure is available—it is most definitely available. The data center is thus spared the hassle of constant construction and retrofitting projects intended to add capacity.

Fortunately, accommodating growth efficiently is not limited to an either-or solution. By balancing a certain amount of infrastructure that leaves room for growth, but does so efficiently, with the ability to add new capacity as needed, data center operators can make their facilities adaptive and cost effective. This middle-of-the-road approach is modularity.

The Role of Modularity

Modularity is nothing new to data centers. Some aspects of the data center are almost inherently modular: for instance, it’s easy enough to add IT capabilities by simply adding new servers and racks and integrating them into the network. The problem here, however, is ensuring sufficient power, cooling and space resources. Fortunately, modularity can extend to power distribution as well. Ramesh Menon, global IT channel manager for Eaton’s Distributed Power Quality Division, notes that “Building the data center to meet the needs of an uncertain future is expensive, but the ability to scale is vital. Modular designs allow data center managers to pay for only the functionality needed in the short term, as well as expand at their own pace.”

For example, Menon identifies UPS systems as one area where a modular approach can save both capital and operational costs while still leaving room for growth. “Modular designs with built-in expansion capabilities are preferred to reduce inefficiencies, thus controlling costs. Eaton’s BladeUPS system is a scalable system that allows data center managers to take a building-block approach to meeting power capacity needs as business grows, so money isn’t wasted buying a system that’s too large, extracting power that won’t be used.”
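The building-block approach can be made concrete with a simple sizing sketch. The 12 kW module rating, the N+1 redundancy policy and the load figures below are assumptions for illustration, not specifications drawn from the article:

```python
import math

# Sketch of building-block UPS sizing: buy only the modules needed to
# carry today's load, plus spares for redundancy, and add modules as
# the load grows. Module rating and loads are illustrative assumptions.

MODULE_KW = 12.0  # assumed capacity of one modular UPS unit

def modules_needed(load_kw, redundancy=1):
    """Modules required to carry load_kw with N+redundancy sparing."""
    return math.ceil(load_kw / MODULE_KW) + redundancy

print(modules_needed(30))  # modest current load: a handful of modules
print(modules_needed(55))  # after growth: add modules, don't replace
```

The operational point is that growing from 30 kW to 55 kW of load means buying two more modules, not forklifting out an undersized monolithic UPS.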

Some aspects of the data center are more amenable to modularity than others, however. Adding a new UPS unit, for instance, is generally much easier than running additional power cables: the UPS unit occupies a relatively small space, whereas running a new cable or set of cables may require a variety of gymnastics, such as accessing a cramped cable space under a raised floor or working around existing equipment. But cabling is generally less expensive than large equipment units, such as UPS systems. Thus, given cost and installation considerations, power cables are one area where the data center may benefit from more infrastructure than is needed to meet present demand. When adequate cabling is present to meet growth, the operator can simply purchase the necessary equipment to add capacity without worrying whether the facility has the capability to incorporate that equipment without adding more cables.

Thus, modularity—used with some discretion, as in the case of power cabling—allows data center operators to take the approach of right-sizing, along with its associated benefits, without incurring the penalties of limited or difficult growth to meet rising demand. “Modularity in design subdivides a room into smaller parts (modules) allowing for power and cooling to be shut down in non-occupied segments/rooms and isolated in areas which are being used, allowing for energy costs to be greatly reduced,” said Zandi. “Right-sizing the data center infrastructure is key to any data center manager’s successful strategy. Data center managers should continue to consolidate, virtualize, standardize and use the cloud to augment their overall availability and resiliency strategy.”

A key to making this whole approach work is knowing how much power is being consumed in the facility. To this end, power monitoring software is required. The critical point is identifying peak power demand: this is the minimum capacity that the data center’s power distribution infrastructure must support. As demand for services increases, so will this peak power draw. Additional infrastructure is needed when the peak power begins to approach the maximum capacity of the system.
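The monitoring logic described above reduces to a simple check: track the peak of recent power readings and raise a flag when it approaches installed capacity. This minimal sketch assumes a hypothetical 80% alert threshold and made-up sample readings:

```python
# Minimal sketch of peak-power tracking against installed capacity.
# The readings, capacity and 80% alert threshold are illustrative.

def needs_more_capacity(readings_kw, capacity_kw, threshold=0.80):
    """Return (peak, alarm): does peak demand approach capacity?"""
    peak = max(readings_kw)
    return peak, peak >= capacity_kw * threshold

samples = [42.0, 55.5, 61.2, 58.9, 63.7]  # hypothetical kW readings
peak, alarm = needs_more_capacity(samples, capacity_kw=75.0)
print(peak)   # 63.7 -- the minimum the power infrastructure must support
print(alarm)  # True: peak exceeds 80% of the 75 kW capacity (60 kW)
```

In practice the readings would come from power monitoring software rather than a hard-coded list, but the decision rule—compare peak draw against a capacity threshold—is the same.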

Menon identifies power reporting software, such as Eaton’s Power Xpert Reporting, as a means for data center managers to “benchmark existing energy usage and power consumption in their power distribution systems, allowing energy and cost saving opportunities to be quickly identified and implemented. Seeing the forest for the trees can be difficult, especially when trying to detect trends and patterns in electrical power distribution; this software offers several solutions that assist in reducing energy consumption and minimizing costs and environmental impacts associated with excessive energy use.”

Hedging your data center’s power is thus a matter of combining the best of both worlds: planning ahead to meet future growth and right-sizing to reduce inefficiencies. Through careful planning and monitoring, companies can run their data centers in a way that reduces capital and operational costs, increases scalability and makes better use of existing infrastructure and resources.

What Else to Do About Power

The modular approach to power distribution is helpful, but it’s not the only aspect of a good energy strategy in the data center. Demand for service will rise, but in the short term, that doesn’t always mean that power consumption must rise also. Doing more with less as a parallel strategy can yield more mileage from modularity. This is, of course, easier said than done, but with numerous resources for virtualization, consolidation and upgrading to more-efficient IT equipment, it is a viable option for many companies. Zandi notes, for instance, that “consolidation may be approached in conjunction with virtualizing applications, services or servers. By using standardization as the overall method of operations, the data center manager is able to leverage the best efficiency of space, power or cooling.”

On the efficiency side, Busch states, “Technology advances in software architecture and commodity hardware enable a large increase in achievable transaction throughput per watt. Advances in commodity multi-core processors and flash memory, coupled with data access software designed to exploit them, provide an order of magnitude increase in computational density and power reduction. Memory hierarchies employing flash memory consume 1/100th the power of conventional DRAM/hard drive storage systems, and they provide 100 times the I/O throughput with 1/10th the latency of hard drive based storage systems.”
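Taking the quoted ratios at face value, their combined effect on throughput per watt for the storage tier is straightforward to work out; the calculation below simply restates Busch's figures and is not an independent measurement:

```python
# Combined throughput-per-watt effect of the ratios quoted above:
# 100x the I/O throughput at 1/100th the power of a DRAM/hard-drive tier.

throughput_factor = 100    # flash tier: 100x the I/O throughput
power_factor = 1 / 100     # flash tier: 1/100th the power

throughput_per_watt_gain = throughput_factor / power_factor
print(throughput_per_watt_gain)  # 10000.0 -- four orders of magnitude
```

That is, if both ratios held simultaneously, the storage tier would deliver roughly four orders of magnitude more I/O per watt—which is why such efficiency gains can substitute for added power capacity.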

And, of course, many data center facilities have the potential to increase their cooling efficiency, whether by using free cooling (air-side economization, for instance) or even simply by increasing their operating temperature, as ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) has recently noted as a possibility in many cases. By taking steps in the area of efficiency, data centers can effectively increase the capacity of existing power systems in terms of IT services delivered per watt.

Thus, implementing efficiency measures, such as upgrading IT equipment as more-efficient models become available, can combine with a modular approach to power capacity to enable data centers to meet demand without wasting capital on unneeded infrastructure.

Conclusions

What’s best, right-sizing or building for the future? It’s not an either-or question. A proper combination of both, with a healthy dose of modularity, is an excellent way for data centers to stay lean and mean and to still be agile enough to meet rising demand. Chances are a data center will face a shortage of power distribution infrastructure in its lifetime: ever increasing demand for IT services virtually guarantees this scenario. But companies need not pour capital into equipment that won’t be needed for years, and neither must they expect periodic expensive and disruptive retrofits to increase capacity as needed. The modular approach permits a smoother increase in infrastructure as it’s needed. In addition, energy efficiency measures implemented in parallel with a modular power system can enable an even better growth strategy by reducing the rate of power consumption growth rather than simply responding with more capacity. And in this arena, numerous options are available—some more expensive than others. But even though power needs for most data centers will grow, they need not do so in a manner that the facilities cannot accommodate in a calm and measured manner.

About Jeff Clark

Jeff Clark is editor for the Data Center Journal. He holds a bachelor’s degree in physics from the University of Richmond, as well as master’s and doctorate degrees in electrical engineering from Virginia Tech. An author and aspiring renaissance man, his interests range from quantum mechanics and processor technology to drawing and philosophy.
