With ever-increasing demand on IT resources, data centers must grow to keep pace with the needs of users, whether inside the company or beyond it. But adding floor space is an expensive, disruptive and sometimes unaffordable option. One alternative is to increase the power density of the facility, yielding more computing power per square foot. What counts as high density, however, has changed considerably in just a few years.
A number of different approaches to increasing power density have expanded the computing power per square foot of data center space. According to a Gartner press release (“Gartner Says More Than 50 Percent of Data Centers to Incorporate High-Density Zones by Year-End 2015”), “Traditional data centers built as recently as five years ago were designed to have a uniform energy distribution of around 2 kilowatts (kW) to 4kW per rack.” But the addition of high-density zones can increase this energy distribution several times over in certain areas of the facility. “Gartner defines a high-density zone as one where the energy needed is more than 10kW per rack for a given set of rows. A standard rack of industry-standard servers needs 30 square feet to be accommodated without supplemental cooling, and a rack that is 60 percent filled could have a power draw as high as 12kW. Any standard rack of blade servers that is more than 50 percent full will need to be in a high-density zone.”
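A quick back-of-the-envelope calculation makes the cited figures concrete. The kilowatt and footprint numbers below come from the Gartner quote above; the per-square-foot arithmetic is a simple derivation, not part of the source:

```python
# Rough illustration of the rack-density arithmetic cited above.
RACK_FOOTPRINT_SQFT = 30        # space one standard rack needs without supplemental cooling
TRADITIONAL_KW_PER_RACK = 3.0   # midpoint of the 2-4 kW range Gartner cites
HIGH_DENSITY_KW_PER_RACK = 12.0 # a 60-percent-filled rack of industry-standard servers

traditional_density = TRADITIONAL_KW_PER_RACK / RACK_FOOTPRINT_SQFT
high_density = HIGH_DENSITY_KW_PER_RACK / RACK_FOOTPRINT_SQFT

print(f"Traditional: {traditional_density * 1000:.0f} W per sq ft")   # 100 W per sq ft
print(f"High-density: {high_density * 1000:.0f} W per sq ft")         # 400 W per sq ft
```

In other words, a single high-density rack can draw roughly four times the power of a traditional one from the same floor area, which is what makes the cooling and power-distribution consequences discussed later so significant.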
Idle equipment takes up space but contributes little to power draw (after all, it doesn’t take much power to just sit there). In an attempt to increase energy efficiency as well as power density, many data center operators have implemented some level of virtualization. By consolidating multiple workloads onto fewer physical servers, utilization (and thus power draw per server) increases, and so does efficiency. And because virtualization (at least in theory) allows a facility to do more with less, excess equipment can be powered down and removed, freeing floor space, a process known as consolidation.
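The consolidation math can be sketched in a few lines. All of the utilization figures here are illustrative assumptions, not measurements from any real facility:

```python
# Toy consolidation model: lightly loaded workloads are packed onto fewer
# physical hosts, raising per-host utilization and freeing rack space.
import math

server_count = 20           # assumed: underutilized physical servers
avg_utilization = 0.10      # assumed: each is only 10% busy on average
target_utilization = 0.60   # assumed: leave headroom after consolidation

total_work = server_count * avg_utilization          # 2.0 "servers' worth" of work
hosts_needed = math.ceil(total_work / target_utilization)

print(f"{hosts_needed} hosts carry the same aggregate load")          # 4 hosts
print(f"{server_count - hosts_needed} servers can be powered down")   # 16 servers
```

The freed machines are exactly the "excess equipment" the paragraph above describes: they can be removed outright, or their rack space reused at higher density.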
But even virtualization has its limits, and in some cases, it is even undesirable. For instance, the mega social networking company Facebook has eschewed virtualization, citing the difficulties that can arise with virtualized infrastructure. A CIO.com article (“Facebook Nixes Virtualization, Eyes Intel Microservers”) states, “Facebook uses a variety of server types across different parts of its data centers, but the company’s aversion to virtualization extends throughout its infrastructure. . . Facebook wants to be able to balance its computing load across many systems and potentially lose a server without degrading the user experience. ‘As you start to virtualize, the importance of that individual server is greatly enhanced, and when you have that at scale, it becomes very difficult,’ [Facebook labs director Gio Coglitore] said.”
In the case of companies like Facebook that want to avoid virtualization, whatever their reasons, another approach is necessary. Such an approach can involve using higher-power processors or, as is increasingly the case in deployments that need not handle large compute workloads, many lower-power processors.
Another way to increase power density is to pack more servers into less space. The traditional server rack is a metal frame or cabinet with discrete server “boxes,” each of which implements a single physical server. Blade servers increase density in a literal sense relative to this traditional arrangement: they pack a number of server “blades” (or cards) into a single chassis. SearchDataCenter.com (“Blade server”) describes blade servers as follows: “The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fiber Channel host bus adaptor (HBA) and other input/output (IO) ports. Blade servers allow more processing power in less rack space, simplifying cabling and reducing power consumption.”
Blade servers increase power density, but they may not always fit the overall strategy of companies like Facebook. Cnet.com (“Microservers: Blades rebooted”) notes, for example, that “the blades sold today by the likes of Cisco, Dell, HP, and IBM are about virtualization and integration. They pull together computing, networking, and storage and tightly integrate them both physically and through software. They are, in a sense, a form of scale-out consolidation.” For companies that want to dedicate a single server to a single application or customer, therefore, blade servers can be a form of high-density overkill, leading to gross underutilization and energy inefficiency.
But the concept of blade servers is getting a refresh with the increasing focus on lower-power servers for use in data centers.
A relatively new architecture gaining steam in many data centers is the microserver, which CIO.com (“Facebook Nixes Virtualization, Eyes Intel Microservers”) describes as “small, low-power, one-processor servers that can be packed into a data center more densely than rack or blade servers. The microservers in a rack typically share power and cooling and may also share storage and network connections.” A microserver is thus aimed at meeting the goal of increased power density—just like a blade server—but its use of lower-power processors seeks to also increase energy efficiency and align with strategies that do not rely as heavily on virtualization.
For instance, Federal Computer Week identifies the importance of microservers to Facebook’s non-virtualization strategy: “Facebook instead wants to be able to balance its load more evenly and, preferably, not degrade the user experience if a server does go down. . . Using microservers in data centers gets around that problem. They are circuit boards that carry a low-watt processor such as an Intel Atom chip, along with some RAM, a little storage and networking interconnects. Each of these boards represents one software-based virtual server.”
But despite the many benefits that can be gleaned from the use of low-power server processors in a microserver architecture, they don’t always fit the bill. They cannot handle the same heavy workloads as a full-power Intel Xeon processor, for instance. Thus, companies facing high peak demand, particularly in mission-critical applications, may not find microservers a viable option. The same is true of high-performance computing (HPC) applications. But for steady streams of light transactions—such as those processed by Facebook, web hosting companies and similar organizations—microservers offer a more efficient alternative.
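The trade-off can be framed as performance per watt. The figures below are hypothetical assumptions chosen only to illustrate the shape of the comparison; they are not benchmarks of any real Xeon or low-power part:

```python
# Hypothetical perf-per-watt comparison; all numbers are illustrative assumptions.
xeon = {"watts": 130, "requests_per_sec": 10000}  # one full-power server socket
micro = {"watts": 10, "requests_per_sec": 1200}   # one low-power microserver node

def efficiency(server):
    """Requests served per second, per watt consumed."""
    return server["requests_per_sec"] / server["watts"]

# For light, easily parallelized transactions, many small nodes can win on efficiency...
print(f"Full-power:  {efficiency(xeon):.1f} req/s per watt")   # 76.9
print(f"Microserver: {efficiency(micro):.1f} req/s per watt")  # 120.0

# ...but any single heavy request is still bounded by one node's capability,
# which is why peak-demand and HPC workloads favor the full-power processor.
```

Under these assumed numbers the microserver delivers more work per watt, yet no amount of small nodes helps a workload that needs one big processor, which is exactly the distinction the paragraph above draws.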
So what is high density in the data center today? That, of course, depends on the situation. One company might rely on virtualization and consolidation through extensive use of blade servers, whereas another might rely on microservers. Clearly, the common thread is an increase in power draw per square foot (or rack): what’s considered high density today is closer to 10–12kW per rack compared with 2–4kW per rack less than a decade ago. The difference is how that increased density is implemented.
Part of the solution for some companies is to increase utilization and to eliminate idle equipment (virtualization and consolidation). This approach may or may not be combined with increased equipment density through the use of blade servers for larger workloads or microservers (similar to blade servers) for smaller and more granular workloads. Increasing density helps to get more compute power out of a given data center floor area, but the cooling challenges increase commensurately. Similarly, power distribution infrastructure must be able to handle the greater demand for power. Nevertheless, expanding supporting infrastructure may be preferable to adding floor space. Although the details may vary from application to application, high density is essentially about getting more from less—the definition of efficiency.