Open Access: New Best Practices in Energy-Efficient Data Centers

July 9, 2014

Big data: it’s getting even bigger. And with it comes the need for more big data storage.

In 2012, EMC estimated the size of the digital universe (all the data created and used) that year to be 2,387 exabytes (EB), and it predicted that by 2020, that number would increase to 40,000 EB—about 5,200 gigabytes per person.
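As a quick sanity check on EMC's arithmetic (assuming a projected 2020 world population of roughly 7.7 billion, and decimal units where one exabyte equals 10^9 gigabytes):

```python
EB_TO_GB = 1_000_000_000  # 1 exabyte = 10^9 gigabytes (decimal units)
digital_universe_2020_eb = 40_000  # EMC's 2020 prediction, in exabytes
projected_population_2020 = 7.7e9  # assumed rough 2020 world population

gb_per_person = digital_universe_2020_eb * EB_TO_GB / projected_population_2020
print(round(gb_per_person))  # prints 5195, close to the article's "about 5,200"
```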

That data explosion is apparent in everyday things, like smartphones, and in less obvious but still critical processes, such as the conversion of medical records from paper to digital, the vast transmission and data-processing demands generated by manufacturing, and the increasing sharing of sophisticated simulation and 3D models. Clearly, data storage capacity must grow to keep pace with this explosion. Improving data center efficiency by boosting performance while reducing power consumption and costs is therefore critical.

Data centers consume vast amounts of energy in the course of handling all of those transactions—most of it to cool the facility. Virtualization and intelligent software to manage the servers will help, but the reality is that the heat load is still present, and the tools for increasing utilization only create headroom for processing even more data. Making data centers more energy efficient will go a long way to meeting the ever-growing demand for increased capacity. Ensuring that the cooling systems are reliable and easily monitored, even remotely, will further improve efficiency.

Conflicting Priorities

This is easier said than done, of course. Often, conflict exists between IT, facilities and financial/business decision-makers—simply because of the inherent conflicts in their job-related objectives as well as divergent opinions about the data center decision process.

Obviously, risk aversion is a big factor in operating a data center. Even though the server manufacturer might warrant its equipment at server inlet temperatures exceeding 100°F, it would be difficult to convince a data center operator to raise cold-aisle temperatures even as high as 80°F.

Conversely, financial managers will be anxious to reduce operating expenses—something that raising temperatures will do—but these managers must also factor in service-level agreements with clients. Even if the financial managers can be convinced that the risk is unchanged, the wealth of conflicting information on the subject will make convincing clients a real challenge. And of course, manufacturers of data center cooling equipment will naturally present a case that their solution is the best and will often use proprietary research to justify their position.

The best data center solutions, then, are found when facilities, IT and financial managers work together. Clearly, finding an objective and validated solution that puts everyone at ease is a necessary step in achieving the goal of improved efficiency.

Steps Toward a Solution

Fortunately, the U.S. National Science Foundation (NSF) has created a program designed to identify, validate and advance the state of the art in energy-efficient data center design. This program, called the NSF-I/UCRC (National Science Foundation-Industry/University Cooperative Research Centers) on Energy Smart Electronic Systems, combines the research capabilities of four universities (Binghamton University, University of Texas at Arlington, Villanova University and Georgia Tech) with the industry experience of 24 companies operating in the IT and data center environment.

Unlike programs funded by a single manufacturer or company, the NSF program offers the potential for an unbiased source of information. Research is currently underway in the areas of cooling-system control, board- and chip-level cooling, particulate contamination effects, waste-heat recovery, outside-air cooling, evaporative cooling and filtration of outside air.

Innovations in Data Center Cooling Systems

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has proposed that data centers operate at elevated server-inlet temperatures, with the goal of encouraging the use of outside air or evaporative cooling as the most efficient means of air-based cooling.

But traditional evaporative cooling can present challenges. Because it avoids compressors and chillers, the method consumes 70% less energy than traditional air conditioning, a big reduction in operating expense. But the process passes outside air over a wetted pad to transfer heat, and the resulting much higher relative humidity has earned the method the nickname “swamp cooling.” That term alone will drive end users away from this very effective cooling method.

Furthermore, some users are uneasy about the potential for particulate contamination. ASHRAE recommends a level of air filtration that is easily achieved in most HVAC equipment (a “MERV 8” level), but many users remain hesitant to trust this recommendation.

To address these concerns, some manufacturers have developed indirect methods of cooling air using evaporative cooling that will reduce the temperature without adding moisture. The indirect method is slightly less efficient than the direct method but still consumes a fraction of the energy that a typical compressor-bearing cooling system might consume. Data from a typical HVAC-equipment manufacturer’s catalog indicates that an indirect evaporative cooling system such as Aztec will use about a third of the energy compared with a similar-size air-cooled rooftop unit or chiller system. Going a step further to employ outside air for cooling can reduce the energy use to less than a quarter of that required by conventional systems. Progressive companies that have already deployed these technologies can justifiably claim PUEs of under 1.1 on a regular basis.
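The difference between the direct and indirect methods can be sketched with the standard wet-bulb effectiveness relation. The effectiveness values and the Dallas design temperatures below are illustrative assumptions, not Aztec specifications:

```python
def direct_evap_supply_temp(t_dry_bulb_f, t_wet_bulb_f, effectiveness=0.85):
    """Supply-air temperature from a direct (wetted-pad) evaporative cooler.

    Assumed saturation effectiveness of ~0.85 is typical for a wetted pad;
    note that this method adds moisture to the supply air.
    """
    return t_dry_bulb_f - effectiveness * (t_dry_bulb_f - t_wet_bulb_f)

def indirect_evap_supply_temp(t_dry_bulb_f, t_wet_bulb_f, effectiveness=0.70):
    """Supply-air temperature from an indirect evaporative heat exchanger.

    A secondary wet airstream cools a heat exchanger, so the supply air is
    cooled toward the wet-bulb temperature WITHOUT gaining moisture.
    Effectiveness is assumed somewhat lower than the direct method.
    """
    return t_dry_bulb_f - effectiveness * (t_dry_bulb_f - t_wet_bulb_f)

# A hot Dallas afternoon: 100°F dry bulb, 75°F wet bulb
print(direct_evap_supply_temp(100, 75))    # prints 78.75 (°F), but more humid
print(indirect_evap_supply_temp(100, 75))  # prints 82.5 (°F), humidity unchanged
```

The few degrees of capacity given up by the indirect method buy a dry supply airstream, which is what makes it palatable to operators wary of "swamp cooling."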

Collaborative Research

Because evaporative cooling and fresh-air cooling can be so much more efficient, they are the primary types of cooling under research and validation testing as important elements of the current NSF-I/UCRC program. Mestex, a division of Mestek, has provided its specialized Aztec evaporative cooling system to the NSF for research managed by the University of Texas at Arlington Engineering Department. The Aztec system has been installed on a small data pod in Dallas, Texas. Evaporative cooling technologies are generally considered impractical for use in a hot and humid climate like Dallas, so this research can establish whether it is a viable solution in areas that had previously not considered it.

Real-Time Monitoring and Remote Access

Although the NSF research began in earnest only in January of this year, it has already seen interesting results. The Aztec unit and 120 servers, donated by Yahoo, in the four cabinets of this research pod (donated by Verizon) are being monitored in real time by the on-board digital control system. This data will soon be augmented with the addition of a DCIM software package from CommScope’s iTracs division. In the meantime, the current monitoring and control software is displaying real-time PUE figures that have ranged from 1.03 to 1.37—a sharp contrast to the average performance measures of roughly 2.0 for most data centers in the US.
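PUE itself is a simple ratio: total facility power divided by IT-equipment power. The load figures below are hypothetical, chosen only to illustrate the calculation:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 is the theoretical ideal (every watt goes to IT equipment);
    the industry average cited in the article is roughly 2.0.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: servers drawing 36.0 kW of IT load, plus about
# 2.5 kW for fans, pumps and controls on a mild day
print(round(pue(38.5, 36.0), 2))  # prints 1.07
```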

One key to achieving these results while still maintaining cold-aisle temperatures within ASHRAE Class A1 conditions has been the control strategy. Because the Aztec digital control system allows web-based access to over 60 data points (out of the almost 300 points being monitored), researchers in remote locations can view and even trend those points, looking for patterns that suggest changes to improve performance further.
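Trending those points is what turns raw telemetry into actionable patterns. A minimal sketch, assuming cold-aisle temperature samples logged every five minutes (the readings below are invented for illustration):

```python
from statistics import mean

def moving_average(samples, window=4):
    """Smooth a trend log of one monitored point (e.g. cold-aisle
    temperature) so patterns stand out over sensor noise."""
    if window < 1 or window > len(samples):
        raise ValueError("window must be between 1 and len(samples)")
    return [mean(samples[i:i + window]) for i in range(len(samples) - window + 1)]

# Hypothetical hour of cold-aisle readings (°F), one every 5 minutes
readings = [74.8, 75.1, 75.0, 75.4, 75.9, 76.3, 76.8, 77.2, 77.5, 77.9, 78.2, 78.4]
trend = moving_average(readings, window=4)
# A steadily rising smoothed trend would prompt a look at the cooling
# setpoints before the cold aisle drifts out of the Class A1 envelope.
```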

The value and importance of remote monitoring extend well beyond data centers. Consider a biopharmaceutical company that develops and manufactures life-saving drugs and has just received a patent for a breakthrough drug: continuous monitoring of its temperature-sensitive production and storage environments is essential to protecting that product. Similarly, food-service and distribution centers must monitor temperatures in warehouses and storage facilities to comply with FDA regulations, particularly the new requirements of the Food Safety Modernization Act.

The bottom line is that in many industries requiring temperature-sensitive warehousing—food service, pharmaceuticals and data centers alike—remote monitoring and controls are a critical component of strong, safe supply chains.

Open Access Improves Research

Even the design of the user interface will ultimately benefit from this research, since information is only truly useful when it is easily interpreted and becomes actionable. To extend that benefit, Mestex created the “Open Access Project.” This project allows any data center owner, operator or client to log onto webctrl.aztec-server-cooling.com and watch how an evaporative cooling unit performs on the research pod in Dallas. Because this research site is intended to benefit all data center designers, and because commercial and private data centers seldom allow access to details of their operations, visitors can see for themselves how evaporative and outside-air cooling perform under the variety of weather conditions in Dallas, Texas, over the next year.

The Optimal Outcome

It’s clear that the work of the NSF, in collaboration with industry leaders, demonstrates that the best solutions for energy-efficient data centers will include the following elements:

  • Energy efficiency (which may include evaporative cooling)
  • Scalability, or “plug-and-play” options, that allow the data center to grow as the industry grows, without the need for retrofitting or other additional expenses
  • Vendor-neutral controls to link disparate vendor equipment and/or building-automation systems
  • Real-time monitoring and remote access
  • A clear interface that provides context for the user

In our increasingly data-driven world, collaborative development and information-sharing between key industry players and researchers, as seen in the NSF program, will drive development of a new level of “best practices” for data center design. The demand for data is here to stay; these efforts enable a possible solution to the ever-increasing demand for cooling energy to be documented and shared with all designers and users for the benefit of the industry—and, really, every person whose life is touched by the digital world.

Leading article image courtesy of cbowns

About the Author

Michael Kaler is president of Mestex. Mestex, a division of Mestek, Inc., is a group of HVAC manufacturers with a focus on air handling and a passion for innovation. Mestex is the only HVAC manufacturer offering industry-standard direct digital controls on virtually all of its products, including Aztec evaporative cooling systems, which are especially suited for data center use, as well as Applied Air, Alton, Koldwave, Temprite and LJ Wing HVAC systems. The company is a pioneer in evaporative cooling and has led industry innovation in evaporative cooling technology for more than 40 years.
