The ongoing severe drought in California, as well as perennial concerns about the availability of water (particularly in western states), raises serious questions for data center operators. Water plays an important role in cooling many facilities, but how should companies address the issue? Is avoiding liquid-based cooling entirely a reasonable approach? What about cases, such as high-performance computing (HPC), where liquid cooling may be the only practical option? Here’s a look at some of the considerations surrounding water in the data center.
Water, Water Everywhere, But...
Water is an abundant resource—almost three-quarters of the Earth’s surface is covered in it. The problem is that most of it is salty, making it undrinkable as well as unusable in other contexts where, for instance, corrosion is a problem. And even if a data center can use seawater, the costs of bringing it to many landlocked locations are likely prohibitive. In most cases, the most accessible option is fresh water, whether drawn from a river, a lake or the ground. Fresh water, however, is far scarcer than seawater, and that scarcity is creating huge problems for California and other western states.
One option for creating potable (“fresh”) water from seawater is desalination. Unfortunately, this process is energy intensive—and it’s important to remember that the most widely used forms of energy production (coal, natural gas and nuclear) themselves use large amounts of water. That generally doesn’t mean the water disappears, of course, but it may be contaminated or evaporated into the atmosphere.
Some creative efforts to make desalination more economical have come down the pike, but thus far they have largely failed to make a serious dent in the costs associated with the process. One approach, for instance, is to use cold seawater (which contains less ocean life) to cool a data center, warming it to a temperature more amenable to desalination through reverse osmosis. According to James Hamilton, “Cold water is less efficient to desalinate and, consequently, considerably more water will need to [be] pumped which increases the pumping power expenses considerably. If the water is first run through the data center cooling heat exchanger, at very little increased pumping losses, the data center now gets cooled for essentially free (just the costs of circulating their cooling plant). And, as an additional upside, the desalination plant gets warmer feed water which can reduce pumping losses by millions of dollars annually. A pretty nice solution.”
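To get a sense of the scale Hamilton describes, consider a back-of-envelope calculation of how much feed water a facility could pre-warm for a reverse-osmosis plant. The sketch below is illustrative only: the 10 MW load, the 10 K temperature rise and the fluid properties are assumptions, not figures from Hamilton’s analysis.

```python
# Back-of-envelope look at the co-location idea quoted above: how much
# seawater would a data center warm on its way to a reverse-osmosis plant?
# All figures are illustrative assumptions, not measured values.

SEAWATER_CP_J_PER_KG_K = 4000.0   # approximate specific heat of seawater
SEAWATER_DENSITY_KG_M3 = 1025.0   # approximate density of seawater

def required_seawater_flow(heat_load_w: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) needed to absorb heat_load_w while warming by
    delta_t_k, from Q = m_dot * c_p * delta_T."""
    return heat_load_w / (SEAWATER_CP_J_PER_KG_K * delta_t_k)

# Hypothetical 10 MW facility warming cold seawater by 10 K
m_dot = required_seawater_flow(10e6, 10.0)             # ~250 kg/s
daily_m3 = m_dot / SEAWATER_DENSITY_KG_M3 * 86400      # ~21,000 m^3/day
print(f"{m_dot:.0f} kg/s, about {daily_m3:,.0f} m^3/day of pre-warmed feed water")
```

Under these assumptions, a single mid-sized facility would warm on the order of 20,000 cubic meters of feed water per day, which suggests why co-locating the two plants is attractive.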
Some data centers already implement cooling infrastructure that can handle salty water. Again, however, the use of seawater—whether directly or after some form of treatment or desalination—is only suitable for data centers near the ocean. The rest must use what’s available locally.
Data Centers and Fresh Water
The Leading Edge Design Group summarizes how cooling infrastructure consumes water: “Many data centers are designed to use cooling towers as heat rejection and as a result they can consume water in a couple of different ways: extracting water from a public source and losing water to the environment through the process of evaporation.” The company cites three alternatives to the use of potable water to cool a data center. The first, mentioned above, is non-potable water. Seawater is impractical for inland data centers, but another possibility is greywater—that is, water that has been used but does not contain human wastes or other impurities that require special treatment. (Think, for instance, of the soapy water that goes down the drain of your kitchen sink.) Accessing such water in amounts sufficient for a data center may be difficult, but Google has made an arrangement with a local water utility to do just that. Smaller companies, however, may lack the clout (and wherewithal) to make such deals.
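To put the cooling-tower losses the design group describes into rough numbers, the following sketch estimates evaporation and makeup water from a facility’s heat load. The latent-heat figure and the cycles-of-concentration value are illustrative assumptions, not vendor data.

```python
# Rough model of evaporative loss: a cooling tower rejects heat mostly by
# evaporating water, and some water must also be drained ("blowdown") to
# keep dissolved solids in check. Figures are illustrative assumptions.

LATENT_HEAT_J_PER_KG = 2.4e6   # approx. latent heat of vaporization near ambient

def tower_makeup_water(heat_load_w: float, cycles_of_concentration: float) -> dict:
    """Estimate evaporation, blowdown and total makeup water in m^3/hour.
    Blowdown is modeled as evaporation / (cycles - 1)."""
    evap_kg_s = heat_load_w / LATENT_HEAT_J_PER_KG
    evap_m3_h = evap_kg_s * 3600 / 1000.0          # 1000 kg/m^3 for fresh water
    blowdown_m3_h = evap_m3_h / (cycles_of_concentration - 1)
    return {"evaporation": evap_m3_h,
            "blowdown": blowdown_m3_h,
            "makeup": evap_m3_h + blowdown_m3_h}

# Example: 1 MW of rejected heat at 4 cycles of concentration
print(tower_makeup_water(1e6, 4.0))   # ~1.5 m^3/h evaporated, ~2 m^3/h makeup
```

Even this modest example implies tens of cubic meters of makeup water per day for each megawatt of heat rejected, which is why greywater sources become attractive at scale.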
A second possibility is the elimination of water from the cooling system: specifically, reliance on air. “Facebook (and others) are using direct air economization designs for data center cooling, where outside air is drawn in and supplied to the IT equipment,” notes the design firm. “In most cases this requires water ‘mist’ to be sprayed into the air stream, but the amount of water required is significantly less than a traditional cooling tower design.” Third, a data center could employ a “closed-loop chiller design with a waterside economizer,” which can reduce (but not necessarily eliminate) water consumption.
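A rough comparison suggests why the mist approach uses so much less water: a tower must evaporate water continuously to reject the full heat load, while an air economizer sprays only enough to knock a few degrees off the intake air, and only during hot hours. The airflow, temperature drop and hot-hour count below are hypothetical values chosen for illustration.

```python
# Sketch comparing year-round cooling-tower evaporation with the part-time
# "mist" (adiabatic) spray in a direct air economizer. All inputs are
# illustrative assumptions.

LATENT_HEAT_J_PER_KG = 2.4e6     # approx. latent heat of vaporization
AIR_CP_J_PER_KG_K = 1005.0       # specific heat of air

def tower_water_m3(heat_load_w: float, hours: float) -> float:
    """Cooling tower evaporates roughly heat / latent heat, all year."""
    return heat_load_w / LATENT_HEAT_J_PER_KG * 3600 * hours / 1000.0

def mist_water_m3(airflow_kg_s: float, delta_t_k: float, hot_hours: float) -> float:
    """Adiabatic spray: water evaporated to cool the intake air stream by
    delta_t_k, i.e. m_water = m_air * c_p_air * delta_T / h_fg,
    needed only during hot hours."""
    kg_s = airflow_kg_s * AIR_CP_J_PER_KG_K * delta_t_k / LATENT_HEAT_J_PER_KG
    return kg_s * 3600 * hot_hours / 1000.0

# Example: 1 MW facility; tower runs 8,760 h/year; mist cools 80 kg/s of
# intake air by 6 K during 800 hot hours a year
print(tower_water_m3(1e6, 8760))        # ~13,000 m^3/year
print(mist_water_m3(80.0, 6.0, 800))    # ~580 m^3/year
```

Under these assumptions the misting design uses roughly a twentieth of the tower’s water, consistent with the “significantly less” the firm describes.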
What to Do When You Need Water
In some deployments, liquid cooling may be non-negotiable; high-density supercomputers are one such case. A FacilitiesNet article, however, offers several measures that can conserve water in the event of a drought or, more generally, improve water efficiency. As with any real-world problem, greater water efficiency will likely involve tradeoffs. Perhaps the most obvious measure is simply to reduce energy consumption: less electricity dissipated as heat means lower cooling requirements. Doing so, however, may affect performance when implemented as a stopgap (e.g., during a drought) rather than as part of a planned efficiency-improvement project. Effectively, this option trades service or capabilities for lower water consumption.
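One way to quantify that tradeoff is through water usage effectiveness (WUE), commonly expressed as liters of water consumed per kilowatt-hour of IT energy. The sketch below estimates the water avoided by shedding load; the WUE value and the loads are assumed for illustration, not drawn from the article.

```python
# Sketch of the energy/water tradeoff: site water use scales with the heat
# rejected, often summarized as WUE (liters per kWh of IT energy). The WUE
# value and loads below are illustrative assumptions.

ASSUMED_WUE_L_PER_KWH = 1.8   # hypothetical site water-usage effectiveness

def water_saved_m3(it_load_kw_before: float, it_load_kw_after: float,
                   hours: float, wue: float = ASSUMED_WUE_L_PER_KWH) -> float:
    """Water avoided (m^3) by running at a lower IT load for `hours`."""
    saved_kwh = (it_load_kw_before - it_load_kw_after) * hours
    return saved_kwh * wue / 1000.0

# Example: shedding 200 kW of a 1 MW load for a 90-day drought window
print(water_saved_m3(1000.0, 800.0, 90 * 24))  # ~780 m^3 saved
```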
Another possibility is lower operating humidity. Here, however, balance is necessary: if the air is too dry, static electricity can become a problem—a nuisance for people, but potentially deadly to sensitive electronic equipment. Standard practices for efficient cooling also help, including simple and inexpensive measures such as removing clutter from the computer room to facilitate air flow. Raising the operating temperature, within equipment warranty guidelines, is another option that reduces the cooling burden. Companies in areas where droughts are a regular concern (e.g., California) should design their data centers to tolerate such conditions, just as they would design for any other likely adverse condition.
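A simple guardrail check can keep such setpoint changes within safe bounds. The envelope values below are placeholders loosely modeled on common guidance (such as ASHRAE’s recommended inlet range); actual limits should come from equipment warranties.

```python
# Guardrail check for temperature and humidity setpoint changes. The
# envelope values are placeholder assumptions; consult your equipment
# warranties and current ASHRAE guidance for real limits.

ENVELOPE = {
    "inlet_temp_c": (18.0, 27.0),            # assumed recommended inlet range
    "relative_humidity_pct": (20.0, 60.0),   # assumed range to limit static risk
}

def setpoint_ok(name: str, value: float) -> bool:
    """Return True if the proposed setpoint falls inside the envelope."""
    low, high = ENVELOPE[name]
    return low <= value <= high

print(setpoint_ok("inlet_temp_c", 26.0))           # True: near the warm edge
print(setpoint_ok("relative_humidity_pct", 15.0))  # False: dry enough for static risk
```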
Writing at HPC Wire, Shaolei Ren notes that supercomputers can use a software-based approach to run workloads at times when water efficiency is maximal. “Unlike the current water-saving approaches which primarily focus on improved ‘engineering’ and exhibit several limitations (such as high upfront capital investment and suitable climate),...software-based approaches [can] mitigate water consumption in supercomputers by exploiting the inherent spatio-temporal variation of water efficiency.” Thus, “the spatio-temporal variation of water efficiency is also a perfect fit for supercomputers’ workload flexibility: migrating workloads to locations with higher water efficiency and/or deferring workloads to water-efficient times.”
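Ren’s idea lends itself to a simple illustration: given a forecast of water efficiency over time (or across sites), defer flexible jobs to the most water-efficient slot they can still meet. The greedy scheduler below is a minimal sketch of that policy; the WUE forecast, the job model and the absence of capacity limits are all simplifying assumptions, not Ren’s actual system.

```python
# Minimal sketch of the software-based approach described above: defer
# flexible jobs to hours (or sites) where forecast water efficiency is
# best. The forecast and job model here are hypothetical.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    energy_kwh: float
    deadline_hour: int   # latest hour slot in which the job may run

def schedule_by_water_efficiency(jobs, wue_forecast):
    """Greedy policy: run each job in the eligible hour with the lowest
    forecast WUE (liters per kWh). Ignores slot capacity, so multiple
    jobs may land in the same hour. Returns (job, hour) picks and the
    total liters consumed."""
    plan, total_liters = [], 0.0
    for job in sorted(jobs, key=lambda j: j.deadline_hour):
        eligible_hours = range(job.deadline_hour + 1)
        hour = min(eligible_hours, key=lambda h: wue_forecast[h])
        plan.append((job.name, hour))
        total_liters += job.energy_kwh * wue_forecast[hour]
    return plan, total_liters

# Example: cooler night hours (0-5) have better WUE than the afternoon
forecast = [1.2, 1.1, 1.0, 1.0, 1.1, 1.3, 1.8, 2.4, 2.6, 2.5]
jobs = [Job("batch-analysis", 500.0, 9), Job("checkpoint-sim", 800.0, 4)]
print(schedule_by_water_efficiency(jobs, forecast))
```

The same greedy choice extends naturally to the spatial case: replace the hourly forecast with per-site WUE values and migrate jobs to the most water-efficient eligible location.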
The least desirable option in the event of a drought is water delivery by truck. This approach is expensive and may even be impossible for a data center that consumes vast amounts of water. Moreover, water restrictions imposed by local authorities may create further hassles.
The drought in California is a stark reminder that data center water consumption is a potential point of failure—particularly in dry regions, although shortages can strike anywhere. Minimizing reliance on water for cooling can help, but some deployments—particularly high-density ones—have little choice. Efforts to use non-potable water have found some success, but their wide-scale feasibility remains doubtful. The threat of drought to data center operations is simply one more consideration that companies must face when designing (and running) a facility.