The growing amount of data in today’s on-demand world is transforming the data center market and the technologies that support it. With continued advances such as cloud computing and the Internet of Things (IoT), the pressure is rising for cloud-based and other central data centers to ensure data is processed and analyzed in real time. The modern data center is therefore evolving and pushing data to the network edge. But as edge deployments tend to be smaller in size, data center managers are starting to see a number of requirements emerge—particularly in the cooling systems that support this type of architecture.
Edge data centers can take many forms, often serving in rugged environments or remote locations: for example, at the edge of a forest to support a series of cell towers, in a hospital to support critical medical equipment or on an oil rig. Owing to the diversity of locations, micro data centers, which range from 1 to 10 IT racks and draw less than 100 kilowatts of power, are a critical part of the edge-computing movement. As a result, data center managers must consider the size of their cooling solution and the space it will occupy.
By moving data centers to the edge, IT managers bring data closer to the end user, reducing latency for large volumes of data and supporting a range of next-generation devices and solutions. What’s more, edge data centers’ flexibility can reduce costs and deployment time—a priority for every data center manager. Data center and IT managers, however, must consider a variety of cooling-related issues during the design phase, including the following:
- Security: Security is an important issue with any data center, but especially an edge deployment. Since this type of system is intended to bring computing power closer to the end user, it’s often located in an area that’s easily accessible by non-IT personnel, such as a retail storage room, increasing risk. Additionally, because a micro data center may require cooling components such as condensers and other plumbing outside its walls, the physical infrastructure becomes easier to vandalize.
- Floor space: Edge systems tend to be small deployments, so floor space is extremely limited. Data center and IT managers must consider the area that a cooling system needs and explore alternative cooling methods to ensure valuable space is reserved for IT power.
- Location: In some cases, cooling solutions create a lot of noise, potentially dictating the location of an edge data center. For example, edge systems with cooling architectures that include a loud condenser shouldn’t be placed in offices where people require a quiet environment. Additionally, cooling solutions must be near a drain to properly dispose of condensation.
- Lead time: Given their small size, edge data centers are expected to enable rapid deployment for immediate access to data. Unlike embedded devices and regional data centers that go through highly complex planning and construction phases, edge data centers can grow quickly to add capacity in diverse environments such as factories and health facilities. Installation of data center components—including cooling solutions—should occur rapidly to reduce or eliminate the wait times typical of traditional large-scale installations.
- Redundancy: A critical factor many data center managers may overlook in an edge deployment is redundancy. Edge deployments collect and process large amounts of mission-critical data. Availability of that data is incredibly important for business operations. Edge data centers are meant to reduce latency, accelerate load times and prevent processing bottlenecks. Companies can’t afford downtime in these facilities.
- Room for growth: Although edge data centers avoid high rack densities, requirements will only expand as demand increases. With the rise of technologies such as artificial intelligence and machine learning, more applications will need server-side processing with ultra-low latency. As a result, cooling systems in an edge deployment must be able to quickly move from low to high density.
Although there are many factors to consider with an edge design, data center and IT managers have several ways to work around these challenges. The following are the top three solutions for data center and IT managers to keep in mind as they evaluate cooling options in edge IT deployments:
1. Overhead and wall-mount cooling: When floor space is tight in a micro edge environment, cooling solutions optimized for performance and space efficiency become essential. In data centers with increasing densities, overhead and wall-mount cooling offers the flexibility for a smaller system design. These cooling systems, which hang from the ceiling, sit on IT racks or mount to an internal wall, requiring no additional floor space in an edge data center. Additionally, by suspending them over contained hot aisles, they eliminate the need for a raised floor, which can impede airflow to equipment. An overhead cooling design maximizes capacity and efficiency, allowing more flexibility with the data center’s white space and allocating more available power to critical IT.
2. Total liquid cooling: Commonly used in high-performance computing, total liquid cooling (TLC) is making its way into traditional data centers because it brings efficiency and environmental flexibility. TLC immerses IT gear in a dielectric fluid or mineral-oil solution that absorbs heat, bringing the cooling much closer to servers and minimizing spacing concerns in a micro edge deployment.
Additionally, TLC doesn’t require large mechanical compressors or air-cooling components. Heat is absorbed into the fluid or oil solution, pumped away to be cooled and returned to the servers. The process mainly relies on “free” outdoor air to cool the liquid, along with small pumps to move it back and forth, eliminating the need for refrigerants that harm the environment.
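The heat-balance behind that pump-and-return loop is simple to sketch. As a rough illustration—the flow rate, temperature rise and fluid properties below are assumptions for a mineral-oil-style coolant, not figures from this article—the coolant flow needed to carry away a given IT load follows from Q = ṁ · c_p · ΔT:

```python
# Rough heat-balance sketch for a liquid-cooling loop (illustrative values).
# Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)

def required_flow_lpm(load_kw, cp_j_per_kg_k, density_kg_per_l, delta_t_k):
    """Coolant flow (liters/minute) needed to carry away load_kw of heat."""
    mass_flow_kg_s = (load_kw * 1000.0) / (cp_j_per_kg_k * delta_t_k)
    return mass_flow_kg_s / density_kg_per_l * 60.0

# Assumed fluid properties (hypothetical, mineral-oil-like):
# c_p ~ 1900 J/(kg*K), density ~ 0.85 kg/L, 10 K rise across the racks.
flow = required_flow_lpm(load_kw=50.0, cp_j_per_kg_k=1900.0,
                         density_kg_per_l=0.85, delta_t_k=10.0)
print(f"~{flow:.0f} L/min for a 50 kW load")
```

Under these assumed numbers, a modest pump moving under 200 liters per minute handles a 50-kilowatt load—one reason the approach avoids large mechanical compressors.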
Since TLC requires no additional fans or air cooling, it can survive far longer in a power outage. If the liquid supply can’t be immediately replenished during an outage, the data center’s servers can expel heat for approximately one hour before the liquid becomes too hot to cool the load. That window leaves plenty of time to shift to backup power or shut down the IT equipment to avoid overheating.
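The one-hour ride-through figure can be sanity-checked with the same heat-capacity arithmetic. In this hedged sketch, the tank volume, fluid properties, allowable temperature rise and load are all illustrative assumptions:

```python
# Thermal ride-through estimate: how long an immersion tank can absorb
# heat before the fluid exceeds its allowable temperature.
# t = (m * c_p * delta_T_allowed) / Q

def ride_through_minutes(fluid_liters, density_kg_per_l, cp_j_per_kg_k,
                         delta_t_allowed_k, load_kw):
    """Minutes until the fluid warms by delta_t_allowed_k at load_kw."""
    mass_kg = fluid_liters * density_kg_per_l
    energy_j = mass_kg * cp_j_per_kg_k * delta_t_allowed_k
    return energy_j / (load_kw * 1000.0) / 60.0

# Assumed: 2000 L tank, mineral-oil-like fluid, 25 K of temperature
# headroom, 20 kW of IT load (all hypothetical).
minutes = ride_through_minutes(fluid_liters=2000.0, density_kg_per_l=0.85,
                               cp_j_per_kg_k=1900.0,
                               delta_t_allowed_k=25.0, load_kw=20.0)
print(f"~{minutes:.0f} minutes of ride-through")
```

With these particular assumptions the estimate lands near an hour, consistent with the figure above; a larger tank or lighter load stretches the window further.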
Particularly beneficial to an edge design, TLC doesn’t require hot-aisle/cold-aisle configurations, increasing flexibility for rack placement. It’s also helpful in reducing noise, as it requires no loud evaporators.
3. Prefabricated data centers: In a prefabricated data center, everything from power and cooling modules to IT space can be arranged ahead of time, built in a factory, or even installed in a container and assembled on site. Prefabricated data centers can include any combination of IT racks along with power and cooling solutions that suit the needs of any deployment. This approach is helpful when available spacing and redundancy are critical. The data center’s components are optimally integrated to allow for predictability and design flexibility, giving data center managers a variety of options as cooling requirements change and advance.
With a prefabricated data center, traditional cooling systems can be retrofitted to suit an edge environment, delivering greater capacity to existing infrastructure while deploying quickly. The design can also scale as needed. Lead time and reliability are top priorities at the edge, and prefabrication is an option for data center and IT managers looking to easily add blocks and redundancy.
As IT and data center managers build an edge-computing strategy, they must consider today’s cooling requirements and how they could change in the future. Edge data centers continue to pack more processing power into small spaces to bring valuable data closer to the user. It’s important for data center and IT managers to regularly evaluate whether their cooling systems will support increasingly dense servers, both now and in the long term. With edge IT environments, these evolving requirements should help drive the decision on which cooling solution makes the most sense for a specific design, as well as ensure current equipment—including cooling solutions—will satisfy future demands.
Leading article image courtesy of Intel Free Press under a Creative Commons license
About the Author
Steven Carlini is the Senior Director of Data Center Global Solutions for Schneider Electric. Steven is responsible for developing integrated solutions and communicating the value for Schneider Electric’s data center segment, including enterprise and cloud data centers. A frequent speaker at industry conferences and forums, Steven is an expert on the foundation of data centers, which includes power and power distribution, cooling and technical cooling, rack systems, physical security, and DCIM solutions that improve availability and maximize performance. He holds a BS in electrical engineering from the University of Oklahoma and an MBA in international business from the CT Bauer School at the University of Houston.