This article is the first analyst insight of a four-part series from IHS Markit, in which the Data Center Infrastructure team will explore the potential benefits of edge data centers and what they mean for related infrastructure markets. The markets to be assessed are UPS hardware, containerized data centers, and multitenant data centers.
The rise of colocation and cloud services has been well documented. For years, colocation revenue has grown approximately 10% per year, with cloud growth rates more than double that figure, as the world’s increasing digitization expands the global need for compute and storage. These business models were founded on the cannibalization of enterprise data centers, and they continue to grow at the expense of enterprise-owned facilities. IHS Markit predicts this trend will continue and that the market will see further consolidation of small and medium-size data centers, with businesses outsourcing their IT assets and/or IT management to colocation and cloud-service providers. As a result, the data center market is shifting, and will continue to shift, toward fewer but larger data centers. These facilities are generally located in the Tier One cities around the world that have the greatest computing demand (Silicon Valley, Dallas, Chicago, London, Amsterdam, Frankfurt and Singapore, to name a few).
Although these cities account for a large share of data center compute needs, they leave much to be desired in other locations, where consumer and distributed compute needs continue to grow thanks to the likes of Netflix, Pokémon Go (and other location-based services) and the Internet of Things (IoT). These applications are less well served by the often centrally located colocation and cloud-service data centers, owing to higher latency and general network congestion during peak usage times. So how is the data center market gearing up to address these types of applications, which are starting to take center stage? The answer lies in edge-of-network data centers.
What are these data centers? IHS Markit defines edge-of-network data centers, or simply edge data centers, as small data centers (about 10–100 kW IT load) geographically located close to end users and intended to reduce latency, decrease network congestion, keep mission-critical applications on premises, and/or act as a data-aggregation and content-caching point between a user and a central data center. In simpler terms, it’s a data center “in between” a central data center and the end user, or “what’s left” on premises after a company outsources the management of its computing and storage needs to colocation and cloud-service providers.
But is the need for edge data centers really that strong? Maybe so. The recent release of the immensely popular Pokémon Go app highlighted this potential need. For those who have managed to escape the barrage of news related to Pokémon Go, it’s an augmented-reality game, playable on your smartphone, that combines the real world around you using Google Maps and your camera with the world of Pokémon. Those who have played Pokémon Go from the start are no strangers to the “server issues” that Niantic (the app’s creator) experienced, sporadically leaving many players and regions unable to play for the first few weeks. Even to this day, approximately a month after its release, speculation suggests that Niantic has removed features from the game to ease the IT load caused by the overwhelming demand. By looking at Niantic’s job listings, DatacenterDynamics theorized that the company is using the Google Cloud Platform for its computing needs. This theory makes sense, as Niantic started as an internal Google group, and its founder was also the founder of Google Earth. Although Google keeps secret the exact size and location of its Cloud Platform data centers, it’s fair to say the company builds them in hyperscale fashion, with larger data centers often residing in more-central locations. The hosting of Niantic’s Pokémon Go application would then be limited to this geographic scope. While there are multiple paths to resolving Niantic’s early performance issues, one could be a shift in computing architecture, such as using edge data centers to reduce the latency and network congestion of real-time data traveling between the player’s phone and a large-scale data center. An edge data center could act as a caching and data-aggregation point between the player and the larger data center.
The Internet of Things (IoT) makes another clear case for edge data centers, with the installed IoT base expected to more than double from 2017 to 2022. Nodes and sensors are making their way into everything from factory equipment to home appliances, and they are generating an immense amount of data that must be stored and analyzed. A central data center, which would likely service a large geographical scope, could connect to millions of these nodes and sensors. A constant stream of data from the nodes and sensors across this large region would congest the network and could result in poor connections. Although the data may need to be collected in real time, the analyses may not need to run in real time. Rather, data could be aggregated in edge data centers and sent to a central data center during non-peak-traffic times. In this case, minor computing could take place at an edge site while the analysis of larger data sets takes place at the central data center after the transfer.
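The aggregate-at-the-edge pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation: the class name, site label and sensor IDs are hypothetical, and the "flush" simply returns the summary where a real deployment would transmit it upstream during an off-peak window.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class EdgeAggregator:
    """Buffers raw sensor readings at an edge site and forwards
    compact summaries to a central data center in batches."""
    site: str
    buffer: list = field(default_factory=list)

    def ingest(self, sensor_id: str, value: float) -> None:
        # Readings are collected in real time at the edge site.
        self.buffer.append((sensor_id, value))

    def summarize(self) -> dict:
        # Minor computing happens locally: per-sensor averages replace
        # the raw stream before anything crosses the wide-area network.
        per_sensor = {}
        for sensor_id, value in self.buffer:
            per_sensor.setdefault(sensor_id, []).append(value)
        return {sid: mean(vals) for sid, vals in per_sensor.items()}

    def flush_to_central(self) -> dict:
        # Run during a non-peak-traffic window: ship the summary
        # upstream (here just returned) and clear the local buffer.
        summary = self.summarize()
        self.buffer.clear()
        return summary


edge = EdgeAggregator(site="plant-7")  # hypothetical factory site
for v in (20.1, 20.3, 20.5):
    edge.ingest("temp-1", v)
edge.ingest("temp-2", 18.0)
batch = edge.flush_to_central()
# batch maps each sensor to its average; the raw points never leave the edge
```

The design choice is the one the paragraph describes: raw data stays and is reduced at the edge, and only the much smaller summary crosses the congested wide-area link, at a time of the operator's choosing.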
Another application bringing attention to the potential benefits of edge data centers is streaming content, such as Netflix, HBO Go, YouTube and Hulu. Streaming content is as popular as ever, with thousands of users accessing the same content, often at the same time, and creating network congestion that can reduce the quality of the end-user experience. Anyone who has eagerly awaited the Netflix original series House of Cards has probably experienced or heard of others experiencing streaming problems. Netflix is aware of this problem and is tackling it head-on with its “Netflix Open Connect.” According to the company, “The goal of the Netflix Open Connect program is to provide our millions of Netflix subscribers the highest-quality viewing experience possible. We achieve this goal by partnering with Internet Service Providers (ISPs) to deliver our content more efficiently. We partner with hundreds of ISPs to localize substantial amounts of traffic with Open Connect Appliance embedded deployments.” In essence, this program gives ISPs a “Netflix box” to host content closer to the end user and locally contain the Internet traffic it generates. This strategy keeps Internet traffic off broader networks and, as Netflix stated, delivers to the viewer the best possible experience. This Netflix box is a perfect example of hardware that can be, and is, hosted in edge data centers.
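The caching behavior behind an appliance like this can be sketched as a small least-recently-used cache. This is a toy model under stated assumptions, not Netflix's Open Connect software: the class, the `origin_fetch` callable and the title key are all invented for illustration, with the "origin" standing in for the central data center.

```python
from collections import OrderedDict


class EdgeCache:
    """A tiny least-recently-used cache, standing in for the kind of
    content-caching appliance an ISP might host at the network edge."""

    def __init__(self, capacity: int, origin_fetch):
        self.capacity = capacity
        self.origin_fetch = origin_fetch  # callable reaching the central data center
        self.store = OrderedDict()
        self.origin_hits = 0  # how often we had to go upstream

    def get(self, title: str) -> bytes:
        if title in self.store:
            # Served locally: no wide-area traffic for popular titles.
            self.store.move_to_end(title)
            return self.store[title]
        # Cache miss: one trip to the origin, then the copy stays local.
        self.origin_hits += 1
        content = self.origin_fetch(title)
        self.store[title] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
        return content


def origin(title: str) -> bytes:
    # Stand-in for the central data center serving the full catalog.
    return f"<video bytes for {title}>".encode()


cache = EdgeCache(capacity=2, origin_fetch=origin)
for _ in range(1000):
    cache.get("House of Cards S1E1")  # hypothetical title key
# 1,000 viewer requests cost a single origin fetch
```

The point mirrors the paragraph above: once a popular title is pulled to the edge, every subsequent local viewer is served without touching the broader network.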
With the case for edge data centers on center stage, what does it mean for related data center infrastructure markets? Stay connected with the Edge Data Center series, in which the IHS Markit Data Center Infrastructure team will explore this very question. The markets we’ll explore are UPS hardware, containerized data centers and multitenant data centers.
About the Author
Lucas Beran is an analyst in the Data Centers, Cloud & IT Infrastructure group at IHS Markit. The team produces high-quality market intelligence covering uninterruptible power supplies (UPSs), power distribution, precision cooling, rack enclosures, colocation, structured cabling and service/support. Lucas joined IHS Markit in early 2016 and has since been responsible for producing semiannual reports covering the global UPS market. Before joining IHS Markit, he earned a degree in economics from Boise State University and worked in IT in Copenhagen, Denmark, including server-room management.