Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.
This week, Industry Outlook asks Intel’s Jeff Klaus about the growing demand for bandwidth and how data centers are responding. As General Manager of Intel Data Center Solutions, Jeff leads a global team that designs, builds, sells and supports data center software products. Since joining Intel in 2000, his accomplishments have been recognized by Intel and the industry. An accomplished speaker, Jeff has presented at such industry forums as Gartner Data Center, AFCOM’s Data Center World, the Green IT Symposium and the Green Gov conference. He has authored articles on data center power management in Forbes, Data Center Post, IT Business Edge, Data Center Knowledge, Information Management and Data Centre Management. He currently serves on the board of directors for the Green IT Council. Jeff earned his BS in finance at Boston College and his MBA in marketing at Boston University.
Industry Outlook: How does consumers’ insatiable need for connectivity affect the data center?
Jeff Klaus: If consumers are always on, so is the data center. From mobile applications to wearables, the number of always-connected devices continues to grow. Many consumers don’t realize that data centers are at the forefront of this always-on movement, behind wicked-fast music downloads, YouTube video streaming and an instantaneous ride from Uber. Without the compute capacity and agile power-management options, these services would simply be impossible. Take Pokémon Go, for example: the entire system runs on remote servers, and when demand unexpectedly surged to 20 million daily active users, data center operators had to scramble to manage the traffic influx.
IO: This year alone, an estimated 6.4 billion connected devices are in use, a 30% increase from 2015, so there’s no sign of a slowdown in consumers’ obsession with connectivity. How are IT and data center operators dealing with this bandwidth paradox?
JK: Today, data centers are being flooded with traffic from the spike in connected devices—whether smartphones, tablets, watches or cars. As a result, operators are using a combination of manual and automatic processes for monitoring and managing infrastructure. To account for the rapid increase in IoT technology, not only must today’s data center operators forecast for the highs and lows, but they must also implement the latest DCIM technology, which employs real-time data and analytics, into their servers, racks and overall data center environment. With connected devices, real-time data and actionable insights are pivotal in optimizing the data center to keep up with the increase in bandwidth demand.
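The monitoring approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration of real-time threshold checks on rack telemetry; the data structure, field names and limits are invented for the example (the 27 °C inlet bound follows the commonly cited ASHRAE recommended envelope), and a production DCIM suite would pull these readings from sensor APIs and apply analytics far beyond simple thresholds.

```python
from dataclasses import dataclass

@dataclass
class RackReading:
    """One telemetry sample from a rack (hypothetical schema)."""
    rack_id: str
    power_watts: float
    inlet_temp_c: float

# Illustrative limits; real tools derive these from vendor specs
# and historical baselines per facility.
POWER_CAP_WATTS = 8000.0
INLET_TEMP_LIMIT_C = 27.0

def flag_alerts(readings):
    """Return (rack_id, reason) pairs for racks outside safe limits."""
    alerts = []
    for r in readings:
        if r.power_watts > POWER_CAP_WATTS:
            alerts.append((r.rack_id, "power over cap"))
        if r.inlet_temp_c > INLET_TEMP_LIMIT_C:
            alerts.append((r.rack_id, "inlet temp high"))
    return alerts

readings = [
    RackReading("rack-01", 7600.0, 24.5),
    RackReading("rack-02", 8350.0, 26.0),   # over the power cap
    RackReading("rack-03", 7900.0, 28.1),   # running hot
]
print(flag_alerts(readings))
# → [('rack-02', 'power over cap'), ('rack-03', 'inlet temp high')]
```

The point of the sketch is the feedback loop: continuously sampled power and thermal data turn into actionable flags an operator (or an automated policy) can act on before demand spikes become outages.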
IO: How is the bandwidth paradox further complicated by hugely popular games such as Pokémon Go?
JK: Data center operators have always run into many of the same problems in their facilities. Heat is the top cause of downtime, and cooling infrastructure can account for up to 50 percent of a facility’s energy consumption, so when you crunch the numbers, an investment in cooling equipment can yield sizable savings. Capacity planning, however, plays a much larger role in the case of Pokémon Go. Enterprises must be able to predict fluctuations in usage accurately, but when an augmented-reality (AR) phenomenon draws 75 million active users instead of the forecasted 20 million to 50 million, companies need the ability to adjust on the fly.
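As a back-of-the-envelope illustration of that forecasting gap: the per-server capacity and headroom figures below are invented for the example (they are not Niantic’s or Intel’s numbers); only the 50 million forecast ceiling and 75 million actual users come from the discussion above.

```python
import math

# Toy capacity-planning arithmetic. USERS_PER_SERVER and the 25%
# headroom are illustrative assumptions, not real-world figures.
USERS_PER_SERVER = 50_000

def servers_needed(active_users, headroom=0.25):
    """Servers required to carry a user count plus a safety margin."""
    return math.ceil(active_users * (1 + headroom) / USERS_PER_SERVER)

planned = servers_needed(50_000_000)  # high end of the 20M-50M forecast
actual  = servers_needed(75_000_000)  # what the launch actually drew
print(planned, actual)                # → 1250 1875
```

Even with generous headroom baked into the plan, a 50 percent overshoot in users translates directly into a 50 percent shortfall in provisioned capacity, which is why elastic, on-the-fly adjustment matters more than a precise point forecast.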
IO: As the world continues to become more connected, the role of the data center operator is obviously evolving. What tools do today’s data center operators and enterprises need to handle unexpected spikes in demand for capacity?
JK: In today’s world of on-demand connectivity, data center operators must be on their toes. Those who employ IoT technology in their own data centers and monitor in real time can more actively manage the unexpected changes a facility encounters in this perpetual bandwidth paradox.
The conclusion is that handling unexpected spikes will be a continuous effort. Connected devices have already seen a 30 percent increase, not counting viral hits like Pokémon Go, so data center operators will need to shift their attention to implementing real-time analytics. With the connected-everything culture, we’re clearly in a large transition period in which implementing DCIM and power-management solutions is the best (and perhaps only) option for increasing compute capacity at all levels.
IO: Do you foresee a point at which bandwidth constraints lead to usage caps, "pay per gigabyte" or some other scheme that limits users' appetites? If so, what might such a scheme look like, and what would be the data center's role, if any?
JK: We’ve actually seen companies take steps to make sure this doesn’t become an issue. For example, a popular video-streaming site is already making deals with ISPs to guarantee the bandwidth required for the performance it needs. It wouldn’t be as successful if end users had to worry about how many gigabytes they consume as they binge-watch Game of Thrones, for example.
For most companies, bandwidth constraints and usage caps would pose a major challenge to their business models. If companies didn’t reach bandwidth deals with AT&T, Comcast and the like, engineers would be tasked with finding new ways to compress and transport video and other content. This is why most Internet companies maintain large technical staffs: to avoid handing off to end users bothersome requirements that would degrade their experience. Instagram, Facebook and Twitter wouldn’t be nearly as popular if they had to limit their users’ experience. For any company to be successful, removing constraints on the user experience will always be priority number one.
IO: Does the impending end of Moore's Law (some say it has arrived already) have any impact on the data center's ability to cope with demand? Or is the matter of bandwidth not really a silicon issue?
JK: A quick Google search returns nearly 4,620,000 results on the death of Moore’s Law, so how impending is it really? At some point we may reach the theoretical maximum frequency for a CPU core, or the maximum number of cores in a processor, but when you view Moore’s Law as a general law of compute capability, engineers will always find new ways to optimize, even when a direct option is unavailable.
Hypothetically, if we begin to run out of compute frequency, engineers will work to make the memory faster so we can process more of it in the same period. In the next cycle, they could expand how much memory is available at that super-fast speed. By the third cycle, there could be some new compute innovations to adapt to the changes made in earlier iterations. This is the world the data center lives in: constant innovation on the basis of needs.