Like the scene in nearly any sci-fi movie when the starship jumps to hyperspace, it feels like the future is accelerating toward us in a blur. Yet IT leaders are being asked to prepare for change that may be unforeseeable from the CIO’s helm.
The good news is that established and emerging solutions can help usher in technologies we haven’t yet envisioned. Many of them fall into the fast-gelling field of the software-defined data center, a concept gaining traction in the market, although the varying maturity levels of its components, from virtual storage to software-defined power, make adoption complex.
Expect the Unexpected
“Future proofing” was once synonymous with long-range planning—essentially, life-cycle management that enables data center facilities and hardware investments to deliver full value before redevelopment or replacement. The definition has steadily evolved to connote a flexible, resilient architecture capable of supporting accelerated business-driven digital transformation.
Of course, product-specific choices—such as whether to go with up-and-comer Nutanix or the more established EMC for a given appliance—will remain. Those building or retrofitting data center facilities must carefully consider cabling, power and cooling with a 6- to 15-year horizon in mind.
But even the most rigorous cost-benefit analyses, needs projections and product evaluations won’t by themselves produce a future-proof IT infrastructure. There are simply too many unknowns beyond mere capacity forecasts. Perhaps Benn Konsynski of Emory University’s Business School put it best in an MIT Sloan Management Review paper:
“The future is best seen with a running start…. Ten years ago, we would not have predicted some of the revolutions in social [media] or analytics by looking at these technologies as they existed at the time…. New capabilities make new solutions possible, and needed solutions stimulate demand for new capabilities.”
If the IT architect or data center team can’t know in detail what the next revolution will look like—be it in machine learning, artificial intelligence or a field existing only in the most esoteric research—how can a data center infrastructure conceived today support it down the road?
The software-defined data center (SDDC), in which infrastructure is virtualized and delivered via pooled resources “as a service,” promises to deliver the agility businesses seek. Specifically, it offers the following advantages:
- Promotes a business-focused approach rather than a component-centric one, in which technology constrains business decisions and engenders server, network and storage silos unable to adapt responsively.
- Reduces the need for specialized components while enabling each application to reside on the most appropriate underlying infrastructure. The data center can employ mismatched products it already owns, take advantage of commodity hardware in a converged environment and ultimately move toward easy-to-maintain, off-the-shelf hyperconverged devices.
- Facilitates automated resource provisioning and management, which frees network staff from tedious, repetitive tasks; reduces staffing requirements while shifting the balance toward core talent; and ensures standardized deployments, faster service delivery and SLA achievement (see the provisioning sketch below the list).
- Simplifies management under a single platform, which replaces the various tools previously required to manage routers, switches, storage devices and other hardware, and eliminates the need for specialized training on vendor-specific consoles.
- Maximizes utilization rates to drive the greatest return on each investment and minimize stranded capacity and wasted IT funds—a crucial consideration when IT is being asked to do more with less.
- Increases adaptability through a holistic infrastructure designed to support any workload optimally within a dynamic environment.
- Creates a location-agnostic data center, which can span multiple physical sites and combine various service providers, a boon for contract negotiations. SDDC can combine the public cloud for scalability and turnkey development environments with on-premises capabilities that deliver the security, latency and other advantages certain applications need.
- Increases resiliency by compensating for hardware and software failure and offers security and disaster-recovery advantages.
- Provides elastic computing in a manner that frees the IT department to focus on strategic projects and innovation.
Flexible, agile and capable of supporting true ITaaS and rapid or continuous deployment (CD), SDDC may be ideal for businesses needing to “fail fast and fix it” in order to capture or maintain competitive advantage.
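To make the pooled, “as a service” model concrete, here is a minimal sketch of what policy-driven provisioning against an SDDC control plane might look like. The SddcClient class, its provision method and the WorkloadSpec fields are hypothetical placeholders invented for illustration, not any vendor’s actual API:

```python
# Hypothetical sketch of SDDC-style provisioning: the request states
# what the workload needs, and the control plane decides which pooled
# compute, storage and network resources satisfy it. SddcClient,
# WorkloadSpec and their fields are invented for illustration; real
# platforms expose their own APIs.
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    name: str
    vcpus: int
    ram_gb: int
    storage_gb: int
    storage_tier: str      # e.g., "ssd" or "capacity"
    network_policy: str    # e.g., "dmz" or "internal"

class SddcClient:
    """Stand-in for a software-defined control plane."""

    def provision(self, spec: WorkloadSpec) -> str:
        # A real implementation would place the VM, carve a virtual
        # disk out of the storage pool and attach a virtual network
        # segment, all from this one declarative request.
        print(f"Placing {spec.name}: {spec.vcpus} vCPU / {spec.ram_gb} GB "
              f"on tier '{spec.storage_tier}' with policy '{spec.network_policy}'")
        return f"vm-{spec.name}"

client = SddcClient()
client.provision(WorkloadSpec("order-api", vcpus=4, ram_gb=16,
                              storage_gb=200, storage_tier="ssd",
                              network_policy="internal"))
```

The key point is that the request describes what the workload needs; placement across the pooled compute, storage and network resources is the platform’s job.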
The Long and Winding SDDC Roadmap
As a concept, SDDC reaches back to at least 2012, when it became a hot topic at storage and networking conferences. But it’s an evolving field on which many IT pros have nonetheless pinned their hopes.
In 2015, Gartner analyst Dave Russell cautioned that “[d]ue to its current immaturity, the SDDC is most appropriate for visionary organizations with advanced expertise in I&O engineering and architecture.” At the same time, Gartner predicted that 75% of Global 2000 enterprises would require SDDC by 2020, putting us about halfway along the trajectory to widespread adoption, at least among large businesses. A 2016 MarketsandMarkets study supported such a “hockey stick” growth chart, projecting a 26.5% CAGR that would bring the market to $83.2 billion by 2021.
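As a rough sanity check, compounding the study’s published CAGR backward implies the market roughly tripling over five years. The 2016 base below is back-calculated for this sketch; it is not a figure from the study:

```python
# Back-of-the-envelope check on the MarketsandMarkets projection.
# The 2021 target and CAGR come from the study as cited above; the
# 2016 base is inferred here, not taken from the study itself.
target_2021_usd_bn = 83.2    # projected market size in 2021 ($B)
cagr = 0.265                 # 26.5% compound annual growth rate
years = 5                    # 2016 through 2021

implied_2016_base = target_2021_usd_bn / (1 + cagr) ** years
print(f"Implied 2016 market size: ${implied_2016_base:.1f}B")
# ~$25.7B, i.e., the market more than triples in five years,
# which is the "hockey stick" shape referenced above.
```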
By most measures, SDDC is at a transitional phase, where virtual servers and hyperconverged appliances are maturing. But pushing the software-defined model to less charted realms, such as software-defined power and self-aware cooling, remains mostly on the whiteboard.
A conservative adoption strategy can track the changing status of emerging technologies, leaving experimentation to those with the funds and inclination to undertake it and allowing time for the winning and losing technologies in each area to shake out. This approach provides greater clarity for those positioning themselves a step back from the bleeding edge.
Server virtualization, for example, is reaching its peak and has become the de facto choice for organizations of all sizes. Software-defined storage falls in line as well, with technologies frequently offering a favorable upgrade pathway from traditional solutions.
Software-defined networking is gaining stability as well: the explosion in global IP traffic (projected by Cisco to reach 3.2 zettabytes by 2021), coinciding with rapid growth in east-west data center traffic, is necessitating advancements. Technologies such as VMware’s NSX, along with network functions virtualization (NFV), are deemed by most experts mature enough to serve as a primary software-defined networking backbone.
These developments provide viable SDDC entry points for the enterprise, as well as for IT professionals who want to future-proof their skills. For instance, server virtualization is no longer optional. Reaching 90% virtualization is a good indicator that the enterprise is ready for the transition to SDDC.
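As a quick illustration of that 90% rule of thumb, the readiness check below uses invented workload inventory counts:

```python
# Quick check of the 90% rule of thumb cited above. The workload
# inventory counts are invented for this sketch.
virtualized, bare_metal = 412, 38
ratio = virtualized / (virtualized + bare_metal)
verdict = "meets the 90% readiness threshold" if ratio >= 0.90 else "falls short of it"
print(f"{ratio:.0%} of workloads virtualized: {verdict}")
```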
From there, converged racks of software-defined storage, networking and compute, as well as hyperconverged appliances, are also enterprise ready. Hyperconvergence, whether in its current vendor-specific form or a more vendor-agnostic iteration, will likely be a requirement as enterprises absorb the flood of data from IoT, respond to accelerating market changes with continuously deployed solutions, and add capacity to serve internal and external customers. Its power comes from being preconfigured and engineered for scalability: interoperable modules are added under a hypervisor for Lego-esque expansion, as the sketch below illustrates.
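Here is a minimal model of that modular scale-out, with node specifications assumed for illustration rather than drawn from any vendor’s data sheet:

```python
# Minimal model of hyperconverged, Lego-esque scale-out: each
# appliance node ships with a fixed bundle of resources, and the
# cluster grows by adding nodes. Node specs are illustrative
# assumptions, not any vendor's actual SKU.
from dataclasses import dataclass

@dataclass
class Node:
    cores: int = 24
    ram_gb: int = 256
    raw_storage_tb: float = 10.0

def cluster_capacity(nodes: list[Node], replication_factor: int = 2) -> dict:
    """Aggregate capacity; usable storage is reduced by the extra
    copies the HCI layer keeps for resiliency (RF2 assumed here)."""
    return {
        "cores": sum(n.cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "usable_tb": sum(n.raw_storage_tb for n in nodes) / replication_factor,
    }

cluster = [Node() for _ in range(4)]  # starting block of four nodes
print(cluster_capacity(cluster))      # {'cores': 96, 'ram_gb': 1024, 'usable_tb': 20.0}

cluster.append(Node())                # expansion: snap in one more module
print(cluster_capacity(cluster))      # {'cores': 120, 'ram_gb': 1280, 'usable_tb': 25.0}
```

Compute and memory grow linearly with node count, while usable storage is discounted by the redundant copies the HCI layer keeps for resiliency, one reason ROI calculations tend to favor larger, central deployments.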
Despite its potential, the transition to hyperconvergence may not be complete until 2025 or later. Even then, the chances of 100% hyperconvergence are essentially nil, as it is inappropriate for some applications; ROI is best for central workloads, for example, but dwindles at smaller scales. As edge computing takes hold, IoT-centric and other technologies that cannot be incorporated into such a virtual box will temper hyperconvergence adoption.
SDDC’s Final Frontier?
An important next stage in SDDC’s implementation will be bringing total virtualization to the data center to achieve the full potential of “software-defined everything.” Through virtualization, utilization rates and equipment densities have multiplied; in a sense, the data center’s space problem has been solved. Miniaturization via on-premises hyperconvergence, especially when paired with cloud overflow, is keeping pace with rising demand.
With these changes, the pressure on power and cooling systems has only grown. Emerging possibilities in software-defined power could soon put spare or stranded capacity to use. With dynamic redundancy, power systems currently running at only 50% utilization may be able to approach 100%, yielding operational and capital savings. Although such developments have been largely vaporware to date, hyperscaler technology and data center infrastructure management (DCIM) tools are only now making it possible to commercialize this next stage of data center virtualization.
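To see where that stranded 50% comes from, consider a dual-path (2N) power design: either feed must be able to carry the entire load alone, so neither can be loaded past half its rating in normal operation. A minimal sketch of the arithmetic, with illustrative numbers rather than measurements from any real facility:

```python
# Illustrative arithmetic for stranded capacity under 2N power
# redundancy. Numbers are assumptions for this sketch, not data
# from any real facility.
feed_rating_kw = 500.0   # rating of each of the two feeds
load_kw = 480.0          # total IT load, shared across both feeds

# With static 2N redundancy, either feed must be able to carry the
# full load if the other fails, capping each near 50% utilization.
per_feed_load_kw = load_kw / 2
static_utilization = per_feed_load_kw / feed_rating_kw   # 0.48

# Software-defined power aims to treat the reserved headroom as a
# pool, lending it to lower-priority or interruptible loads until a
# failure actually occurs ("dynamic redundancy").
stranded_kw = 2 * feed_rating_kw - load_kw               # 520 kW idle
print(f"Each feed runs at {static_utilization:.0%}; "
      f"{stranded_kw:.0f} kW of rated capacity sits in static reserve.")
```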
The Bottom Line
The SDDC transition already underway holds great promise to boost enterprises’ ability to absorb technological and business-driven change. Will developments occur beyond the software-defined revolution? Absolutely. But for IT professionals standing on the bridge and navigating toward tomorrow, harnessing the power of SDDC is likely the best option for future-proofing the data center of today.
About the Author
Paul Mercina brings over 20 years of experience in IT-center project management to Park Place Technologies, where he has been a catalyst for shaping the evolutionary strategies of the company’s offering, tapping key industry insights and identifying customer pain points to help deliver the best possible service. A true visionary, Paul is currently implementing systems that will allow Park Place to grow and diversify its product offering on the basis of customer needs.