The explosion of data, driven by artificial intelligence, machine learning and blockchain applications, has propelled enterprise data centers from monolithic, centralized architectures to decoupled, decentralized deployments. Not only must data centers keep up with the rapid pace of innovation, they must also continue to break geographical barriers to satisfy global business requirements. Software-defined compute (virtualization) is already ubiquitous, and software-defined networking (SDN) and software-defined storage (SDS) are growing rapidly.
As is true of any disruptive technology, software-defined solutions have come under tremendous scrutiny and have suffered a few setbacks. SDN was deemed hard to secure, and SDS was branded too costly and cumbersome to manage. Traditional IT teams were unaccustomed to the additional challenges that came with these solutions: sourcing and upgrading hardware separately from software, securing multiple loosely coupled components, and tuning applications to exploit the full potential of software-defined designs. This situation led to a move toward purpose-built, specialized solutions that sacrifice scalability and cost effectiveness for ease of management, and the dream of truly software-defined infrastructure got lost along the way.
There are numerous challenges when building next-generation data centers:
- Hyperscale or hyperconverged. This is one of the hardest choices to make. Hyperscale deployments give you cloud-like flexibility when you’re supporting a wide range of applications. According to Stratistics MRC, the global hyperscale data center market is expected to grow from $20.24 billion in 2016 to $102.19 billion by 2023, a CAGR of 26.0 percent. Hyperconverged deployments, on the other hand, are easier to manage and work well for point uses with limited data-set requirements, such as virtual desktop infrastructure (VDI), test/dev, and remote office/branch office (ROBO). According to IDC, the largest segment of software-defined storage is hyperconverged infrastructure (HCI), which boasts a five-year CAGR of 26.6 percent and revenue forecast to reach $7.15 billion in 2021.
- Hardware vendor. Selecting a hardware vendor is generally governed by the CPR theorem, which says you can guarantee only two of the following three: cost effectiveness, performance and reliability. You need the flexibility to try different hardware vendors (if needed) without ripping out your application workflow.
- Cloud compatibility. Most enterprises will inevitably move a portion of their compute and storage to the cloud. According to a state-of-IT report, about 70 percent of the companies surveyed said they would increase cloud spending in 2018.
But your cloud strategy must also address the challenges that come with such easy provisioning of resources. If left unchecked, your cloud bills can quickly become overwhelming. According to RightScale, roughly 35 percent of cloud-computing spending is wasted on instances that are overprovisioned or left unoptimized.
That’s why we must move toward the next step in data center evolution, simultaneously strengthening modern applications and supporting traditional workloads. Let’s call this next stage of evolution hypercloud, defined by the following minimum set of requirements:
- Freedom of deployment architecture. Any data center deployment has three main components: networking, storage and compute. Over the past decade, multiple configurations have been designed to put these components together. Hyperscale and hyperconverged architectures are the most common, and the dominant theme in both is a software-centric approach. Each has its pros and cons, but almost all vendors force you to pick just one. A hypercloud infrastructure gives you the flexibility to be hyperscale or hyperconverged as needed. It treats networking, storage and compute resources as building blocks (similar to Legos!) that can be composed in multiple ways (see the minimal sketch after this list).
- Freedom to choose hardware. Most software-defined solutions force you to buy certain hardware types or buy from specific hardware vendors. A hypercloud infrastructure lets you innovate faster and get ahead of the competition by adopting hardware innovations sooner rather than later. For instance, you should be able to choose from commodity to high-end flash memory, 1 Gbps to 40 Gbps (or faster) networking, and tens to hundreds of CPU cores.
- Freedom to consume the public cloud. Either you already have a cloud presence or you will soon. With cloud deployments, two of the biggest mistakes you can make are underestimating total cost and locking yourself into a single cloud vendor. Hypercloud gives you the flexibility to commoditize cloud resources by seamlessly supporting multiple cloud vendors and enabling data movement across them. This approach involves more than infrastructure support; it demands application-level design that is loosely coupled, built on open standards and cloud agnostic.
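To make the building-block and cloud-agnostic ideas above concrete, here is a minimal Python sketch. All of the names (ComputeBlock, StorageBlock, CloudProvider and so on) are hypothetical illustrations of the pattern, not any vendor’s API: resources are modeled as interchangeable blocks that compose into either hyperconverged nodes or hyperscale tiers, and data placement goes through a provider-agnostic interface so workloads are never coupled to one cloud.

```python
# Hypothetical sketch: compute, storage and networking as composable
# building blocks, plus a provider-agnostic cloud interface.
# None of these names come from a real product; they only illustrate
# the hypercloud pattern described above.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class ComputeBlock:
    cores: int

@dataclass
class StorageBlock:
    capacity_tb: float
    media: str  # e.g., "commodity-hdd" or "high-end-flash"

@dataclass
class NetworkBlock:
    bandwidth_gbps: int


@dataclass
class Node:
    """A unit of deployment assembled from interchangeable blocks."""
    compute: ComputeBlock
    storage: list[StorageBlock] = field(default_factory=list)
    network: NetworkBlock = field(default_factory=lambda: NetworkBlock(10))


def hyperconverged_node() -> Node:
    # Compute and storage collapsed into one box: easy to manage.
    return Node(ComputeBlock(cores=32), [StorageBlock(10.0, "high-end-flash")])

def hyperscale_storage_tier(count: int) -> list[Node]:
    # Storage-heavy nodes scaled independently of compute.
    return [Node(ComputeBlock(cores=8), [StorageBlock(100.0, "commodity-hdd")])
            for _ in range(count)]


class CloudProvider(Protocol):
    """Provider-agnostic interface: any cloud that implements these two
    calls can host data, so applications never depend on one vendor."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


def migrate(key: str, source: CloudProvider, target: CloudProvider) -> None:
    # Moving data across clouds requires nothing vendor-specific.
    target.put(key, source.get(key))
```

The same blocks compose either way; only the topology changes, which is exactly the flexibility the first requirement calls for.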
The main benefits of hypercloud are as follows:
- It provides a blueprint to build web-scale as well as niche application-specific infrastructures.
- Different workloads have different needs; performance and data characteristics dictate how you design and extend your hypercloud.
- If you need complete data segregation, you can build and instantiate a new hypercloud instance in very little time.
- Adoption time drops for bleeding-edge technologies.
In addition, hypercloud enables scenarios such as the following:
- Make your data active again. Get the most out of your backups and archives by running analytics and reporting against them. You can do so by moving away from backup- and archive-only solutions and converging your primary and secondary storage.
- Put virtualization and containerization under the same umbrella. VMs are the choice of today, and containers hold the promise of tomorrow. Both will carve out niches for themselves and find ways to coexist. Abstracting your compute, storage and network layers will set you on the right path (see the sketch after this list).
- Build your own cloud. Enable cloud-like availability for your traditional applications by blurring the networking boundaries across your data centers and public clouds.
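As a minimal sketch of that abstraction, the Python below uses hypothetical names rather than any real orchestrator’s API: VMs and containers both satisfy the same small interface, so the platform can deploy either without branching on workload type.

```python
# Hypothetical sketch: a single abstraction over VMs and containers.
# The names are illustrative, not any orchestrator's real API.
from typing import Protocol


class Workload(Protocol):
    """Anything the platform can start and stop, VM or container."""
    name: str
    def start(self) -> None: ...
    def stop(self) -> None: ...


class VirtualMachine:
    def __init__(self, name: str, image: str, memory_gb: int):
        self.name, self.image, self.memory_gb = name, image, memory_gb

    def start(self) -> None:
        print(f"booting VM {self.name} from {self.image} "
              f"with {self.memory_gb} GB RAM")

    def stop(self) -> None:
        print(f"shutting down VM {self.name}")


class Container:
    def __init__(self, name: str, image: str):
        self.name, self.image = name, image

    def start(self) -> None:
        print(f"running container {self.name} from {self.image}")

    def stop(self) -> None:
        print(f"stopping container {self.name}")


def deploy(workloads: list[Workload]) -> None:
    # The scheduler never branches on workload type: the abstraction
    # lets VMs and containers coexist under the same umbrella.
    for w in workloads:
        w.start()


deploy([VirtualMachine("legacy-erp", "rhel7.qcow2", memory_gb=16),
        Container("web-frontend", "nginx:latest")])
```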
Software-defined data centers will no doubt become a reality sooner rather than later. One of the major components in achieving this goal is the transformation of enterprise IT teams. Focus must shift away from building solutions that have a lot of gravity attached to them; automation through infrastructure as code (IaC) will enable resource portability and reusability (a minimal sketch of the idea appears below). You shouldn’t settle for merely modernizing your data center, but instead strive to make it future proof.
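To illustrate the core IaC loop in the plainest terms, here is a tool-agnostic Python sketch; the resource names and the plan function are hypothetical, not Terraform or any real tool. Desired infrastructure is declared as data, and a plan is computed as the diff between desired and current state.

```python
# Hypothetical, tool-agnostic sketch of the core infrastructure-as-code
# loop: declare desired state as data, diff it against current state,
# and derive the actions needed to converge. Not a real tool's API.

desired = {
    "web-01":  {"type": "vm", "cores": 4, "memory_gb": 8},
    "web-02":  {"type": "vm", "cores": 4, "memory_gb": 8},
    "data-01": {"type": "volume", "capacity_tb": 2},
}

current = {
    "web-01":  {"type": "vm", "cores": 2, "memory_gb": 8},   # undersized
    "old-db":  {"type": "vm", "cores": 8, "memory_gb": 32},  # orphaned
}


def plan(desired: dict, current: dict) -> list[str]:
    """Compute the actions that converge current state onto desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(f"create {name}: {spec}")
        elif current[name] != spec:
            actions.append(f"update {name}: {current[name]} -> {spec}")
    for name in current:
        if name not in desired:
            actions.append(f"destroy {name}")
    return actions


for action in plan(desired, current):
    print(action)
```

Because the desired state is just data, the same definition can be replayed against a different provider or site, which is the portability and reusability described above.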
About the Author
Gaurav Yadav is a founding engineer and product manager at Hedvig. He has more than 10 years of experience working in storage, databases, distributed systems and virtualization. His previous experience includes working with a search-engine startup, Google and Oracle.