Hyperconverged infrastructure (HCI) promises benefits in several dimensions: streamlining deployments across compute, storage and networking; scaling linearly as needs grow; and unifying and simplifying management. When examining prospective HCI solutions, it's important to verify not only that these promises are fulfilled but, more importantly, that checking those boxes with HCI doesn't create new IT silos. A careful examination of what causes these silos makes it much easier to identify a solution that blends seamlessly into your architecture and chips away at IT silos, while at the same time ushering in the simplicity and ease that HCI promises.
Traditional Hyperconverged Infrastructure Falls Short and Creates Silos
First, public-cloud paradigms have ushered in expectations of simplicity as well as the need for an elastic consumption model. Influenced by this trend and by budget challenges, IT organizations are shifting away from a pay-up-front procurement approach that requires projecting demand, and toward a more on-demand procurement model that needs less initial planning and sizing. Additionally, IT is increasingly moving away from procuring best-in-class solutions separately for compute, storage and networking. Instead, choices are made at the level of complete data center solutions, evaluating whether a solution can deliver on specific outcomes: application performance, availability and responsiveness SLAs; data and analytics support; and support for rapid application development and deployment life cycles. HCI has emerged as a leading data center infrastructure solution to meet these changing expectations.
In addition, IT now needs to support a bimodal operating model that serves existing legacy applications while catering to emerging needs. This means HCI must be deployed into brownfield environments that include traditional converged and siloed architectures. IT budgets are also flattening, or in some cases decreasing, putting a premium on efficiency through increased utilization, elastic operations, and optimized workforce deployments and workflows.
Against this backdrop, HCI must do more than promise simplicity inside its own siloed deployment, busting silos only within the HCI box. Delivering a unified management environment for hyperconvergence alone is nice but far from sufficient, and perhaps even damaging. HCI can't promise elastic growth only within the HCI deployment: it must extend some of those elasticity and efficiency benefits to the converged infrastructure (CI) side. It can't ignore application life cycles and deployment options that should span HCI as well as traditional CI. And it can't ignore existing workflows and staff investments: it must extend both to the HCI deployment rather than require investing in them anew.
Busting Silos Horizontally Across HCI and CI Stacks
True silo busting requires that HCI be able to operate alongside traditional converged and legacy infrastructure, with unified management environments, shared policy and a shared networking fabric. It also requires the infrastructure to take on workloads that are well suited to HCI and leave behind workloads that are appropriate for traditional CI. In some cases, workloads may, and often should, span both CI and HCI.
Busting Silos Vertically, Up and Down the Stack
HCI must cut across operational silos as the scale-out cluster grows. This requires an HCI management paradigm with a policy-driven architecture that transfers policy attributes to new hardware infrastructure elements as they are added to the deployment. Cutting across operational silos becomes possible when networking is truly part of the HCI architecture: the policy should carve network lanes into the fabric, driven by a deep understanding of HCI traffic patterns and quality-of-service (QoS) needs.
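To make the idea concrete, here is a minimal sketch of policy-driven node onboarding: a cluster-wide policy that carves QoS lanes into the fabric and is applied automatically to each node as it joins. All class, lane and node names are hypothetical illustrations, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QosLane:
    """A hypothetical network lane carved into the fabric for one traffic class."""
    name: str
    min_bandwidth_pct: int  # guaranteed share of fabric bandwidth

@dataclass
class ClusterPolicy:
    lanes: tuple

    def apply_to_node(self, node_id: str) -> dict:
        """Derive a per-node configuration from the cluster-wide policy."""
        return {
            "node": node_id,
            "qos": {lane.name: lane.min_bandwidth_pct for lane in self.lanes},
        }

# The policy is defined once, informed by HCI traffic patterns...
policy = ClusterPolicy(lanes=(
    QosLane("storage-replication", 40),  # latency-sensitive east-west traffic
    QosLane("vm-traffic", 35),
    QosLane("management", 25),
))

# ...and its attributes transfer to new nodes as the scale-out cluster grows.
configs = [policy.apply_to_node(n) for n in ("node-4", "node-5")]
```

The point of the sketch is that operators never configure the new node by hand; the policy, not the administrator, follows the hardware into the deployment.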
A true HCI architecture delivers simplicity because hardware and software have been designed to work together. Just as public-cloud providers have obsessed over and redefined hardware architectures to work with the evolving needs of an entire stack, HCI must make the hardware simple for the IT user by deeply engineering it, offloading to it and optimizing it where it matters.
The data platform, the core HCI engine, must be based on a truly distributed data architecture that harnesses the hardware, networking and QoS policy. As resources are added to the cluster, this approach ensures low-latency data access, a no-compromises approach to data services (dedupe, compression and so on), and transparent, linear pooling. Be wary of solutions that require rich features such as data optimization to be turned off to maintain performance. Also, ask questions early to avoid getting stuck with an architecture that becomes self-limiting as you scale. An important test for HCI in the vertical dimension is predictability. Along these lines, be sure to investigate the spread of performance and latency over time and across virtual machines. Performance is great, but predictable performance is critical!
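One simple way to quantify that spread is the ratio of tail latency to median latency per VM: the closer the ratio is to 1.0, the more predictable the platform. The sketch below, with made-up sample data, shows the kind of check you might run against latency measurements collected over time.

```python
import statistics

def latency_spread(samples_ms):
    """Ratio of ~p99 latency to median latency; near 1.0 means predictable."""
    ordered = sorted(samples_ms)
    p99 = ordered[min(len(ordered) - 1, round(0.99 * (len(ordered) - 1)))]
    return p99 / statistics.median(ordered)

# Two hypothetical VMs: one steady, one with an occasional latency spike.
steady = [2.0, 2.1, 1.9, 2.0, 2.2, 2.0, 1.8, 2.1, 2.0, 1.9]
spiky  = [1.0, 1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.0, 12.0]

# The spiky VM has a lower median but a far worse spread.
print(latency_spread(steady) < latency_spread(spiky))  # → True
```

Comparing this ratio across VMs and over time surfaces exactly the self-limiting behavior to watch for: a platform whose averages look fine while its tails grow as the cluster scales.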
Busting Silos in Application Deployment and Life Cycle
As HCI matures, use cases will rapidly expand beyond virtual desktop infrastructure (VDI), virtual infrastructure, test development and basic database support. Over time, HCI will increasingly support more latency- and performance-sensitive databases, analytics and in-memory workloads with large working data sets. In the interim, the HCI architecture must support an application deployment and development life cycle in which some components reside on HCI (e.g., test-dev and presentation layers) while the more performance-sensitive components run on traditional CI. We tend to simplify applications into a monolithic chunk when they're really a collection of interacting services. Putting the right services on the right infrastructure, whether HCI, CI or multiple public clouds, is a much more effective method.
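A per-service placement decision can be sketched in a few lines. The rule and service names below are hypothetical, following the article's example: test-dev and presentation layers land on HCI, while performance-sensitive components stay on traditional CI.

```python
def place_service(latency_sensitive: bool) -> str:
    """Toy placement rule: performance-sensitive services go to traditional CI,
    everything else to HCI. A real policy would weigh many more factors."""
    return "CI" if latency_sensitive else "HCI"

# One application, decomposed into interacting services rather than a monolith.
app = {
    "presentation-layer": False,  # web/UI tier: a good fit for HCI
    "test-dev":           False,
    "oltp-database":      True,   # performance-sensitive: traditional CI
}

placement = {svc: place_service(sensitive) for svc, sensitive in app.items()}
```

The design point is that placement is decided service by service, so a single application can legitimately span HCI and CI rather than being forced wholesale onto one or the other.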
Busting Silos Created by Vendor Business Models
Beyond technology, it’s important to examine an HCI vendor’s business model. Some business models may force a philosophy that tries to fit HCI as the solution for all. Others may dictate a strong stance toward a certain hypervisor. These considerations are often ignored in the heat of technology and architecture debates but are critical in choosing an HCI solution that will truly bust silos.
New paradigms are great and usher in many benefits, but a careful choice early on is critical. A new, powerful architecture must be proud of its youth, smarts and simplicity, but also respect the existence of the legacy infrastructure and seamlessly help the legacy transition over time. Choose well!
About the Author
Kaustubh Das is VP of Strategy and Product Management, Storage, for Cisco's Computing Systems Product Group. Kaustubh is responsible for leading strategy development to drive market-share growth and profitable revenue for the group. He's also responsible for accelerating the growth of HyperFlex, an industry-leading solution for hyperconvergence. Kaustubh joined Cisco from Seagate, where he served as Vice President of Product Line Management for the company's enterprise segment, including P&L ownership for an approximately $4 billion business selling into the enterprise segment, data centers and cloud-service providers. Before that, Kaustubh led Seagate's product-management teams in the Systems Division, including product lines in storage systems, software-defined storage, object storage, storage-as-a-service offerings, and high-performance computing. He has worked for Cisco, Intel, McKinsey, Schlumberger and Motorola. In his prior role at Cisco, he led the original Cisco One efforts in SDN, served as strategy lead for the data center business, and was a senior leader in the corporate-strategy organization. Kaustubh has an MBA from the Wharton School, University of Pennsylvania, an MS in ECE from UT Austin, and a bachelor's degree in EE/CS from IIT-D.