Behold, the software-defined data center (SDDC): infinite, elastic, on demand and automated. Any application, any time, instantaneously, with ease. Can it really be that simple? Does it even exist? And if it does, can you actually get one? We’ve seen a variety of software-defined technologies such as containers, hypervisors and virtual machines at the compute layer, along with an evolving list of software-defined-networking approaches and network functions virtualization at the connectivity layer. So what has been the holdup? The bottleneck on the path to the software-defined data center has been data and the storage that holds it: more specifically, until recently, the lack of a software-defined storage capability that supports the infinite, elastic and on-demand philosophy of the SDDC. But the path is now open.
When I was eight years old, I went to a friend’s house to play. My friend’s mother asked whether I’d stay for dinner and whether I liked chicken. I said yes to both. My friend had a wry smile and whispered, “Well, then, you’re gonna have to help.” I immediately thought, what did I get myself into? His mother lugged a big pot of water onto the stove and my friend told me to follow him to get a hatchet. The process of assembling a chicken dinner from scratch had begun. The hot water was to remove the feathers and the hatchet was, well, you can guess.
That dinner-assembly process is similar to the way IT projects once were and in some cases still are. Define an application (dinner) and begin assembling the ingredients: switches, servers, processors, routers, memory, storage, operating systems, virtual machines, databases and on and on.
SDDC History Lesson
Having been on the engineering side of the data-storage industry for over 30 years, I’ve seen a lot. Much of what people consider new at any given moment is recycled; what’s old is often what’s new. Technologies such as solid-state storage and persistent memory that feel relatively new existed 30 years ago, but they were reserved for specialized and usually isolated applications. The overarching trend of the last decade or more has been separating the consumer of IT resources from IT-infrastructure deployment and management, that is, delivering IT as a service. The trend began with the move to virtual machines (thank you, VMware); VMware then acquired more technologies that it put under the umbrella of the virtual data center (VDC) and later adopted the more visionary term software-defined data center. And although this process set the table for typical enterprises, dinner wasn’t actually served until Amazon AWS, Microsoft Azure and Google GCP started cooking. But for enterprises that want a meal of their own making, that chicken has been pretty hard to catch, mainly because of storage.
Data Gets in the Way
Why has storage held back the software-defined data center? The problem is the third word: data. The world revolves around data and, more specifically, around extracting business value from it. In the SDDC, compute instances come and go. Light up an application, chew on some data, produce some new data. Networking moves the data to where it should be, hopefully quickly and reliably. The problem for storage is that the data is expected to be persistent and perfect. Never lose a bit, and always give it to the application when asked, lickety-split and at a reasonable price.
Volume, variety, velocity, veracity and value. Wait—I just switched from “storage” speak to “data” speak. And for good reason. Storage is the steward of data, and we’re really talking about the multitude of applications that need persistent and perfect data in the dynamic SDDC world. Compute instances come and go, networks shuttle things here and there, but data has gravity and storage is responsible for getting the data right. Failure to do so can result in reboots at best and, at worst, loss of trust, opportunities, customers and revenue.
Three Is the Magic Number
I see three fundamental challenges that must be confronted:
- Organizational: silos. IT is often organized by infrastructure type (servers, storage and networking). The result is frequently insular, inflexible and expensive: completely counter to elastic, efficient and on-demand. You end up with highly specialized, change-resistant organizations that defend their right to exist and fight over resources. It’s akin to requiring three kitchens: one for breakfast, one for lunch and one for dinner.
- Technical: manual processes and little to no automation. Manual processes remain the norm for gaining access to IT resources in a typical data center. Submit a ticket, get a place in a work queue, put a pot of water on to boil and wait. Have you ever been to a restaurant and become convinced that when you ordered the chicken dinner, someone was out back chasing a chicken around with a hatchet? SDDCs, by contrast, automate those basic manual tasks. They deliver the greatest value at scale (like serving meals at a restaurant), and operating at scale requires that most tasks be automated. You order your food from your mobile device and it’s waiting for you when you walk in.
- Financial: operational expense versus capital expense. The internal IT customer has become accustomed to budgeting for and depreciating equipment for an application, creating a sense of ownership and an expectation of control. An SDDC delivers the service without the customer knowing what equipment the application is consuming, and it charges monthly on the basis of consumption.
The problem has been that any given software-defined-storage offering mapped to only a sliver of the enterprise data center, software defined or otherwise. Want storage for containers? There are vendors for that. Storage for virtual machines? More vendors. How about bare metal? (Weren’t they called physical servers a decade ago?) Of course there are some for that too. What if I need super-high-performance flash memory? Lots. Hybrid? Yup, tons. Block? Object? Vendors for those as well. And in the SDDC, all of these features and functions must be API-driven, giving you infrastructure as code. Numerous vendors specialize in nearly every conceivable area, so what’s the problem? The enterprise SDDC needs many if not all of these capabilities simultaneously, and the only way to get them is to talk with all the vendors. It feels like making a handful of different chicken dinners at the same time.
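To make “API-driven” and “infrastructure as code” concrete, here’s a minimal sketch of what requesting storage programmatically might look like. The endpoint, payload fields and token below are hypothetical placeholders, not any particular vendor’s API; each real product defines its own, but the shape is similar: the application declares what it needs, and the storage control plane decides how to deliver it.

```python
import requests

# Hypothetical control-plane endpoint and credential; substitute your
# vendor's actual API details.
STORAGE_API = "https://storage.example.com/api/v2"
TOKEN = "REPLACE_WITH_API_TOKEN"

def provision_volume(name: str, size_gib: int, tier: str = "flash") -> str:
    """Ask the (hypothetical) software-defined storage control plane
    for a new volume and return its identifier."""
    resp = requests.post(
        f"{STORAGE_API}/volumes",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "size_gib": size_gib, "tier": tier},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["volume_id"]

if __name__ == "__main__":
    # The caller states intent (name, size, service tier); which physical
    # media and nodes back the volume is the control plane's decision.
    vol_id = provision_volume("orders-db", size_gib=500)
    print(f"Provisioned volume {vol_id}")
```

The point isn’t these particular lines of Python; it’s that a storage request expressed this way can be versioned, reviewed and automated like any other code instead of waiting in a ticket queue.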
Testing Your Data Center Chops
Multiple-choice tests often end the list of answers with “D: all of the above,” a choice that means every option is valid. Ask any enterprise CIO questions about current needs across applications, deployment styles, performance requirements, capex thresholds or people and you’ll see a lot of A, B and C choices, but ultimately “D: all of the above” is the answer.
But when you aim your questions toward the future, the final answer changes to “D: don’t know,” and it emerges as the main selection even for seemingly basic questions: “How much data do you have?” “Where does that data reside?” “How many apps do you expect to build and deploy in the next couple of years?” “How fast are you growing?” “How do you see yourself using data differently in the future?” Basic questions indeed, but the inevitable answer is “D: don’t know.”
The point is that storage approaches built to answer only specific questions usually fail. The approach that not only passes the exam but earns a top grade must operate at scale across the complete data-center environment in one systematic, enterprise-wide way. Without that, your SDDC veneer is as thin as the scripts you’re deploying to glue it together.
Most failed software-defined-storage companies latched onto a single trend, often called a beachhead strategy, only to realize that they couldn’t help when the scope expanded to applications and environments requiring “all of the above” and serving “don’t know” future needs. Because enterprises are constantly evolving, they’re always a mix of legacy, standard and new applications—all of the above. And because business opportunities and competitive threats are dynamic, future needs are anything but certain.
Now I come to hyperconverged infrastructure (HCI), the current “flavor of the month,” which promises a fully integrated, easy-to-deploy solution with just the right amount of compute, storage, networking and software for your application. These simple prepackaged “meals” are good enough when you know exactly what you want and you’re confident it won’t change. The challenge is that they simply can’t answer the call when “all of the above” is the answer to “What applications will this serve?”, and even less so when you ask the same question about the future and “don’t know” creeps in. Without a clear application selection, scope, scale and set of capabilities, cookie-cutter solutions such as HCI are doomed. The result, too often, is that your HCI ends up like a leftover chicken dinner that sits in the refrigerator a bit too long: no longer fit for consumption!
Is the software-defined data center now within reach for your enterprise, or is it still just a figment of IT imagination? From talking with thousands of CIOs and IT architects, I’ve found that the old “don’t know” answer has become “A: yes, it is.” The path forward is no longer limited by technology; it has more to do with your company’s appetite to better handle today’s “all of the aboves” and prepare for tomorrow’s “don’t knows.”
My advice is to accept that you must build your SDDC for “all of the above.” Your present and future applications should all target your SDDC design blueprint. Embrace your “don’t knows” and select vendors whose automation and machine learning give your SDDC the agility to grow, change and accelerate rather than become a roadblock. You want a toolbox that contains more than just a hatchet; then one great kitchen is plenty for all the meals you’re serving now and intend to serve later.
About the Author
Hal Woods is chief technology officer for Datera.