The vast majority of IT departments are experiencing enormous increases in the demand for storage and computing power. Few, if any, will have the budget to meet rising requirements that continue to outpace the growth of their revenue. This situation raises a difficult question for IT teams everywhere: how much longer can the usual approach of managing the install, upgrade, retire and replace cycle remain viable?
By now, it should be obvious to all that the strategy that built the data center of the past isn’t going to deliver the data center of the future. Hyperscalers are embracing new models and approaches based on open-source software and commodity hardware. The cloud, we are told, has made IT a utility―as simple and as easy to manage as your gas bill. Yet, although we all know there are many advantages to paying opex rather than capex, over time the cloud can mean paying more—just in smaller installments.
As the changes come through, there is considerable risk for IT teams, who will need to maximize their existing assets while spending frugally on future ones, all while wisely navigating the gap between hype and reality.
In this foggy world, some things are crystal clear. Here are three considerations:
- Outside of the “hyperscalers,” hardly anyone will be able to afford to own and host all their compute power on premises. Sooner or later, one way or another, a proportion of your compute power is going to live in public clouds.
- Storage growth is massive and unsustainable. You are going to need to find a better, cheaper way of doing it, and that way is going to need to work in harmony with your compute decisions.
- Vendor lock-in is never a good idea. In a world where business models change, discovering you’re locked into a cloud provider might be one of the most unpleasant discoveries of your life.
Many in the industry have arrived at a common conclusion, reflected in the growth of software-defined storage (SDS)—loosely defined as a method of storage in which resources are organized and managed by a software layer, regardless of hardware or location.
It’s well documented that SDS is the inevitable destination for much of your future storage needs. If you don’t believe me, consider the latest market forecast from Gartner:
- By 2016, server-based storage solutions will lower storage hardware costs by 50 percent or more
- By 2019, 70 percent of existing storage-array products will also be available as software-only versions
- By 2020, between 70 and 80 percent of unstructured data will be held on lower-cost storage managed by SDS environments
So the question instead shifts to how SDS must be implemented and the types of needs it can serve for your data center. Here are a few common data center concerns, and how SDS can be deployed correctly to fit each need.
Businesses are moving too fast to rely on storage architectures that are proprietary, overpriced and inflexible. At the same time, IT is also challenged with organizing storage assets as a bridge between new and old, with the same level of performance across locations and classes.
SDS should deliver storage capability comparable to midrange and high-end storage products at a fraction of the cost. It should be an open, self-healing, self-managing solution that scales from a single terabyte to a multi-petabyte storage network. Coupling SDS with commodity off-the-shelf storage elements yields remarkably cost-efficient storage. Near-unlimited scalability enables enterprise IT organizations to deliver the agility businesses demand by nondisruptively adding capacity at the cost they want to pay. Intelligent, self-healing, self-managing distributed storage enables storage administrators to minimize the amount of time spent managing storage. It thus allows organizations to support more capacity per storage administrator or spend more time focused on delivering future innovations to the business.
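The self-healing behavior described above can be sketched in miniature: a software layer maintains a target replica count for each object across commodity nodes, and when a node is lost, it re-replicates the affected data automatically. All class and node names here are hypothetical, and real SDS systems use far more sophisticated placement algorithms than this toy example.

```python
import random

class Cluster:
    """Toy replicated object store: nodes are plain commodity boxes."""

    def __init__(self, nodes, replicas=3):
        self.nodes = {n: set() for n in nodes}  # node -> set of object ids
        self.replicas = replicas

    def put(self, obj_id):
        # Place the object on `replicas` distinct nodes.
        for node in random.sample(list(self.nodes), self.replicas):
            self.nodes[node].add(obj_id)

    def fail_node(self, node):
        # Remove a failed node, then self-heal: restore the replica
        # count for every object that lived on it.
        lost = self.nodes.pop(node)
        for obj_id in lost:
            holders = [n for n, objs in self.nodes.items() if obj_id in objs]
            spares = [n for n in self.nodes if obj_id not in self.nodes[n]]
            for n in spares[: self.replicas - len(holders)]:
                self.nodes[n].add(obj_id)

    def replica_count(self, obj_id):
        return sum(obj_id in objs for objs in self.nodes.values())

cluster = Cluster(["node1", "node2", "node3", "node4"], replicas=3)
cluster.put("obj-a")
# Fail one of the nodes holding the object; the cluster heals itself.
cluster.fail_node(next(n for n in cluster.nodes if "obj-a" in cluster.nodes[n]))
print(cluster.replica_count("obj-a"))  # back to 3 after healing
```

The point of the sketch is that the administrator never chooses hardware: the software notices the failure and restores the policy on whatever nodes remain.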
Flexibility is one of the core tenets of SDS, as the increased ability to shift storage across locations and hardware leads to its agility and cost benefits. But flexibility cannot be obtained without true interoperability. Your new SDS provider may have a long-term roadmap toward standardizing other components of your IT infrastructure on the same vendor, undermining many of the benefits that drew you to SDS in the first place.
To achieve maximum flexibility with your SDS project, make sure that you evaluate solutions that play well with others. Examine a vendor’s alliances and alternate IT solutions, and evaluate whether open source or proprietary plays into this alignment.
Decoupling Storage Hardware From Software
SDS is an approach to data storage in which the programming that controls storage-related tasks is decoupled from the physical storage hardware. Software-defined storage is part of a larger industry trend that includes software-defined networking (SDN) and software-defined data centers (SDDCs).
Software-defined storage puts the emphasis on storage services such as deduplication or replication, instead of storage hardware. Without the constraints of a physical system, a storage resource can be used more efficiently and its administration can be simplified through automated policy-based management. For example, a storage administrator can use service levels when deciding how to provision storage and not even have to think about hardware attributes. Storage can, in effect, become a shared pool that runs on commodity hardware.
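The service-level idea above can be illustrated with a minimal sketch: the administrator requests storage by policy name, and the software layer translates that into pool attributes. The service-level names and attribute values here are invented for illustration, not taken from any particular product.

```python
# Hypothetical service-level policies: the administrator thinks in
# "gold/silver/bronze", never in disks or controllers.
POLICIES = {
    "gold":   {"replicas": 3, "media": "ssd", "dedup": True},
    "silver": {"replicas": 3, "media": "hdd", "dedup": True},
    "bronze": {"replicas": 2, "media": "hdd", "dedup": False},
}

def provision(name, size_gb, service_level):
    """Build a volume spec from a service level, not hardware details."""
    policy = POLICIES[service_level]
    return {"name": name, "size_gb": size_gb, **policy}

vol = provision("crm-db", 500, "gold")
print(vol["media"], vol["replicas"])  # ssd 3
```

In a real SDS deployment the policy would also drive placement, snapshots and replication targets, but the shape of the interaction is the same: intent in, hardware decisions handled by software.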
As is the case with SDN, software-defined storage enables flexible management at a much more granular level through programming.
The shift toward SDS is largely driven by a substantial cost reduction without compromising—and often improving on—an already commoditized technology. Because of the way it is consumed and managed, SDS converts your capital storage expenses for hardware into bills that you pay as you use. With future storage demand growing and unpredictable, handling storage as an operating expense can lead to huge cost reductions. SDS also saves on hardware maintenance and support costs, since storage resources are no longer tied to specific hardware.
Cost advantages, however, don’t start and end with the shift to SDS alone. Solutions on the market vary in how far they stretch an IT investment. For example, solutions from the open-source community allow data center managers to achieve additional cost savings compared with their proprietary counterparts.
There’s no question that the storage industry is at an inflection point in SDS adoption. Offering compelling advantages in cost, flexibility and performance, SDS is a solution any storage-conscious data center manager will be evaluating in the next several years.
But adopting SDS alone isn’t enough. Instead, make sure that you know the goals for your storage project, and evaluate vendors that align with your IT objectives. SDS is the future—make sure that you’re on its leading edge.
About the Author
As Head of Product and Solutions Marketing for SUSE, Jason Phippen is responsible for leading SUSE product and solutions marketing efforts globally. He is also the product marketing lead for SUSE Enterprise Storage, the new software-defined storage offering from SUSE. Jason has more than 15 years of product and solution marketing experience previously working with companies such as Veritas, Computer Associates and Emulex before joining SUSE in 2014.