Gartner estimates the shift to the cloud will yield $216 billion in spending in 2020, up from $111 billion in 2016. As the adoption of virtualization technologies has skyrocketed, so has data volume. Organizations are seeking greater flexibility, room for growth and increased productivity, but they’re also looking for affordable storage. That’s where software-defined storage (SDS) comes in.
During the 2017–2021 forecast period, the SDS market is predicted to enjoy a compound annual growth rate of 13.5 percent, with revenues of nearly $16.2 billion in 2021. Enterprise storage spending is shifting from on-premises IT infrastructure to private, public and hybrid cloud environments, as well as from hardware-defined dual-controller array designs to SDS. But SDS isn’t “one size fits all.” Before making the switch, you need to know that your solution can handle your enterprise workloads. Here are some elements to look for.
The Appeal of Software-Defined Storage
SDS approaches are gaining in popularity because they have multiple advantages over traditional storage architectures. They can run on commercial, off-the-shelf hardware while delivering functions such as provisioning and deduplication faster and more capably in software. SDS also offers easier, more intuitive and more autonomous storage-management capabilities that reduce administrative overhead and increase agility, while the lower-cost commodity hardware trims capital expenditures.
Knowledge is power. Those charged with choosing an SDS architecture could end up with a lot of marketing hype and less-than-satisfactory results without proper education about SDS capabilities. So, here’s guidance to inform your choices as you look for a versatile, cost-efficient, unified SDS architecture.
Constituents to Consider
Horizontally aligned SDS platforms are fully hardware-agnostic and built to exploit flash media, enabling the flexibility and performance that are critical to the future of storage. In making this important decision, watch for these constituents in an SDS solution:
- Unified storage: Object storage is used for machine-to-machine/IoT transactions and other applications that demand extreme scalability but have modest performance requirements. It has been getting all the attention lately, but it is poorly suited to data that changes frequently and to workloads that expect file semantics. That's why you need file storage, and you need block storage as well, for a truly unified approach.
- File features: SDS providers often offer file systems based on freeware, which can lack important features most Windows users are accustomed to. Therefore, thoroughly vet the file-related features you're being offered; make sure they include snapshots, quotas, antivirus, encryption and tiering.
- Network-attached storage (NAS): Consistency is critical in a scale-out NAS, meaning every file must be visible and accessible from all nodes at the same time. Verify this consistency in candidate SDS solutions as part of your research.
- Sharing: In a hybrid cloud, each office location in an organization probably needs both a private area and an area it shares with other branches, so only parts of the file system will be shared. The ability to select a section of the file system and let other sites mount it at any point in their own file systems provides the flexibility to scale. Synchronization should occur at the file-system level so that every site sees a consistent view of the file system. The ability to specify different file encodings at different sites is also useful, for example when one site serves as a backup target.
- Hyperconverged capability: Because hybrid-cloud solutions require support for hypervisors, the scale-out NAS must be able to run as hyperconverged as well.
- Storing metadata: In a virtual file system, metadata is the information that describes the structure of the file system. For example, one metadata file can record which files and folders reside in a single folder, which means you'll have one metadata file for each folder in your virtual file system. As the virtual file system grows, so does the number of metadata files. Make sure your prospective provider's storage layer is based on object storage so you can keep all your metadata there. This approach ensures good scalability, performance and availability.
- Caching: SDS solutions need caching devices to increase performance, and speed, size and price all matter. It's equally important to protect data at a higher level by replicating it to another node before destaging it from the cache to the storage layer.
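The metadata approach described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's actual API: the `ObjectStore` class stands in for the real object-storage layer, and each folder in the virtual file system maps to exactly one metadata object keyed by the folder's path.

```python
import json


class ObjectStore:
    """Toy in-memory stand-in for the object-storage layer (hypothetical)."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


class VirtualFileSystem:
    """Keeps one metadata object per folder, as the article describes."""

    def __init__(self, store):
        self.store = store
        # The root folder starts with an empty metadata object.
        self._write_meta("/", {"folders": [], "files": []})

    def _meta_key(self, folder):
        return "meta:" + folder

    def _read_meta(self, folder):
        return json.loads(self.store.get(self._meta_key(folder)))

    def _write_meta(self, folder, meta):
        self.store.put(self._meta_key(folder), json.dumps(meta))

    def mkdir(self, parent, name):
        # Record the child in the parent's metadata object...
        meta = self._read_meta(parent)
        meta["folders"].append(name)
        self._write_meta(parent, meta)
        # ...and create a fresh metadata object for the new folder.
        child = parent.rstrip("/") + "/" + name
        self._write_meta(child, {"folders": [], "files": []})

    def create_file(self, folder, name):
        meta = self._read_meta(folder)
        meta["files"].append(name)
        self._write_meta(folder, meta)

    def listdir(self, folder):
        meta = self._read_meta(folder)
        return meta["folders"] + meta["files"]
```

Because every folder's metadata is just another object, the metadata layer scales and replicates the same way the data does, which is the scalability and availability benefit the bullet above points to.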
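The caching point can likewise be made concrete with a minimal sketch, again with hypothetical names rather than a real product's interface: a write lands in the local cache, is mirrored to a peer node so it survives a cache failure, and only later is destaged to the shared storage layer.

```python
class CacheNode:
    """Toy cache node: buffers writes, mirrors them to a peer before destaging."""

    def __init__(self, name, backend):
        self.name = name
        self.backend = backend  # shared storage layer, e.g. an object store
        self.cache = {}
        self.peer = None

    def write(self, key, data):
        # 1. Land the write in the local cache.
        self.cache[key] = data
        # 2. Replicate to a peer node so the data is protected at a
        #    higher level before it is destaged.
        if self.peer is not None:
            self.peer.cache[key] = data
        # 3. Only now can the write safely be acknowledged.
        return "ack"

    def destage(self):
        # Flush cached writes down to the storage layer, then drop the
        # copies from both caches.
        for key, data in list(self.cache.items()):
            self.backend[key] = data
            self.cache.pop(key, None)
            if self.peer is not None:
                self.peer.cache.pop(key, None)
```

The design choice to replicate before acknowledging is what makes a fast but volatile cache device safe to use: if the caching node dies before destaging, the peer still holds the data.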
Know Your Options
Virtualization demands increased storage capacity, and that storage must be fast and flexible as well as affordable. Software-defined storage meets all these criteria, but not all SDS solutions are the same. Be on the lookout for a scalable, unified approach that incorporates all storage types. As you vet candidates, use the recommendations above to ensure that your organization is getting the architecture that will serve it best.
About the Author
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data-storage solutions engineered to hold huge data sets cost effectively. From 2004 to 2010, he worked in this field at Storegate, a wide-reaching Internet-based storage provider serving consumer and business markets with the highest availability and scalability requirements. Before that, Stefan worked on system and software architecture for several projects at Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile- and fixed-network operators.