Multicloud deployments are a major IT focus, with a whopping 85 percent of enterprises reporting that they have a multicloud strategy in 2017. That’s up from 82 percent in 2016 (RightScale 2017 State of the Cloud Report, February 2017). A big difficulty in pursuing a multicloud strategy is the complexity of managing different storage implementations, each with its own features, user interface, access methods and terminology. If that storage requires proprietary formats or protocols that are rare or unavailable elsewhere, the challenge increases.
Two major cloud outages by hyperscale cloud-service providers AWS and Microsoft Azure earlier this year put multicloud storage architectures in the spotlight. Enterprises using these public clouds suddenly found themselves unable to access their data for four or five hours. The Wall Street Journal estimated the AWS outage cost S&P-listed companies $150 million in lost revenue—excluding the cost to midsize and smaller firms.
In postmortem assessments of the early 2017 cloud outages, executives of cloud-service providers, partners and technology firms—those that were affected and those that were unscathed—had plenty to say about what the outages meant and how to prevent future outages, or at least lessen their impact. Yet the comments largely ignored the tremendous innovation that has taken place in multicloud storage, giving enterprises new options and more choices for adding a second cloud-service provider or even a private cloud. Moreover, the ability of multicloud storage to provide greater agility and lower costs may be an even stronger argument for its deployment than the more obvious advantage of safeguarding data from the effects of local disruptions.
Multicloud Storage Benefits
Multicloud storage once implied redundant clouds and thus double the costs. Fortunately, the industry’s move to software-defined storage has made multicloud storage more practical, accessible and, in most instances, less costly.
In a well-implemented multicloud solution, IT teams achieve important advantages.
- Disaster recovery: Cloud outages make it painfully evident that many enterprises haven’t fully thought through potential single points of failure, or have neglected to test their disaster-recovery procedures, even though architectural best practice makes that job one.
- Avoiding vendor lock-in: Enterprises are increasingly viewing multicloud storage as an insurance policy that allows them to quickly switch from one cloud to another. That way, they can retain the flexibility to take advantage of the best offerings of each provider as business conditions change or innovation unleashes enhanced capabilities.
- Optimal workload performance: Since no one provider can be all things to all people, aligning the service with the application makes the most sense. For example, many users who run Windows applications consider Microsoft Azure a natural fit. On the other hand, AWS has long been the cloud of choice for open source. A multicloud approach allows users to match particular workloads with the most appropriate cloud platforms.
- Easier cloud hydration: The time and hassle of getting data from traditional storage systems into the cloud is one of the dirty little secrets of cloud deployment. Software-defined storage makes it vastly easier to connect multiple resources concurrently, both traditional and cloud, so companies can easily and quickly “hydrate”—or move—their data to new cloud platforms. Hydration can take place either through encryption-supporting appliances for static data or through mirroring for production data.
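The hydration idea above can be sketched in a few lines. The toy Python below copies objects from an in-memory stand-in for a legacy array into an equally hypothetical cloud bucket in parallel; all names and contents are invented for illustration, and a real migration would go through a vendor SDK rather than dictionaries.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical in-memory stand-ins for a legacy array and a cloud bucket;
# a real migration would use a vendor SDK (e.g., an S3 client) instead.
legacy_store = {
    "db/backup.tar": b"full backup",
    "logs/app.log": b"log lines",
    "img/logo.png": b"pixels",
}
cloud_store = {}

def hydrate_object(key):
    """Copy one object from the legacy store into the cloud store."""
    cloud_store[key] = legacy_store[key]
    return key

def hydrate(keys, workers=4):
    """Hydrate many objects concurrently, since both resources stay connected."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sorted(pool.map(hydrate_object, keys))

print(hydrate(list(legacy_store)) == sorted(legacy_store))  # True
```

The point of the sketch is the concurrency: because software-defined storage keeps both the traditional and the cloud resource attached at once, the copy can proceed in parallel rather than as a one-object-at-a-time export.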
Four Lessons About Multicloud Storage
Here are some less obvious lessons of the past few outages specific to multicloud storage.
1. Keep data in two locations. Data should reside in two separate locations to mitigate outages, data corruption, natural disasters and other disruptions. If the enterprise is using one cloud-service provider, it should also locate compute resources in two locations, giving it common data with multiple access points.
For those who want their data accessible from more than one cloud provider, this approach works even better. If one cloud-based system has an outage, but IT teams have set up the proper failover architecture, enterprises can simply redirect the applications and/or data to an alternate cloud-service provider. In this year’s outages, enterprises that retained concurrent access to secondary clouds were unaffected and maintained access to their cloud-based data.
Alternatively, it’s possible to create a disaster-recovery (DR) plan while keeping resources in the same cloud-service provider by applying a metropolitan failover strategy. For example, maintaining compute and storage resources on the East Coast, with a DR site on the West Coast, protects against a massive outage.
You can also configure a hybrid approach where critical data is synchronously mirrored between the cloud and on-premises storage. This way, your resources can fail over between the cloud and an on-premises copy of your data.
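The mirrored-write, failover-read pattern described above can be sketched as follows. Everything here is a toy stand-in (the CloudEndpoint class, the location names are invented for illustration): writes land synchronously on both locations, and reads silently fall back to the secondary when the primary is down.

```python
class CloudEndpoint:
    """Toy stand-in for a storage location (cloud region or on-premises copy)."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.objects = {}

    def read(self, key):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is unreachable")
        return self.objects[key]

def mirrored_write(key, data, locations):
    # Synchronous mirroring: the write lands everywhere before it is acknowledged.
    for loc in locations:
        loc.objects[key] = data

def failover_read(key, locations):
    # Try each location in priority order; an outage at one stays invisible to the app.
    for loc in locations:
        try:
            return loc.read(key)
        except ConnectionError:
            continue
    raise RuntimeError("all locations unavailable")

primary = CloudEndpoint("cloud-a")
onprem = CloudEndpoint("on-prem")
mirrored_write("orders.db", b"snapshot", [primary, onprem])
primary.healthy = False  # simulate a provider outage
print(failover_read("orders.db", [primary, onprem]))  # b'snapshot', from on-prem
```

In a real deployment the failover logic lives in the storage layer or a load balancer rather than application code, but the priority-ordered fallback is the same idea.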
2. Architect for redundancy. Regardless of whether the environment is public cloud, private cloud, on-premises or hybrid, IT teams should consider duplicating major applications with separate cloud providers. Doing so allows them to stay in control and own their data. This approach avoids vendor lock-in as well as the pitfalls of a single-provider strategy.
3. Remember the competition. Many corporations—perhaps even most—are realizing that a mix of AWS, Google and Azure makes sense because it provides redundancy and lets them match each application with the cloud offering the best price/performance for its requirements. And there are hundreds of cloud-service providers—not just the majors. The rise of multicloud-friendly storage enlarges your options.
4. Select native multicloud storage. Native means storage in which access to multiple cloud providers is designed into the architecture by the storage vendor, not custom integrated after the fact or “bolted on.” This design delivers a critical benefit: there’s no need to manually transfer data from one cloud to another after an outage, because access to the other clouds is concurrent and the enterprise’s data is already available to a backup provider.
In addition, several important features in scale-out, software-defined storage make it almost as easy as toggling an interface switch to enable multicloud deployment:
- Asynchronous replication between clouds, which supports almost instant failover.
- Metropolitan failover and high-availability options, allowing enterprises to have a fully disaster-tolerant system by distributing application servers across different cloud-service-provider availability zones and by distributing storage across different buildings in the same region. Based on automatic synchronous replication of data across a metropolitan-area network, they protect the enterprise in the event of facility-level failure.
- Backup at the block or image level, such as to AWS’s S3 storage. Some companies enable data mounting immediately on demand via a virtual gateway device. Although slower than other cloud-migration methods, the immediacy of this option can be handy.
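To illustrate the asynchronous-replication item above with a minimal sketch: the write is acknowledged as soon as it lands on the primary, while a background thread drains a replication log to the secondary. The two dictionaries and the key names are hypothetical stand-ins; a real system ships the log over a network inside the storage layer.

```python
import queue
import threading

# Two dictionaries stand in for a primary and a secondary cloud.
primary, secondary = {}, {}
replication_log = queue.Queue()

def write(key, value):
    """Acknowledge as soon as the primary accepts the write; replicate later."""
    primary[key] = value
    replication_log.put((key, value))

def replicator():
    # Background worker drains the log and applies each write to the secondary.
    while True:
        item = replication_log.get()
        if item is None:  # shutdown signal
            break
        key, value = item
        secondary[key] = value
        replication_log.task_done()

worker = threading.Thread(target=replicator, daemon=True)
worker.start()

write("invoice-42", "paid")
replication_log.join()          # wait until the secondary has caught up
print(secondary["invoice-42"])  # paid
replication_log.put(None)       # stop the worker
```

The replication lag is what makes failover "almost instant" rather than instant: the secondary is typically a few writes behind, which is the trade-off asynchronous replication makes for not slowing down every write with a cross-cloud round trip.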
To give credit where it’s due, cloud access has become almost like electricity: enterprises can flick the proverbial switch and get free-flowing service nearly any time, and outages are a reminder of how rare interruptions have become. But for businesses that depend on their IT operations—that is, most everyone—downtime can be crippling. Consider that even the requisite “three nines,” or 99.9% availability, translates to 8.76 hours of downtime per year, and that’s before any cloud outages or internally caused hiccups. Add cloud outages to that figure, along with the lost revenue and sales opportunities that may ensue, and the toll quickly adds up.
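The downtime arithmetic above is easy to verify; a minimal sketch:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def downtime_hours(availability):
    """Annual downtime implied by an availability level (e.g., 0.999 = three nines)."""
    return (1 - availability) * HOURS_PER_YEAR

print(round(downtime_hours(0.999), 2))   # 8.76 hours/year at three nines
print(round(downtime_hours(0.9999), 3))  # 0.876 hours/year at four nines
```

Each extra nine cuts the allowed downtime by a factor of ten, which is why availability targets beyond three nines usually require exactly the kind of redundant, multi-location architecture discussed above.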
Multicloud storage is a major advance in the way enterprises can mitigate risk, reduce costs and improve the agility of their IT infrastructures. For IT managers who are used to putting up with distinct NAS and SAN products in siloed environments, it’s a welcome change and an open door to competitive advantage.
About the Author
Kevin Liebl is vice president of marketing for Zadara Storage, an enterprise storage-as-a-service (STaaS) provider offering multicloud storage and more. He has over 20 years of experience in guiding rapidly growing companies in storage, virtualization and networking technologies to dramatically expand their market presence with highly differentiated products.