Okay, I’m going to really date myself here, but remember the days when you manually fast-forwarded from song to song on a cassette tape (stopping and listening to find the break between songs)? It was pure tedium. Then, a radical invention came along: a feature that “automatically” skipped to the next track. The tape still needed to spin, but at least you saved the time and energy of stopping, then starting, then stopping again. Now, you could quickly go from “Love of My Life” right to “Bohemian Rhapsody” without having to hear one note of “Good Company.” And, of course, tape players really advanced when they could play both sides of a cassette without requiring you to eject it and flip it over.
Then CDs came along and the world changed. It was instantaneous! Skip from track 1 to track 10 in seconds. And, better yet, all the songs of an album were on one side. No ejecting and flipping discs. Depending on your music collection, replacing every album added up, but the sound! The convenience! CDs took up less space. CDs were practically indestructible.
Fast-forward another decade and MP3s were born. Now, it wasn’t just about speed of access and eliminating manual tasks. It was about scope. All of your music on a single device that was practically the size of one of those old cassette tapes.
Now, imagine trying to adapt cassette-tape technology to today’s world of MP3s, iTunes, and Spotify. Impossible, right? The resulting solution would likely be unusable: cumbersome and counterintuitive. No matter how brilliant the Sony Walkman engineers were, they’d be attempting to adapt an antiquated model to a modern-era phenomenon.
A similar story is playing out in the data management market, specifically with backup software.
Backup software as we know it emerged in the 1970s, when disk drives became the primary storage media. At the time, disk was too expensive to store anything but production data, while tape was a cost-effective medium for storing copies of that data. The behavioral differences between random-access, block-addressed disk and sequential, streaming tape created a need for software that could bridge data movement and formatting between the two media types. Backup software was born.
The problem is that backup software was designed on the assumption that nothing else would be running during the backup window. As businesses grew, application operations began to overlap with the backup window, while at the same time exponential data growth stretched the window itself. Backup software could no longer complete its job without affecting business applications, leaving critical business information vulnerable to loss. Businesses in highly regulated vertical markets like health care, government and finance were exposed on both sides of the equation: they were required not only to back up information but also to retrieve it in the event of an audit, legal discovery or other request. Backup is no longer an insurance policy; it has evolved into a business-critical function that can cost organizations millions if improperly managed.
The traditional grandfather-father-son backup rotation is the equivalent of the first-generation cassette player: slow, cumbersome and delivering very little end-user satisfaction. Next-generation backup players represent a marginal improvement, like a Sony Walkman with skip and auto-reverse functions. But these vendors are merely streamlining a last-generation process with a nice GUI. They’re still offering a 1980s Walkman.
Backup Software As We Know It Is Obsolete
With organizations virtualizing servers in the data center, it is time to revisit the arcane practices of backup, disaster recovery and business continuity. Users are rapidly embracing virtualization for its enhanced business availability, vendor independence and improved utilization, all at a significantly lower cost, and shedding the complex, expensive and inflexible collection of multiple point tools along the way.
Multi-terabyte databases, rampant virtual machines and cloud storage were the realm of science fiction when most of these backup technologies were built. Now, they’re commonplace in IT, regardless of company size. The technologies built in previous decades were designed for their time, and now that the world of IT has moved on, they are struggling to keep pace. Each new release adds complexity, layering functions on top of a decades-old foundation. It’s the equivalent of adding millions of lines of code to bolt iTunes support onto a cassette player.
Albert Einstein once said, “We can’t solve problems by using the same kind of thinking we used when we created them.” Today’s backup demands a radical rethink of the foundational tenets of data protection. The design must start in the context of today’s reality: cloud, big data, virtual and physical machines, universal access, always-on operations, application orientation and hyper-scale. The approach to usability must be rooted in the fulfillment of a business need, a service-level objective, not in infrastructure. Implementation should be a simple task a layperson can complete, not a multi-week endeavor involving high-priced consultants. Even the term “backup software” is outdated. Backup speaks to making copies, mindlessly adding to the pile of data, yet backup is only as valuable as its ability to recover. A better term would speak to the business goal of instant data capture and access, so that data can be managed and leveraged for a given need, whether that need is recovery, file restoration, or even development and test automation.
It’s 2013: time to put away the cassette tapes and players. It’s time to find a better solution.
Fortunately, leading industry analysts such as Gartner are now recognizing this shift and have defined a new technology category. They call it copy data management and define it as products that can “perform a host of functions, including backup, archiving, replication and creation of test data using a minimal number of copies.” IDC has also sized this new market opportunity at an estimated $44 billion. That’s 8x the projected market size for big data and analytics! It’s no surprise that relying for so long on technologies designed for the disco era has created such a big mess.
Organizations are finally wising up to Einstein’s advice and starting to rock out to a whole new copy data tune.
About the Author
Brian Reagan is VP of Product Marketing at Actifio. He was previously CTO of the global Business Continuity and Resiliency Services division at IBM Corporation, responsible for the technology strategy, R&D, solution engineering and application development for all global offerings including cloud services. Before IBM, he was CMO for performance data storage firm Xiotech and also CMO for Arsenal Digital Solutions, which was purchased by IBM in 2008. A technology-industry veteran, Brian also held senior-level strategy and marketing roles at EMC Corporation and MCI Telecommunications, and he has spent over two decades in the areas of storage and information management. Brian holds a BA from Bennington College and an MBA from George Mason University and is a frequent speaker at industry events and conferences.
Leading article photo courtesy of Nico Kaiser