Industry Perspective is a weekly Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.
This week, Industry Perspective asks Bill Hobbib about the state of backup and disaster recovery (DR) in today’s market. Bill is vice president of marketing for ExaGrid Systems, a provider of cost-effective scalable disk-based backup solutions with data deduplication.
Industry Perspective: What is driving the move from tape to disk?
Bill Hobbib: The move from tape to disk is driven by a combination of factors: frustration with the problems of tape, the superior reliability of disk, and recent decreases in disk prices that allow disk systems with deduplication to approach the cost of a new tape library plus tapes.
Tape backups and restores are unreliable because of corrupted tapes, missing files, blank tapes and other problems. IT professionals spend excessive time managing manual tape backup processes. Security is poor, as data is leaving the building when tapes are transported offsite. Even onsite, tapes may not be in a secure area. Tapes wear and stretch with reuse, increasing the chance of failed backups and subsequent restores, and tape libraries are more prone to breakage compared with other hardware in the data center because they have many moving parts. Furthermore, increased levels of virtualization leave organizations still using tape unable to fully modernize their backups or take advantage of the advanced recovery techniques found in virtual data protection products.
Disk, in contrast, is inherently more reliable. Backups complete successfully because writing to disk is dependable, and restores succeed because the data was reliably captured in the first place. Manual movement and handling of tapes is avoided, as disk drives remain in data center racks. Security concerns are addressed because the disk sits in a data center rack behind physical lock and key and within network security. Disk is not damaged by heat or humidity, since each drive is in a sealed case inside the temperature- and humidity-controlled data center. Uptime is greater because spinning disks fail far less often than tape drives and robotics.
IP: How should companies approach the move from tape to disk?
BH: To comprehensively address the problems of tape, the optimal solution is to back up all data to disk onsite, replicate the backup data to disk offsite, and entirely eliminate tape along with its associated drawbacks—provided that the cost is equivalent to that of tape. Since disk costs more per gigabyte than tape, the only answer is to use far less disk to store the same amount of data. If straight disk costs 20 times the price of an equivalent-size tape library plus tapes, then if you can store the total amount of data on 1/20th of the disk, the costs are now equivalent. You can! The way to accomplish that is by using disk with data deduplication.
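The cost-parity argument above can be sketched as a quick calculation. All prices here are hypothetical placeholders chosen to match the 20x example in the text, not vendor figures:

```python
# Illustrative cost-parity check: disk with deduplication vs. tape.
# A 20:1 dedup ratio means the same logical data fits on 1/20th of the raw disk.

def effective_disk_cost(raw_disk_cost_per_tb: float,
                        dedup_ratio: float,
                        logical_tb: float) -> float:
    """Cost of storing logical_tb of backup data when deduplication
    shrinks it by dedup_ratio (e.g. 20 means 20:1)."""
    physical_tb = logical_tb / dedup_ratio
    return physical_tb * raw_disk_cost_per_tb

# Hypothetical numbers: disk is 20x the per-TB price of tape,
# but a 20:1 dedup ratio cancels the difference exactly.
tape_cost_per_tb = 50.0
disk_cost_per_tb = 20 * tape_cost_per_tb
logical_tb = 100.0

disk_total = effective_disk_cost(disk_cost_per_tb, dedup_ratio=20,
                                 logical_tb=logical_tb)
tape_total = logical_tb * tape_cost_per_tb
print(disk_total == tape_total)  # True: the two totals come out equal
```

Real-world deduplication ratios vary with data type and retention policy, so the break-even point shifts accordingly; the sketch only shows why the two costs can converge.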
IP: Will tape ever be phased out completely?
BH: Although industry sales of tape libraries have been on the decline for the past four years while sales of disk-based backup with deduplication are seeing double-digit yearly growth, tape is still used in a majority of organizations in some part of their backup process: for backup, offsite disaster recovery, or long-term retention and compliance. In view of the reality that tape is still in widespread use, it is not possible to predict if or when tape will be phased out completely.
IP: How much of the backup market is likely to go to the cloud?
BH: For the consumer market and small businesses that are not multi-TB environments, as security concerns about the cloud are addressed, a substantial part of those markets will eventually turn to a series of cloud providers for their end-to-end backup needs, including using the cloud as the primary backup target.
Backup to the cloud is good for small amounts of data—typically under 1TB. This limitation is due to the time needed to recover the data over the Internet. Under normal operation, only the changed bytes or blocks get sent for replication. However, if a full backup restore is required, it would take about 31 days to retrieve 1TB of data over a 3Mbps Internet connection. It is for this reason that consumers and small businesses with smaller data sizes are the main markets that will likely go to the cloud.
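The 31-day figure follows from a back-of-envelope transfer-time calculation, assuming a sustained 3 megabit-per-second link and decimal terabytes:

```python
# Rough full-restore time for backup data pulled over the Internet.

def restore_days(data_tb: float, link_mbps: float) -> float:
    """Days to transfer data_tb terabytes over a sustained link_mbps link."""
    bits = data_tb * 1e12 * 8            # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6)   # link rate in bits per second
    return seconds / 86_400              # seconds per day

print(round(restore_days(1.0, 3.0)))     # ~31 days for 1TB at 3Mbps
```

In practice protocol overhead and link contention make the real figure worse, which reinforces the point that full restores over the Internet are impractical beyond small data sets.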
IP: What is the cloud’s role in disaster recovery?
BH: Mid-market to enterprise companies are beginning to investigate selective use of the cloud for storing disaster recovery copies of their backup data. These enterprises recognize the cloud cannot serve as the primary target (as it can for small business) owing to the logistics of initial backup and subsequent recoveries. Initially, the cloud will serve as a repository for lower-priority data and longer-term archiving of backup data.
IP: What size businesses are turning to the cloud for backup and DR?
BH: SMBs will find cloud providers to be a good answer for their end-to-end backup needs, even as a primary backup target. Mid-market and enterprise businesses will have their primary backups onsite and will turn to the cloud for some disaster-recovery purposes.
IP: What is “instant recovery” and what can it do for businesses that suffer downtime events?
BH: Instant recovery refers to the ability of IT users to rapidly recover objects and virtual machines in the event of a failure by leveraging their disk-based backups in production, rather than going through prolonged restore procedures. An example of this is a disk-based system that maintains a full copy of the latest backup in a landing zone, allowing IT staff to instantly recover and run an entire VM in minutes directly from a disk-based backup system, should the primary VM be unavailable. While most disk-based deduplication products only retain a deduplicated copy of data—often resulting in limited functionality or hours to recover from a failure—an architecture that retains a complete copy of the backup in a landing zone allows customers to fully leverage instant-recovery features available in some backup applications to minimize downtime and disruption.
IDC estimates that downtime costs businesses an average of $70,000 per hour. In addition, it takes an average of seven hours to resume normal operations after a data-protection incident. This makes it absolutely critical for IT organizations to implement enhanced backup and recovery.
IP: For 2013, what is the outlook for the amount of data businesses will be dealing with?
BH: Most IT organizations are facing data growth rates of 30 percent or more annually. This means your data will double about every 2.5 years. Thus, it is essential for organizations that are moving from tape to disk backup systems to evaluate how well the architecture of a new system under consideration scales to accommodate growth. If the system uses a scale-up architecture, where disk capacity is added without also adding corresponding compute resources, as data grows your backup window will expand to the point where you’ll need an expensive “forklift upgrade.” In contrast, using disk backup systems that have a scale-out architecture and add compute with capacity via full servers in a grid gives you a permanently short backup window and eliminates costly forklift upgrades.
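The doubling-time claim above is standard compound-growth math; a 30 percent annual rate doubles data in roughly two and a half years:

```python
import math

# Doubling time for data growing at a fixed annual rate:
# years = ln(2) / ln(1 + rate).

def doubling_time_years(annual_growth_rate: float) -> float:
    """Years for a quantity to double at the given annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

print(round(doubling_time_years(0.30), 1))  # ~2.6 years at 30% annual growth
```

The exact figure is about 2.6 years, close to the "every 2.5 years" rule of thumb cited in the text; at higher growth rates the doubling time shortens quickly, which is what makes scale-out capacity planning matter.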
IP: Will poor economic conditions in the new year affect company spending on backup and DR?
BH: No one has a crystal ball regarding spending in 2013. Many organizations are feeling the pressure from their aging tape-based backup and recovery systems and are realizing they cannot continue to delay investments in new backup and recovery systems. We’re seeing strong demand for our disk backup with deduplication systems, as our target market becomes better educated about the capabilities and affordability of disk backup with deduplication, as well as the advantages of ExaGrid’s approach versus others in the marketplace. We just achieved a record year in 2012—our seventh consecutive year of growth—and we are excited to carry that momentum into 2013.