The Great Debate: Finding the “Right” Storage Solution

January 29, 2014

Making Smart Storage Choices: How to Evaluate SSDs

As the demand for faster storage grows, many organizations are considering switching from hard-disk drives (HDDs) to solid-state drives (SSDs). SSDs combine increased I/O (input/output) throughput with greater power density (IOPS per watt), which can accelerate applications and reduce the data center footprint. These two factors present a compelling argument for evaluating SSDs. What is the best way to accurately compare cost versus performance and determine the right mix of technology for your specific application environment?

Capacity, throughput and cost per gigabyte are the age-old guidelines many turn to when evaluating storage, but are these the only metrics relevant for SSDs? Is it even fair to use these familiar benchmarks to compare HDDs and SSDs? They seem like practical and simple means of comparing the two types of drives, but the huge variation in flash endurance among SSD types and the relatively low performance of HDDs make the task much more complex.

Our industry has a 60-year history with HDDs, which has led to the solution becoming permanently ingrained in our minds as the obvious storage choice. But technology continues to grow and adapt with time, and so must our thinking. HDDs are not prone to wear from use. Some may fail, but the vast majority are superseded and upgraded before they die. There are no usage restrictions: you buy the drive, you use the drive. This has led to a focus on acquisition cost for storage, hence the popularity of the cost-per-GB metric.

SSDs are different: the NAND technologies used in them allow only a finite number of program/erase cycles, or writes, per drive. This limitation essentially makes SSDs a consumable, and it must be factored into total-cost-of-ownership calculations when evaluating SSDs versus HDDs. It becomes even more important when comparing SSD types and their relative I/O performance.
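
As a rough illustration of why this write limit matters, a common rule of thumb estimates a drive's total write endurance by multiplying its capacity by the rated program/erase cycles of its NAND and dividing by the write amplification the controller incurs. The sketch below uses that rule of thumb; all of the numbers are hypothetical assumptions for illustration, not vendor specifications, and real drives should be judged by their published TBW or drive-writes-per-day ratings.

```python
# Rough, hypothetical sketch of SSD write endurance (terabytes written, TBW).
# Rule of thumb: TBW ~= capacity * rated P/E cycles / write amplification factor.
# All numbers below are illustrative assumptions, not vendor specifications.

def estimated_tbw(capacity_gb: float, pe_cycles: int, write_amplification: float) -> float:
    """Return an approximate endurance figure in terabytes written."""
    return capacity_gb * pe_cycles / write_amplification / 1000.0  # GB -> TB

# Two hypothetical 400 GB drives built on different grades of NAND.
print(estimated_tbw(400, pe_cycles=3_000, write_amplification=3.0))   # ~400 TBW
print(estimated_tbw(400, pe_cycles=30_000, write_amplification=1.5))  # ~8,000 TBW
```

The twenty-fold gap between the two hypothetical drives is exactly the kind of endurance spread, hidden behind similar capacity figures, that a cost-per-GB comparison cannot see.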

SSD vs. HDD: Evaluate Total Cost of Ownership

The Storage Networking Industry Association (SNIA) performed an in-depth analysis of the factors that should be considered in comparing HDDs and SSDs for a given application. It concluded that assessing total cost of ownership (TCO) offered the most realistic basis for comparison, assessing both the direct and indirect costs of deploying a storage system over its lifecycle.

Direct costs, typically labor, and capital costs are familiar and relatively easy to measure, but indirect costing is more complex. Industry studies have shown that operating a storage device over three years can cost more than buying it. So to arrive at an accurate TCO, it is essential to objectively account for all relevant data, including the cost of the following:

Acquisition

Analysis of acquisition must include cost per drive, software licenses and differing architecture options. Consider, for example, that in an application with highly random I/O transactions (e.g., Exchange email, banking transactions and so on), a single SSD could replace an array of 10 or more HDDs, resulting in a smaller footprint, higher performance and lower costs for supporting hardware and software licensing.

Maintenance and Repair

HDDs have an annual failure rate between 2 and 8 percent, so as many as 1 in 12 HDDs deployed will fail each year. The cost of a replacement drive, the personnel to install it and any system downtime must all be factored in to reach an accurate estimate of replacement costs. SSDs, in addition, must be treated as a consumable product whose endurance depends on multiple elements that vary by manufacturer and design; we will compare these in detail below.

Power and Cooling

In Tier 0 and Tier 1 storage systems, choosing SSDs can save over 80 percent in total storage-system energy requirements. There are many elements to this calculation beyond just the lack of rotating media, swinging read/write heads, actuators and spinning motors. Ultimately it is the greater power density (IOPS per watt) of an SSD that makes the difference; fewer drives can deliver the same throughput for less power, with the additional benefit of requiring less space and cooling.
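
To see how power density drives that saving, consider a back-of-the-envelope sizing exercise. The sketch below works out how many drives, and how many watts, are needed to hit a random-I/O target; the per-drive IOPS and wattage figures are illustrative assumptions only, not measurements of any particular product.

```python
import math

# Back-of-the-envelope sizing for a random-I/O workload.
# Per-drive IOPS and power figures are illustrative assumptions only.

def drives_and_power(target_iops: int, iops_per_drive: int, watts_per_drive: float):
    """Return (drive count, total watts) needed to satisfy target_iops."""
    drives = math.ceil(target_iops / iops_per_drive)
    return drives, drives * watts_per_drive

target = 2_000  # hypothetical random-I/O requirement

hdd = drives_and_power(target, iops_per_drive=200, watts_per_drive=10.0)    # assumed 15K RPM HDD
ssd = drives_and_power(target, iops_per_drive=40_000, watts_per_drive=9.0)  # assumed enterprise SSD

print(f"HDD array: {hdd[0]} drives, {hdd[1]:.0f} W")  # 10 drives, 100 W
print(f"SSD:       {ssd[0]} drives, {ssd[1]:.0f} W")  # 1 drive, 9 W
```

Under these assumptions, a single SSD meets the same I/O target as a ten-drive HDD array while drawing roughly a tenth of the power, before cooling and floor space are even counted.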

RAID Configuration

It is standard practice to use RAID configurations to improve performance and reliability. Conventional RAID configurations mask the high I/O latency inherent in HDDs. New SSD-friendly RAID implementations both exploit and enhance the performance and reliability of SSDs. The tradeoffs between RAID levels can significantly shift the performance, cost and reliability equations, so the relative benefits need to be factored into a complete TCO exercise.

The SNIA provides a TCO calculation spreadsheet in Microsoft Excel format for download. This spreadsheet incorporates all of these factors.
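
For readers who prefer to see the arithmetic, here is a minimal sketch of such a TCO calculation. It folds acquisition, expected drive replacements, and power and cooling into a single figure over the deployment life; every input is a placeholder assumption, and the SNIA spreadsheet models many more factors.

```python
# Minimal TCO sketch over a deployment lifetime. All inputs are placeholder
# assumptions; the SNIA TCO spreadsheet covers many more factors.

def simple_tco(drive_count: int,
               price_per_drive: float,
               annual_failure_rate: float,     # e.g. 0.05 for 5 percent
               replacement_cost: float,        # drive + labor + downtime per failure
               watts_per_drive: float,
               cost_per_kwh: float,
               cooling_multiplier: float = 1.5,  # total power draw including cooling (assumed)
               years: int = 3) -> float:
    acquisition = drive_count * price_per_drive
    replacements = drive_count * annual_failure_rate * years * replacement_cost
    kwh = drive_count * watts_per_drive * cooling_multiplier * 24 * 365 * years / 1000.0
    power = kwh * cost_per_kwh
    return acquisition + replacements + power

# Hypothetical comparison: a 12-HDD array vs. a single SSD of similar throughput.
hdd_array = simple_tco(12, price_per_drive=300, annual_failure_rate=0.05,
                       replacement_cost=800, watts_per_drive=10, cost_per_kwh=0.12)
single_ssd = simple_tco(1, price_per_drive=2_500, annual_failure_rate=0.02,
                        replacement_cost=3_000, watts_per_drive=9, cost_per_kwh=0.12)
print(f"HDD array 3-year TCO:  ${hdd_array:,.0f}")
print(f"Single SSD 3-year TCO: ${single_ssd:,.0f}")
```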

New Drive Comparison Rules for SSDs and HDDs

Calculating a full TCO is an important discipline, but for immediate comparison of SSD types and designs, some simple metrics are useful. In the HDD era, the rule-of-thumb measures were cost per GB and I/O throughput. The characteristics of SSDs weaken the relevance of cost per GB and mean that tests of I/O throughput must be appropriate to SSDs to ensure that accurate real-world performance comparisons can be made between drive types.

This situation leads to two new rules:

  1. Adopt the cost per terabyte written ($/TBW) as a key metric for comparison.
  2. Compare performance on the basis of standardized tests suited to an SSD.

Why Endurance Matters, or Why You Need the $/TBW Figure

The various NAND technologies used to create SSDs all have finite lives. Their memory cells degrade after a certain number of writes, at which point they can no longer reliably store data. SSD manufacturers take a variety of approaches to managing and guaranteeing endurance, resulting in a spectrum of capability and price combinations that can be confusing for buyers, especially because most enterprise drives are marketed with a five-year warranty. The critical factor is that the warranty always includes a limit on the number of writes during that warranty period. The impact of this limitation is significant, and clarity about it is essential when making a purchasing decision.

Buying at the lowest $/GB without reference to endurance is like buying a cheap tire; it may fit and have a five-year warranty, but have you saved anything if the warranty limits use to a thousand miles a year?

Below is a hypothetical example comparing two SSDs, both with a five-year warranty.

[Table image: hypothetical comparison of SSD-A and SSD-B]

With an HDD mindset, buying SSD-A is an easy decision, as it's half the price. If endurance is factored in, however, the picture changes: you will find yourself replacing SSD-A every six to eight months. SSD-B offers the best real value over the drive's lifetime, as the $/TBW figure shows.
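
To make the pattern concrete, here is a minimal sketch of how $/TBW and the resulting replacement cadence can be derived. The SSD-A and SSD-B prices, endurance ratings and workload are hypothetical placeholders chosen only to illustrate the comparison described above, not figures from the original table.

```python
# Hypothetical $/TBW comparison. All prices, endurance ratings and workload
# figures are placeholder assumptions, not real product data.

def cost_per_tbw(price: float, endurance_tbw: float) -> float:
    return price / endurance_tbw

def months_until_worn_out(endurance_tbw: float, tb_written_per_day: float) -> float:
    return endurance_tbw / tb_written_per_day / 30.0

workload_tb_per_day = 4.0  # assumed steady write workload

for name, price, endurance in [("SSD-A", 1_000, 900), ("SSD-B", 2_000, 7_300)]:
    print(f"{name}: ${cost_per_tbw(price, endurance):.2f}/TBW, "
          f"worn out after ~{months_until_worn_out(endurance, workload_tb_per_day):.0f} months")
```

Under these assumed numbers, the cheaper drive wears out in roughly seven to eight months while the more expensive drive lasts the full five-year warranty, which is exactly what the $/TBW figure exposes and the $/GB figure hides.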

Standardizing SSD Performance Testing

The management of NAND flash memory devices in an SSD to optimize the drive's overall performance is complex. The effectiveness of that management, and therefore the drive's measured performance, is affected by several factors: the state of the drive before the test, the workload pattern (such as the read/write mix and block size being written) and the data pattern.

For example, a fresh-out-of-the-box (FOB) SSD subjected to I/O throughput testing will outperform an identical SSD that has already been in use. As use continues, the FOB drive will settle into steady-state performance, the point at which measurements accurately reflect the drive's actual in-service performance. There can be a significant difference between the maximum FOB figures and the steady-state figures.

The SNIA has evaluated the impact of these different factors and developed a performance test specification for enterprise solid-state storage, available for download.

Performance-data evaluation should always use data from testing that meets this specification to ensure like-for-like comparisons across drive manufacturers. Even less-formal in-house testing will be improved by following the basic steps of the process before taking measurements: restore the SSD to its factory-default state, precondition it by writing twice its capacity with a sequential workload and then run the test scripts until a steady state is achieved.
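
As a rough illustration of that in-house flow, the sketch below repeats a measurement round until recent results settle within a tolerance band. The measurement function is a placeholder for whatever I/O tool you use (fio, vdbench and so on), and the 10 percent tolerance over a five-round window is a simplification of my own; the SNIA specification defines its own precise purge, preconditioning and steady-state criteria.

```python
import statistics

# Sketch of the informal test flow described above: after purging and
# preconditioning the drive, measure repeatedly until results stabilize.
# run_random_write_test() is a placeholder for your I/O tool of choice;
# the steady-state check here is a simplification, not the SNIA criterion.

WINDOW = 5          # number of most recent measurement rounds to examine
TOLERANCE = 0.10    # every round in the window must sit within 10% of the window mean

def is_steady_state(iops_history: list) -> bool:
    if len(iops_history) < WINDOW:
        return False
    window = iops_history[-WINDOW:]
    mean = statistics.mean(window)
    return all(abs(x - mean) <= TOLERANCE * mean for x in window)

def measure_until_steady(run_random_write_test, max_rounds: int = 25) -> float:
    iops_history = []
    for _ in range(max_rounds):
        iops_history.append(run_random_write_test())  # one timed measurement round
        if is_steady_state(iops_history):
            break
    return statistics.mean(iops_history[-WINDOW:])     # report the settled figure
```

Reporting the settled figure, rather than the early peak, is what keeps an in-house number honest and roughly comparable with properly specified results.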

Run the Numbers

Realistic comparisons between an HDD and the best-performing SSD show that for the majority of enterprise applications, the performance benefits of SSDs are now available at a price that is becoming competitive with HDDs.

So when you next ask yourself if it is time for SSDs, think TCO and ask for $/TBW and SNIA performance figures. Then simply run the numbers; they speak for themselves.

Leading article image courtesy of Simon Wullhorst under a Creative Commons license

About the Author

John Scaramuzzo is senior vice president and general manager of SanDisk's Enterprise Storage Solutions team. John is a veteran of the storage industry with more than 25 years of experience that includes leadership roles at Seagate, Maxtor, Quantum and Digital Equipment Corporation. Before joining SanDisk, he served as president of Smart Storage Systems, where he was responsible for driving and expanding the company's technology leadership and storage business in the enterprise, OEM and channel markets, as well as in the related cloud, big data and vertical industries. John holds a bachelor's degree in electrical engineering from Boston University and a master's degree in electrical science from Harvard University. He also holds three U.S. patents related to disk-drive technology and applications.
