The deployment of Non-Volatile Memory Express (NVMe)–connected solid-state drives (SSDs) is a topic increasingly on the minds of enterprise storage customers; I am asked about it frequently. It is often conflated with the belief that the transition to the NVMe interface on SSDs will eliminate the considerable performance advantages that purpose-built flash storage platforms enjoy. Although at first glance this might appear to be the case, there are several significant factors that customers should carefully consider.
Buyers need to be aware that although it is suitable for consumer devices and server deployments, the NVMe standard has not yet matured to where it can effectively meet the needs of enterprise-class storage arrays. The missing ingredients—scalability and high availability—happen to be fundamental requirements for any networked storage solution. Scalability requires an interface standard that can effectively attach upwards of thousands of devices to deliver the petabyte scalability that is expected today. At present, the scalability of NVMe-based storage is in the very low double-digit range, given the availability of PCIe lanes and the challenges associated with creating PCIe-based switched fabrics. To achieve high availability, the interface must be dual-ported or offer an equivalent level of redundant access to eliminate single points of failure. As the first deployments of NVMe SSDs have been in consumer laptops and server environments, dual-port implementations are not yet commonplace, nor are they a mature technology.
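The lane-count constraint above can be made concrete with a back-of-envelope calculation. The figures below (total CPU lanes, lanes reserved for other I/O, link width per drive) are illustrative assumptions for a typical server, not vendor specifications:

```python
# Back-of-envelope: how many NVMe SSDs can attach directly to one CPU?
# All numbers are illustrative assumptions, not specs from any product.

CPU_PCIE_LANES = 48   # lanes exposed by one server CPU (assumed)
LANES_RESERVED = 16   # lanes kept back for NICs, HBAs, etc. (assumed)
LANES_PER_SSD = 4     # a common NVMe SSD link width (x4)

available = CPU_PCIE_LANES - LANES_RESERVED
max_ssds = available // LANES_PER_SSD
print(f"Directly attachable NVMe SSDs: {max_ssds}")  # 8 with these numbers
```

With these assumed numbers the ceiling is eight drives per CPU, which is consistent with the "very low double-digit" device counts seen today; scaling further requires PCIe switches or a fabric, with the attendant cost and complexity.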
The maxim “nothing comes free” applies to NVMe-based SSDs, as NVMe presents significant design challenges in exchange for increased performance. The power consumption of high-performance NVMe-based SSDs is expected to increase significantly over their SAS-based counterparts. Additionally, although one implementation of the standard will share the same physical connector as SAS, NVMe will require an increased number of signal traces. The combination of these two factors means that traditional storage enclosure designs (e.g., a 2U25 small-form-factor JBOD) will prove unsuitable for NVMe-based SSDs, because they do not allow sufficient power, airflow and cooling, nor mid-plane surface area for signal routing. In addition, the maturity, reliability and security of SAS-expander designs (and their associated software and firmware code bases) that have evolved over the past decade do not carry forward to new NVMe-based products. Evidence of the challenges associated with implementing NVMe technology is visible in today’s storage-server offerings, which can support only 8 NVMe SSDs out of a total of 25 physical slots.
One perceived benefit of SSDs is that they are a “commodity,” which for many means they are easily available, interchangeable and very competitively priced. But ask any vendor whether you can substitute the SSD in your array with a different SAS SSD of your choice (perhaps at a lower price) and you will inevitably be led through a list of lengthy (and legitimate) reasons why doing so is infeasible. So in reality, an SSD is an off-the-shelf part, but not a commodity. This situation raises the question: if SSDs based on the mature and long-established SAS protocol are not truly commodities, when could newer SSDs, based on an emergent standard, realistically reach that state?
Compared with purpose-built flash arrays, NVMe can address only a small subset of the capability gap. SSDs, regardless of type, flavor or size, are isolated devices. NAND media yields its full range of capabilities only when orchestrated by a global flash translation layer operating at the whole-array level, and that wealth of capabilities is considerable.
Increased single-device interface performance is sometimes useful, but it does not solve the problems of uneven over-provisioning and device wear, nor does it enable orchestrating garbage collection and write operations so that they are invisible to reads. These capabilities can be realized only when the entire data path, firmware and data-management layer are architected end to end from the outset to unlock the full potential of flash media.
While SAS is widely considered to be approaching obsolescence, the future has not yet arrived and will take several years to unfold. This situation puts customers in the unenviable position of having to choose between continuing to purchase a technology that is at the end of its development path and putting the business at risk with a version-1.0 product whose new architecture is immature and unproven at enterprise storage scale. How will vendors protect their customers’ investments? What compromises or tradeoffs must be made along the way? Will it turn into yet another forklift transition?
This dynamic is an unfortunate consequence of continued attempts to use “commodity” designs that are ultimately driven by objectives other than achieving the greatest potential value from flash technology. The fundamental question remains: what is the best way to transform NAND flash into an enterprise storage array? SSDs are not the only answer, and customers do have a choice. Instead of being led through another architectural detour (compatibility with HDD form factors, or consumer technologies built to entirely different requirements) on the journey to the all-flash data center, customers can choose to invest in a proven flash fabric architecture—one designed with this singular goal in mind.
About the Author
Andrew Chen is a seasoned executive with over 20 years of experience in the storage industry. Andrew is currently the Senior Director of Product Management at Violin Memory, where he is responsible for systems and product strategy. He was previously at NetApp in a range of product-management and general management roles spanning FAS storage, the V-Series (Flex Array Virtualization) and the E-Series (including the EF all-flash array). Before NetApp, Andrew was in the hard-disk-drive industry at both IBM and Hitachi, where he held positions in engineering, manufacturing and business-line management. He holds a BSEE from the University of Michigan, an MSEE from the University of California, Los Angeles, and an MBA from the University of California, Berkeley.