Enterprises have undergone profound changes thanks to paradigm shifts in the nature and volume of business-critical data. Over the past 20 years, the rapid emergence of applications that create and use unstructured data such as video, documents, logs and even social-media sentiment has driven an exponential increase in the data we collect, store and employ to make critical business decisions. But the underlying architecture of enterprise storage systems has largely remained the same since the days when we measured storage in terabytes, even though we’ve raced into the era of petabytes and exabytes, with no signs of slowing.
All these dynamics have created a crisis for senior IT staff. They must deliver greater degrees of data availability to meet 24/7 business demands while managing ever-growing pools of all types of data in hybrid multicloud environments. IT teams are in dire need of solutions that tackle these scalability, availability and performance challenges while reducing total cost of ownership and demonstrating returns on infrastructure investments.
IT Spending Under the Magnifying Glass
Today’s “app for that” economy demands unprecedented performance from IT systems and fuels the appetite for data across the enterprise. In some instances, an organization’s storage deployments are unable to handle the data deluge.
Organizations in a global economy must operate with a 24/7 mentality, as business users demand truly continuous uptime for every application and data asset. Security and governance requirements complicate meeting that demand: given GDPR and other regulations, nation-state cyberattacks, and the prevalence of commoditized ransomware, data must be stored, processed and tracked securely in accordance with best practices.
IT-budget pressures can make funding technology solutions extremely difficult. The lack of a clear upgrade path for many technologies means forklift upgrades are often the norm, even when everyone knows they’re a poor idea. As the data ecosystem evolves, organizations must invest in optimizing their data strategy and systems if they want a fighting chance with customers, competitors and emerging challenges.
The Changing Nature of Data
Data is a driver of the expansion of global markets for just about every kind of good and service. Structured data is the powerhouse that has fueled commerce for decades, and demand for network-attached storage (NAS) and storage-area networks (SANs) will only increase as the volume of transactions hitting enterprise databases continues to climb. But unstructured data has become more central over the past decade with the rise of mobile, advertising technology and social-media platforms. Industry studies consistently estimate that unstructured data makes up roughly 80% of corporate data today, and it's also the fastest-growing category. The Internet of Things (IoT) promises to serve up billions of new data sources in just a few years, deepening the impact of unstructured data on enterprise IT.
Digital technologies are transforming everything from daily personal life to education to government services to global supply chains, so every business is in some way data defined. Data-intensive uses such as predictive analytics and deep learning are emerging at an unprecedented pace, promising far-reaching disruption.
Despite the myriad ways in which data types and uses have evolved, many enterprise storage systems have remained unchanged. Numerous organizations are trying to cram new data types, such as video, into systems designed for files and tables. Traditional storage systems provide critical functions, but they can be slow to deploy and difficult to scale quickly. Likewise, RAID technology is now more than 30 years old and is no longer the most agile solution for large data sets. Disk capacity has continued to increase, but disk rotation speed has effectively maxed out at 15,000 RPM, so per-drive throughput has not kept pace with capacity and the time needed to read or write a full drive keeps growing. The expanding size of disk drives also extends the time required for rebuilds in the event of a drive failure. Even worse, if a second drive fails during a rebuild, data loss can occur.
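To put the rebuild problem in perspective, here's a rough back-of-the-envelope calculation in Python. The capacity and throughput figures are illustrative assumptions, not specs for any particular drive.

```python
# Back-of-the-envelope estimate of the minimum time to rebuild a failed
# disk in a traditional RAID set. All figures are illustrative assumptions,
# not measurements of any particular drive.

def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Best case: every byte of the drive rewritten at full sequential speed."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / throughput_mb_s / 3600

# A 2 TB drive at ~150 MB/s vs. a 16 TB drive at ~250 MB/s:
print(f"2 TB:  {rebuild_hours(2, 150):.1f} hours")   # ~3.7 hours
print(f"16 TB: {rebuild_hours(16, 250):.1f} hours")  # ~17.8 hours
```

And that's the best case: in practice, rebuilds compete with foreground I/O, so real-world times are often far longer, which is exactly what widens the window for a second failure.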
Rethinking Storage in an Exabyte World
So how does IT keep the data ship afloat and learn to steer it efficiently? Many organizations are in the midst of a dramatic shift toward vast object data sets with huge scaling requirements, forcing them to rethink how they deploy and use storage. They often augment existing SAN and NAS devices with object storage to gain the same kind of flexibility and cloud economics that virtual machines brought to compute. Some mega-data players—such as Facebook, Instagram and Shutterfly—take an object-first approach to storage.
As companies rise to the data-transformation challenge, many begin by replacing purpose-built storage controllers with industry-standard, scale-up and scale-out hardware that’s readily available and easy to procure. Employing standard hardware components enables enterprises to adopt the software-defined (SD) approach that the rest of the data center has already implemented. They then reapply SD principles to storage, with an emphasis on high scalability.
Building on lessons learned from network virtualization, object storage can also separate the data, connectivity and management planes so each can scale as needed without affecting the other system elements. Using industry-standard hardware for each of those planes in combination with innovative storage solutions offers additional advantages (a brief sketch of the pattern follows the list):
- Increased uptime: Guarantee essentially continuous uptime for data in the storage system by interconnecting multiple nodes to form a mesh.
- Inherent connectivity: Enable delivery of any type of data over any protocol, empowering virtually any user or device to access all business-essential data (whether via object, NFS or CIFS) as applications demand.
- A supervisor server: Provide comprehensive management and automatic rerouting around any failed system element, whether disk, controller, power supply or entire server, until it can be replaced. Seamlessly add the element back into the cluster to further ensure continual availability for all users.
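To make the supervisor pattern concrete, here is a minimal Python sketch of failure-aware routing, assuming a simple hash-based placement scheme. The Node and Supervisor classes are hypothetical illustrations of the pattern, not any vendor's actual API.

```python
import zlib

# Hypothetical sketch of supervisor-driven rerouting in a storage mesh.
# Node, Supervisor and their methods are illustrative names for the pattern
# described above, not a real product's API.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

class Supervisor:
    def __init__(self, nodes: list[Node]):
        self.nodes = nodes

    def mark_failed(self, name: str) -> None:
        """Take a failed element out of rotation until it's replaced."""
        for node in self.nodes:
            if node.name == name:
                node.healthy = False

    def mark_repaired(self, name: str) -> None:
        """Add the replaced element back into the cluster."""
        for node in self.nodes:
            if node.name == name:
                node.healthy = True

    def route(self, object_key: str) -> Node:
        """Send a request to a healthy node, skipping failed elements."""
        healthy = [n for n in self.nodes if n.healthy]
        if not healthy:
            raise RuntimeError("no healthy nodes available")
        # Deterministic placement: hash the object key onto a healthy node.
        return healthy[zlib.crc32(object_key.encode()) % len(healthy)]

sup = Supervisor([Node("n1"), Node("n2"), Node("n3")])
sup.mark_failed("n2")                        # e.g., a disk or controller dies
print(sup.route("videos/clip-42.mp4").name)  # request lands on a healthy node
sup.mark_repaired("n2")                      # element rejoins the mesh
```

A real system would also re-replicate the failed node's data elsewhere in the mesh; the point here is simply that health tracking, rerouting and repair live in the management plane, so clients never see the failure.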
This approach to object storage provides the ability to scale capacity and performance while providing exceptional availability. It enables an enterprise to remove bottlenecks anywhere in the system and add capacity only where it’s truly needed. Having the flexibility to support any data type from any client anywhere in the world enables object storage to integrate smoothly with existing Tier One SAN or NAS storage, data, applications and users.
As organizations of all kinds increasingly turn to a hybrid IT environment that encompasses both on-premises systems and multiple clouds, the ability to move data easily between hosted and local environments means storage stays in step with enterprise strategy as business objectives change. The economic benefits are multifold as well. Using industry-standard hardware enables organizations to get the best value from every object-storage dollar they spend. The roll-as-you-go model of object storage gives IT the ability to scale storage, connectivity and management independently of one another. IT and departmental budgets aren’t wasted on unnecessary upgrades to every plane simultaneously.
Data, both structured and unstructured, is a major asset in most enterprises, and operations increasingly rely on fast, always-on data systems. Companies don't try to conduct business without financial institutions to manage their money and utilities to keep buildings and equipment running, so why try to manage the surging growth of critical data without the proper data solutions and infrastructure? Enterprises have to make a choice: invest in building a vessel robust enough to ride the exabyte waves or risk drowning in a sea of untamed data and lost opportunities.
About the Author
Paul Speciale is chief product officer at Scality, where he leads product management and is responsible for defining RING functions, solutions and roadmaps. Before Scality, he was part of several cloud-computing and early-stage storage companies, including Appcara, Q-layer and Savvis. In the storage space, Paul was VP of products for Amplidata, focusing on object storage, and for Agami Systems, building scalable high-performance NAS solutions. Paul has more than 20 years of industry experience spanning both Fortune 500 companies, such as IBM (twice) and Oracle, and venture-funded startups. He's also been a speaker and panelist at many industry conferences. He loves backpacking and cars, and he's a long-standing fan of football and basketball at UCLA, from which he holds master's and bachelor's degrees in applied mathematics and computing.