The year 1993 was memorable for many reasons. It was the year Michael Jordan retired, PLO leader Yasir Arafat and Israeli prime minister Yitzhak Rabin signed a peace accord intended to end 45 years of fighting, Jurassic Park was released in cinemas across the globe, Intel launched its first Pentium processor, and the term “spamming” was coined after a bug in a program sent an article to 200 newsgroups simultaneously. The data centers of 1993 were not designed to handle critical data. At the time, only 1 percent of telecommunications traffic crossed the Internet, and the Mosaic web browser had only just been launched. Smartphones and tablets were seen only on sci-fi TV shows such as Star Trek: The Next Generation. How times have changed.
Compared with 20 years ago, the devices that use data center networks today span every aspect of life. For example, today children have access to smartphones and tablets, with each device boasting more storage and compute capacity than the most advanced servers of the 1990s. With 34 percent of the world’s population (more than 2.5 billion people) enjoying access to the Internet, and with businesses relying on IP-based communications (voice and video) as well as cloud services, the demands being placed on today’s data centers are immense. But many of the data centers that exist today—and underpin mission-critical services in both the commercial and domestic environments—are based on 20-year-old technologies, and the simple fact is that they can no longer keep up with demand. Analysts predict that the number of servers will increase tenfold in the next three years and that 70 percent of all traffic in data centers by 2015 will be between servers (east-west traffic). How can a network frozen in time cope with such demand?
Users are now paying the price for this lack of innovation by businesses. Data centers can no longer be operated with a “sticking plaster” mentality: one that tries to sweat the most out of legacy infrastructure. IT needs to drive the business forward, but this approach is costly in both money and personnel at a time when cost reduction is a top priority, and it must stop. Data center owners and business leaders need to embrace innovation now. So, how does one approach the data center of the future?
Businesses (and their IT departments) need to recognize the three key trends that have made data center networks such a critical part of modern commerce.
The first is virtualization. Two decades ago (and even more recently), applications sat on individual servers and were supported by their own storage. This design was simple, but it was limited by physics: when more applications were needed, more servers and storage had to be deployed, and this expansion depended on the physical floor space in a data center. No space meant no more servers or applications, so IT departments began to virtualize applications and machines. This approach required a much more elegant and robust network topology to provide the raw performance and management flexibility to deal with the uplift in east-west traffic, as well as the “traditional” north-south traffic.
Second, with the advent of faster networks, more (virtual) data center capacity and tougher security requirements, network resilience is critical. As companies empower mobile working, a secure and reliable network is paramount. And with businesses adopting cloud-based services, whether at the application level (CRM tools such as salesforce.com) or by outsourcing their entire IT requirements to hosting providers, the need for 24/7 network accessibility and resilience has never been greater.
Finally, modern data centers have to deal with more volume and must service more users, applications and data. This demand requires a highly flexible and scalable data center as well as virtualized machines that provide elasticity, allowing providers to better respond to the needs of the customer, on demand.
Data Center of the Future: The “On-Demand” Data Center
The question is, how will data centers now and in the future respond to these trends and developments? A new approach is required: one that represents another major evolution in networking toward a highly virtualized, open and flexible network infrastructure, and one that will evolve with new technologies and practices, such as software-defined networking (SDN). With an infrastructure that combines physical and virtual networking elements, customers can provision the capacity—compute, network, storage and services—required to deliver high-value applications faster and more easily than with legacy data center networks.
How can businesses begin to make the transition to the data center of the future? There are a few steps to consider:
1. Solid Foundations
At the heart of any data center is the physical networking infrastructure, which provides the connectivity between applications, servers and storage. Not all networking infrastructures are equal, however. Businesses that want to embrace a highly flexible and agile on-demand model, one that delivers a blueprint unifying vital areas of the data center, from fabrics to storage to physical and virtual infrastructure, require a fabric-based networking topology. A fabric-based network, at both the IP and storage layers, simplifies network design and management to address the growing complexity in IT and data centers today, and delivers key features such as logical chassis, distributed intelligence and automated port-profile migration. Fabric-based networks are also better suited to highly virtualized data centers, supporting techniques such as VM mobility within a fabric and across data centers, and thereby provide the ideal hardware foundation for the on-demand data center.
2. Virtual Infrastructure
On top of the physical infrastructure will be a virtual or logical layer. This is well established in the server domain with hypervisor technology. The same concepts are now being applied to both storage and IP networks with technologies such as overlay networks, enabled through a variety of tunneling techniques. Next we will see network services virtualized, thanks to the introduction of virtual switches and routers. “NFV,” or network functions virtualization, represents an industry movement toward software or VM-based form factors for common data center services. Customers want to realize the cost and flexibility advantages of software rather than continuing to deploy specialized, purpose-built devices for services such as application delivery controllers. This is especially the case in cloud architectures, where these services need to be commissioned and decommissioned with mouse clicks rather than physical hardware installations and moves.
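To make the tunneling behind overlay networks concrete, here is a minimal sketch that builds a VXLAN header (per RFC 7348), the encapsulation many overlay networks use to carry a tenant's Ethernet frame across the shared physical IP fabric. The header layout follows the RFC; the inner frame and VNI value are hypothetical placeholders, and a real VTEP would add outer UDP/IP/Ethernet headers as well.

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: a valid VNI is present (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a 24-bit VNI.

    Layout: 1 byte flags, 3 reserved bytes, 3-byte VNI, 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Two 32-bit big-endian words: flags in the top byte of the first,
    # the VNI shifted over the final reserved byte in the second.
    return struct.pack("!II", VXLAN_FLAGS << 24, vni << 8)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prefix a tenant Ethernet frame with the VXLAN header."""
    return vxlan_header(vni) + inner_frame

# Hypothetical tenant segment 5000 carrying a 64-byte dummy frame.
packet = encapsulate(5000, b"\x00" * 64)
```

Because each tenant's traffic is identified by its VNI rather than by physical wiring, a VM can move anywhere in the fabric and keep its logical network, which is exactly the decoupling the overlay provides.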
3. Controllers

In addition to the physical and virtual/logical layers will be controllers (for the network, servers and data storage). One such example is the network controller, which is implemented in software, tracks the status of the network and provides well-defined KPIs. The complete architecture is built around applications that directly affect the underlying infrastructure, helping to ensure the best possible application uptime, performance and security.
4. Orchestration Frameworks
Finally, the entire data center environment must be managed by orchestration frameworks that allow for the rapid, end-to-end provisioning of virtual data centers. There are many approaches in the market, such as VMware vCloud Director and the OpenStack community. OpenStack, for example, allows customers to deploy network capacity and services in their cloud-based data centers far more quickly than with legacy network architectures and provisioning tools.
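As a sketch of what such orchestration looks like in practice, the illustrative OpenStack Heat template below declares a tenant network, subnet and server as a single stack; all names, the image and the flavor are placeholders.

```yaml
heat_template_version: 2013-05-23

description: Minimal tenant slice - one network, one subnet, one server

resources:
  app_net:
    type: OS::Neutron::Net
    properties:
      name: app-net

  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: app_net }
      cidr: 10.0.0.0/24

  app_server:
    type: OS::Nova::Server
    properties:
      name: app-server-1
      image: cirros        # placeholder image name
      flavor: m1.small     # placeholder flavor
      networks:
        - network: { get_resource: app_net }
```

Submitted to Heat, the whole slice is provisioned, and later deleted, as a unit, which is the end-to-end, on-demand behavior legacy provisioning tools cannot match.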
The data center of the future will be a combination of the most valuable aspects of the physical and virtual layers. Such a data center will give organizations the ability to flexibly deploy data center capacity—compute, networking, storage and services—in real time, whenever and wherever they need it. Additionally, the simplified management and elastic nature of such a data center design will also deliver much-improved ROI (thanks to scaling, multi-tenancy, and time and money savings).
So, an organization wanting to make the journey to the on-demand data center must look for technology partners that are focused on delivering a network infrastructure enabling this vision and that can demonstrate a network delivering the following:
- New levels of network simplification and automation that enable the deployment of network capacity far more quickly than with legacy networking technologies
- Dramatic improvements in network efficiency, utilization and performance to ensure that the network is performing optimally today, fully using all available capacity, and with the headroom to massively increase network capacity over time without having to rip and replace infrastructure or revisit the architecture
- The agility to deploy network services in both virtual and physical form factors and to use network virtualization and programmatic control to influence network behavior in ways never before possible
Find this and you are on the way to creating a real data center environment for the future.
About the Author
As Vice President of Brocade’s Data Center Switching and Routing business, Jason Nolet is responsible for defining the company’s vision, strategy and execution plans to drive business results for all IP/Ethernet data center switching and routing products, as well as network management and cloud orchestration solutions. He joined Brocade in 2009, bringing nearly 25 years of experience in the networking industry.
Before Brocade, Jason held a variety of executive positions at Cisco during a 13-year period. He was most recently Vice President of Engineering, leading product development for a wide range of security products and solutions. Before Cisco, he spent more than a decade in engineering leadership roles at Stratus Computer, AT&T Global Information Systems and NCR Corporation.
Jason has a BS in Business Administration and Information Systems from San Diego State University.