Reliance on a traditional three-tier data center network design can no longer match the need for an enterprise to innovate while remaining financially viable. Instead, organizations are shifting toward designs that rely on a “start small and innovate quickly” concept. Often this means selecting a cloud model delivered by outside service providers while retaining sensitive functions in the enterprise. Implementing a strategy to support this approach involves operating privately owned data centers, as well as procuring communications services and cloud solutions. This article looks at three solutions for constructing a private cloud and incorporating public cloud services to create a hybrid cloud model, including the transport technologies and commercial offering trade space.
Three Tiers Worked for Years
For years, data center networks were designed following a three-tier blueprint. A Layer 3 core forms a packet backplane for flows around the data center. The core is fed by an aggregation layer that integrates various service modules with a Layer 2 switching function. Finally, an access layer physically attaches servers to the network. Server components are usually 1RU-high devices stacked in equipment racks and interconnected with top-of-rack (ToR) or end-of-row (EoR) switching platforms. Various forms of patch cabling and cross connections complete the architecture. It is a reliable and scalable model that served well in an environment of predictable enterprise growth and equipment costs.

When Layer 3 routing was much more expensive than Layer 2 switching, and when enterprises tended to retain absolute ownership of physical data center facilities, constructing three-tier networks in data centers made perfect sense. But rapidly changing applications, unending growth in storage volume and the availability of virtualized cloud resources have changed the landscape. Constructing a modern, agile and cost-attractive enterprise data center requires a fresh approach, incorporating privately owned as well as outsourced virtualized elements.
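One way to quantify how a tiered design behaves is the oversubscription ratio at the access layer: aggregate server-facing bandwidth divided by uplink bandwidth toward the aggregation layer. The short Python sketch below illustrates the arithmetic; the port counts and speeds are invented examples, not figures from any particular platform.

```python
# Illustrative sketch: access-layer oversubscription in a tiered design.
# All port counts and speeds below are hypothetical examples.

def oversubscription(server_ports: int, port_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of aggregate server-facing bandwidth to uplink bandwidth."""
    return (server_ports * port_gbps) / (uplinks * uplink_gbps)

# A ToR switch with 48 x 10G server ports and 4 x 40G uplinks:
ratio = oversubscription(48, 10, 4, 40)
print(f"{ratio:.1f}:1 oversubscribed")  # prints "3.0:1 oversubscribed"
```

A ratio of 1:1 is non-blocking; anything higher means east-west traffic between racks can be throttled at the uplinks, which is exactly the bottleneck the newer fabrics discussed below were designed to avoid.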
Hyperscale Needs a Different Blueprint
Hyperscale data centers recognized the bottlenecks from link oversubscription, potential blocking and latency issues in the three-tier model and turned toward novel architectures. One example is the leaf-and-spine or fat-tree approach, in which a core fabric is connected to pods comprising multiple aggregation and access switches that are incrementally added as needed. The pods are designed to match a set of requirements. For example, a colocation multitenant data center (MTDC) might design its core network to accommodate aggregation and access pods supporting storage and compute assets of various customers. Various approaches to data center networks have either been studied or are in trial; they include Jellyfish, DCell, BCube, FiConn and others. Jellyfish is an interesting example of “a degree-bounded, random graph topology among top-of-rack (ToR) switches. The inherently sloppy nature of this design has the potential to be significantly more flexible than past designs.”

By its name, hyperscale implies huge size. But that’s not necessarily the important distinction. A huge data center can be built with an oversubscribed three-tier internal network. Hyperscale data centers such as those built by Google or Amazon not only need the ability to create larger storage farms or increased compute capacity; they need to rapidly provision applications and deliver new services while adjusting to the associated compute, connect and store needs. Despite their overall massive size, hyperscale data centers can be established small and expanded quickly, and that’s true for any data center.
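The “degree-bounded, random graph” idea behind Jellyfish can be sketched in a few lines of Python. This is a simplified configuration-model wiring, not the published Jellyfish algorithm (which incrementally rewires links to avoid self-loops and parallel links); here those are simply discarded, and all switch and port counts are illustrative assumptions.

```python
# Simplified sketch of a Jellyfish-style random interconnect among ToR
# switches. Assumptions: uniform switches, a fixed number of inter-switch
# ports each; self-loops and duplicate links are dropped rather than rewired.

import random

def jellyfish(num_switches: int, ports_per_switch: int, seed: int = 0):
    """Randomly pair free inter-switch ports; return a set of (a, b) links."""
    rng = random.Random(seed)
    # One "stub" per free port on each switch.
    stubs = [s for s in range(num_switches) for _ in range(ports_per_switch)]
    rng.shuffle(stubs)
    links = set()
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:                              # skip self-loops
            links.add((min(a, b), max(a, b)))   # dedupe parallel links
    return links

topology = jellyfish(num_switches=16, ports_per_switch=4)
print(len(topology), "unique inter-switch links")
```

Because every switch contributes the same bounded number of ports, the degree bound holds automatically, while the random wiring gives the short path lengths and incremental expandability that make the approach attractive.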
What’s an Enterprise to Do?
How an enterprise decides to evolve its corporate data center depends on its business model, growth expectations, financial culture, staffing and expertise. A small business may need significant storage capacity and secure transactions, but operating a wholly private data center could be both a distraction and prohibitively expensive. A large corporation likely owns some of its own data centers yet needs higher degrees of agility and growth capacity.
Interconnecting data centers through the public Internet or by private connection is the basis of what we now call the cloud. Cloud services are available to perform compute and storage functions for enterprises of all sizes. Small and some medium-size businesses will simply purchase what they need from a selection of cloud providers, accessed through a communications-service provider. Other (especially larger) enterprises will form a virtual private data center comprising privately owned resources complemented by cloud solutions.

A private cloud can be formed by building a core data center with essential servers or storage farms located on the corporate campus, connected by a local core network. This facility can expand to include pods supporting other compute or storage functions locally in the enterprise data center to expand the private cloud, or remotely at a colocation data center to form a virtual private cloud.
Interconnecting data center locations is typically accomplished through a dedicated data center interconnect (DCI) transport service, such as an E-VPN, or through a private optical connection over leased dark fiber. Locating enterprise-owned assets at a remote location reduces costs while offering the ability to add or remove data pods as needed. This flexibility allows the enterprise to embrace innovative digital technologies that better serve its customers and expand market share while retaining a high degree of internal control.
Start Small and Innovate
Data centers can gain agility through the modularity of a core-pod and outsourced architecture. Data centers containing large server farms and high storage capacity become very complex to cross connect. Intra-data-center networks require a great deal of physical space, power and planning to operate and grow. Novel architectures such as leaf-spine or Jellyfish certainly help, but they assume that resources are contained within the data center.
DCI allows remote location of virtualized resources and is the basis of the cloud. Private clouds are formed by DCI between enterprise locations. Virtual private clouds are formed by DCI reaching colocation MTDCs. And hybrid clouds are formed by DCI extending across administrative boundaries to include public cloud services, usually through the Internet but increasingly through private connections in MTDCs. All of these DCI approaches combined provide a cloud interconnect solution that offers the enterprise the ability to add or delete compute and storage capabilities in concert with their business cycles. An enterprise can construct a traditional data center core that meets its initial needs and then add capacity through a cloud option. A smaller core capability is complemented by incremental resources delivered through the cloud.
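The economics of adding capacity through the cloud versus an owned pod can be framed as a simple break-even calculation. The sketch below is a hypothetical illustration only; the cost figures are invented assumptions, and real comparisons would also weigh staffing, DCI transport charges and egress fees.

```python
# Hypothetical break-even sketch: owned pod (fixed monthly cost) versus
# pay-as-you-go cloud capacity (hourly rate). All figures are invented
# for illustration; they are not from the article or any provider.

def breakeven_hours(pod_monthly_cost: float, cloud_hourly_rate: float) -> float:
    """Monthly usage (hours) above which an owned pod is cheaper."""
    return pod_monthly_cost / cloud_hourly_rate

# e.g. a pod amortizing to $7,200/month vs. $20/hour of equivalent cloud capacity:
print(breakeven_hours(7200, 20), "hours/month")  # prints "360.0 hours/month"
```

Below the break-even utilization, releasable cloud capacity wins; above it, the owned pod does. That is precisely why matching capacity to business cycles, rather than building it all up front, matters.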
The “start small and innovate” idea was described in a 2014 InfoWorld article by Forster and Lapukhov. They describe how hyperscale data center operators, such as Microsoft or eBay, have used this approach in the data center network, allowing them to match incremental demand for storage and compute capacity using pods that are designed, deployed, operated and retired as a unit. This concept need not be constrained to the data center but should extend to virtualized resources in a private or virtual private architecture.
An enterprise that designs its data center to start small, with the intent of adding pods to its private data centers and virtualized resources through a cloud implementation, gains enormous agility to make use of powerful digital tools such as analytics, content distribution, network load balancing, identity management, security optimization, CRM and countless others.
Some of these tools would otherwise be out of reach for smaller enterprises and could prove capital intensive even for larger ones. Innovating quickly means that compute and storage resources must be easily accessible and just as easily releasable. Enterprises that address the changing size and scope of data centers through a “build it big and expand it later” philosophy do so at high cost and at great risk of slowing their ability to change later.
About the Author
Chris Janson is currently Director of Industry Strategies at Alcatel-Lucent. In this role, he follows networking-technology trends and their application to enterprise customers, including finance, health care, utilities, transportation and government. He also serves on the boards of directors of the Rural Telecommunications Congress and OpenCape Corporation. He holds an MBA from Boston University and a Bachelor of Science in engineering from Wentworth Institute of Technology.