Data Center Trends
In recent years, the rapid development of the Internet has led to the emergence of new businesses such as e-commerce, social media and Internet financial services. With the use of smart mobile devices growing, both our professional and personal lives are connected to the Internet more closely than ever before, creating a huge demand for data. Living in the era of “big data,” we generate, process and store enormous amounts of data every day. More and more businesses have therefore become aware that managing data effectively will be a critical factor in their future success. As a result, data centers have become one of the fastest-growing market segments.
Recent forecasts show that future global demand for Ethernet optical transceivers will come mainly from the rapid growth of large-scale data centers, telecommunication operators and traditional enterprises. According to Dell'Oro Group's 2015 forecast of worldwide Ethernet server shipments, 10GbE servers will hold the largest market share in 2016, and the share of 25GbE servers will grow rapidly in 2017.
In the past, traditional enterprise data centers focused mainly on data storage and disaster-recovery preparation; they placed few demands on real-time multiuser data retrieval or on the capacity to absorb sudden bursts of high-volume access. With the advent of big data, however, data center operations are gradually shifting from data storage toward on-demand, real-time data analysis and processing. Businesses and individual consumers alike are demanding more real-time data access. Many small to medium-size enterprises (SMEs) cannot support large data services with self-built, large-scale data centers; cloud computing can meet that need. Some forecasts predict that one-third to one-half of enterprise IT budgets will go to cloud services in the future.
Driven by the rapid growth of cloud computing, large data centers are developing by leaps and bounds. Many global Internet giants now treat cloud services as an important strategy for the future. They search worldwide for suitable places to build mega data centers, hoping to capture a greater share of the global cloud-computing market. Even emerging Internet giants that entered the market later have come to realize that cloud computing is critical to the future of the entire networking industry, and they have increased their investments in data centers both domestically and abroad.
Three-Level Network Structure vs. Two-Level Spine-and-Leaf Network Structure
Whereas traffic in a traditional enterprise data center is dominated by local client-to-server interactions (north-south), traffic in a large Internet data center is dominated by the server-to-server traffic (east-west) required for cloud-computing applications. These data centers serve huge numbers of users with diversified, fragmented demands who expect an uninterrupted experience. Internet data centers therefore require higher bandwidth and a more efficient network architecture to deal with traffic spikes from large numbers of users, such as the demand for online music, video, gaming and shopping.
The current mainstream three-level tree network architecture is based on the traditional north-south transmission model. When a server needs to communicate with a server on a different network segment, traffic must traverse the path access layer–aggregation layer–core layer–aggregation layer–access layer. In a big data service with thousands of servers communicating in a cloud-computing environment, this model is inefficient: it consumes a large amount of system bandwidth and introduces latency. To address these problems, the world's large Internet data centers have in recent years increasingly adopted the spine-and-leaf network architecture, which is better suited to transferring data between servers (east-west). See Figure 1.
This network architecture consists of two parts: a spine switching layer and a leaf switching layer. Its key feature is that every leaf switch connects to every spine switch in a pod, which greatly improves communication efficiency and reduces latency between servers. In addition, the two-level spine-and-leaf architecture avoids the need to purchase expensive core-layer switching devices, and it makes it easier to add switches and network devices gradually as business needs grow, reducing the initial investment.
Figure 1: Traditional three-level vs. spine-and-leaf two-level network architecture.
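To make the full-mesh idea concrete, the short Python sketch below enumerates the links in a hypothetical pod. The switch counts are illustrative only and do not come from any particular product or from the example later in this article.

```python
# Minimal sketch of a two-level spine-and-leaf fabric: every leaf connects
# to every spine, so the fabric is a full bipartite mesh of links.
from itertools import product

SPINE_SWITCHES = 4   # hypothetical pod with 4 spine switches
LEAF_SWITCHES = 16   # hypothetical pod with 16 leaf switches

links = [(f"leaf-{l}", f"spine-{s}")
         for l, s in product(range(1, LEAF_SWITCHES + 1),
                             range(1, SPINE_SWITCHES + 1))]

print(f"Fabric links: {len(links)}")  # 16 leaves x 4 spines = 64 links
print("Worst-case path between servers: leaf -> spine -> leaf")
```

Because any server is at most two switch hops from any other (leaf to spine to leaf), east-west traffic never has to climb through a core layer.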
Dealing With the Cabling Challenges of a Spine-and-Leaf Two-Level Architecture
Data center managers encounter some new problems when deploying a spine-and-leaf two-level architecture. Because each leaf switch must connect to each spine switch, managing the sheer quantity of cabling becomes a major challenge. Corning’s mesh interconnection module (Table 1), for example, addresses this problem.
Table 1: Mesh-module product description.
Many users have now started employing high-density 40GbE switch line cards, broken out into 10GbE, for 10GbE applications. For example, a high-density 10GbE SFP+ line card has 48x10GbE ports, whereas a high-density 40GbE QSFP+ line card may have 36x40GbE ports. Breaking out the 40GbE card therefore yields 4x36 = 144x10GbE ports in the same cabling space and under the same power-consumption conditions, reducing the per-port cost and power consumption of 10GbE.
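As a quick back-of-the-envelope check of that density claim, the sketch below uses only the port counts cited above; it is an illustration, not a product specification.

```python
# Compare 10GbE port density per line card using the counts from the article.
SFP_PLUS_CARD_PORTS = 48   # 48 x 10GbE on a high-density SFP+ card
QSFP_CARD_PORTS = 36       # 36 x 40GbE on a high-density QSFP+ card
BREAKOUT = 4               # each 40GbE QSFP+ port breaks out into 4 x 10GbE

ten_gbe_via_breakout = QSFP_CARD_PORTS * BREAKOUT
print(f"10GbE ports from one QSFP+ card via breakout: {ten_gbe_via_breakout}")   # 144
print(f"Density gain vs. native SFP+ card: "
      f"{ten_gbe_via_breakout / SFP_PLUS_CARD_PORTS:.1f}x")                      # 3.0x
```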
Figure 2 shows three typical applications of mesh modules in the cabling system. Four QSFP 40GbE channels (A, B, C and D) are broken out into 4x4 channels of 10GbE at the MTP input of the mesh module. The 10GbE channels are then shuffled inside the mesh module such that the four 10GbE channels associated with QSFP transceiver A are split across the four MTP outputs. The result is that the four SFP+ transceivers connected to one MTP output receive a 10GbE channel from each of the QSFP transceivers A, B, C and D. Thus, we achieve a fully meshed 10GbE fabric connection between the QSFP spine-switch ports and the leaf-switch ports without ever having to break out to LC connections at the MDA.
Figure 2: Three typical applications of mesh modules in the cabling system.
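The shuffle described above can be sketched as a simple mapping. The Python below only illustrates the channel redistribution; the labels (A1, MTP-out-1 and so on) are hypothetical, and the actual fiber mapping is product-specific.

```python
# Sketch of the mesh module's channel shuffle: four 40GbE QSFP+ ports (A-D)
# are each broken out into four 10GbE channels, and the channels are
# redistributed so every MTP output carries one channel from each QSFP+ port.
qsfp_ports = ["A", "B", "C", "D"]

# Input side: 4 x 4 = 16 10GbE channels, grouped by QSFP+ port.
channels = {port: [f"{port}{i}" for i in range(1, 5)] for port in qsfp_ports}

# Shuffle: MTP output n takes channel n from every QSFP+ port.
mtp_outputs = {f"MTP-out-{n}": [channels[port][n - 1] for port in qsfp_ports]
               for n in range(1, 5)}

for name, chans in mtp_outputs.items():
    print(name, chans)
# MTP-out-1 ['A1', 'B1', 'C1', 'D1']
# MTP-out-2 ['A2', 'B2', 'C2', 'D2']
# ... and so on for outputs 3 and 4
```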
The example below shows how to optimize the cabling structure of a spine-and-leaf configuration at the main distribution area (MDA). Assume a leaf switch with a 48x10GbE SFP+ port line card and a spine switch with four 36x40GbE QSFP+ line cards. With an oversubscription ratio of 3:1, the 16x10GbE uplink ports of each leaf switch must connect to 16 spine switches. Because each 40GbE port on the spine switch operates as four 10GbE ports, each spine switch can connect up to 4x36x4 = 576 leaf switches, as Figure 3 shows.
Figure 3: The spine-and-leaf two-level network topology in a 10GbE application.
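The arithmetic behind those numbers can be laid out in a few lines of Python; the values below are taken directly from the example above.

```python
# Worked sizing for the article's example: a 48 x 10GbE leaf line card with a
# 3:1 oversubscription ratio, and spine switches built from four 36 x 40GbE
# QSFP+ line cards, each 40GbE port broken out into 4 x 10GbE.
LEAF_SERVER_PORTS = 48    # 10GbE server-facing ports per leaf switch
OVERSUBSCRIPTION = 3      # 3:1 downlink-to-uplink bandwidth ratio
SPINE_LINE_CARDS = 4
QSFP_PORTS_PER_CARD = 36
BREAKOUT = 4              # 40GbE -> 4 x 10GbE

leaf_uplinks = LEAF_SERVER_PORTS // OVERSUBSCRIPTION
spine_10g_ports = SPINE_LINE_CARDS * QSFP_PORTS_PER_CARD * BREAKOUT

print(f"Uplinks per leaf (= spine switches required): {leaf_uplinks}")      # 16
print(f"10GbE ports per spine (= max leaf switches): {spine_10g_ports}")    # 576
```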
Figure 4: Full cross-connection cabling-structure comparison of the spine-and-leaf network architecture MDA.
If traditional cabling is used to achieve a full fabric mesh of the spine and leaf switches, each 40GbE QSFP+ port on a spine switch is broken out into 4x10GbE channels through an MTP-to-LC module in the MDA and then cross-connected with jumpers to the corresponding MTP-to-LC modules that connect to the 10GbE channels of the leaf switches (as the left side of Figure 4 shows). This traditional method has not been widely used because the cabling system is very complex, the cost is relatively high and it requires a lot of rack space at the MDA. A mesh module resolves these problems. As the right side of Figure 4 shows, with a mesh module in the MDA, the full mesh of the leaf switches is achieved without having to break out the 40GbE ports of the spine switches into 10GbE channels via MTP-to-LC modules. This approach greatly simplifies the MDA cabling structure and can be of great value to the user, as Table 2 shows.
Table 2: Advantages of a mesh module in the MDA.
Conclusion
As network bandwidth requirements for the data center increase, the data center backbone network is gradually being upgraded from 10GbE to 40GbE, and it will reach 100GbE in the future. By breaking out 40GbE into 4x10GbE now and 100GbE into 4x25GbE in the future, the spine-and-leaf network will remain an economical and efficient structure for supporting large-scale data transmission. Using the mesh module to achieve a full fabric mesh of the spine-and-leaf network supports the current 40GbE network and also allows a seamless transition to future 100GbE network capabilities.
Leading article image courtesy of Lars P. under a Creative Commons license
About the Author
Rui (Justin) Ma, Enterprise Networks Marketing Manager for Corning Optical Communications in APAC, has more than 10 years of experience in the design, installation and promotion of optical structured cabling systems. He is a subject-matter expert in the deployment and acceptance of high-density, advanced optical structured cabling solutions for the data center industry. Rui holds a master’s degree in optical engineering and an MBA in marketing.