Technologies such as cloud computing and big data have made once-luxurious computing speeds commonplace. Centralized computing offers scalable solutions to companies, universities, individuals and others who offload computationally intensive tasks (data mining and artificial intelligence in particular) to data centers designed to handle them.
This situation challenges data center operators trying to outpace both competitors and the growth in demand for computing power. To match that growth, suppliers must make considerable improvements to their server setups, but the electronic technologies found in today’s consumer products may be insufficient to meet the demands placed on data centers.
Optical fiber and photonics offer a fundamentally different medium for transmitting, and potentially processing, information. Fiber optics has already become a standard even at the consumer level, with Google Fiber being a notable provider. But what’s happening at the ends of these fibers, the proverbial last-mile problem of converting between electrical and optical signals, may prove to be the next important focus for data center performance.
Single-mode cables like the two shown here can generally propagate signals farther than multimode cables, and with higher bandwidth, but at the cost of more expensive and more precise light sources. Single-mode fibers, whose light-carrying cores are only about nine microns in diameter, serve in both transoceanic cables and shorter-distance applications.
The History of Fiber Optics
Although electronics have always been the standard in computation and wired telecommunications, light was part of the picture from the very beginning. Notably, the first wireless telephone, invented in 1880 by Alexander Graham Bell and Charles Sumner Tainter, modulated light to transmit sound. Bell, who also patented the telephone, considered the wireless photophone his greatest achievement; on his deathbed he told a reporter it was “the greatest invention [I have] ever made, greater than the telephone.”
Electricity powered the Second Industrial Revolution, however, and for decades afterward optical communication received little attention from researchers. Corning Glass Works put the field back on the map in 1970, when it developed the first optical fiber with signal losses low enough for practical telecommunications. Only seven years later, optical fiber was entering commercial service.
How Optical Telecommunications Has Grown
Nearly 40 years after Corning’s advancements, fifth-generation fiber-optic communications are now under development. Some research is addressing the challenges that would’ve been familiar to the first generation of fiber-optic researchers, but some of the science under development is novel.
Light reflecting inside an optical fiber. Fibers that allow multiple paths (i.e., “modes”) for transmitted light, such as those depicted here, are called multimode fibers. Their acceptance cone can be large, making it possible to use inexpensive LEDs for transmission. Single-mode fibers require more-precise light emissions, such as those from a laser. (Image courtesy of David Eccles via Wikimedia Commons)
For example, Corning’s original challenge was eliminating contaminants that caused far higher signal loss in optical fibers than in existing copper lines.
Now, researchers are working to extend the wavelength range over which wavelength-division multiplexing (WDM) systems can operate by developing so-called dry fiber, which removes the water-related absorption peak that blocks part of the usable spectrum. This improvement alone offers a potential sevenfold increase in network bandwidth, with additional research promising further upgrades. Researchers are also hoping to use optical solitons, pulses that maintain their shape as they travel, to address challenges facing fiber optics, an approach that would likely be totally unfamiliar to the first generation of fiber-optic scientists.
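To get a feel for the arithmetic behind that kind of claim, consider a back-of-the-envelope sketch in Python. The band edges and the 50-GHz channel grid below are illustrative assumptions (roughly the conventional C-band versus the full window that opens once the water peak is gone), not figures from the research itself; the exact multiple depends on which bands are counted as usable.

```python
# Back-of-the-envelope WDM channel count: how many fixed-spacing
# channels fit in a given wavelength window. Band edges and the
# 50 GHz grid are illustrative assumptions, not measured values.

C = 3.0e8  # speed of light in vacuum, m/s

def channel_count(short_nm, long_nm, spacing_hz=50e9):
    """Number of WDM channels that fit between two wavelengths."""
    f_high = C / (short_nm * 1e-9)  # shorter wavelength -> higher frequency
    f_low = C / (long_nm * 1e-9)
    return int((f_high - f_low) // spacing_hz)

# Conventional C-band only (roughly 1530-1565 nm).
c_band = channel_count(1530, 1565)

# Full window once the water-absorption peak near 1383 nm is
# removed by "dry" fiber (roughly 1260-1625 nm).
full_window = channel_count(1260, 1625)

print(f"C-band channels:      {c_band}")
print(f"Full-window channels: {full_window}")
print(f"Increase: {full_window / c_band:.1f}x")
```

Under these assumptions the full window holds many times more channels than the C-band alone; real systems capture less of that spectrum because amplifiers and transceivers don’t cover the whole window uniformly.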
The Current State of Telecommunications
Intel announced in 2015 that its pace of innovation on integrated circuits (ICs) had begun slowing as early as 2012. This announcement created concerns among observers that physical constraints would end the doubling rate of transistor counts on ICs, better known as Moore’s Law.
Record transistor counts on IC chips over time. The regular doubling of transistor counts was a trend first noted in 1965 by Gordon E. Moore, then director of R&D at Fairchild Semiconductor. (Image courtesy of Our World in Data)
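The trend itself is simple arithmetic, as the short Python sketch below shows. The 2,300-transistor starting point is the Intel 4004 of 1971; the strict two-year doubling period is an idealized assumption, and exactly the one now in question.

```python
# Moore's Law as arithmetic: counts double every `doubling_years`.
# The 2,300-transistor start is the Intel 4004 of 1971; the fixed
# two-year period is the idealized assumption now in question.

def projected_transistors(start_count, start_year, year, doubling_years=2.0):
    """Transistor count projected forward from a known data point."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{projected_transistors(2300, 1971, year):,.0f} transistors")
```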
Such a dramatic change would be felt across the technology sector, with institutions at the bleeding edge of computing feeling the pain first; data centers would be among the earliest forced to respond to any slowdown in innovation.
Although electronic processors look to be reaching their limits, network speeds have continued to grow. The popularity of fiber-optic networks presents a great opportunity for improving those speeds further, but the new bottleneck is the electronics inside the data centers themselves. What if fiber optics could not only carry information but also process it?
Moving Optical Fiber Into Data Centers
The field of silicon photonics has received more attention in recent years as optical fiber takes the lead as the standard long-distance information carrier. Optical interconnections between servers and clients have increased data speeds dramatically, but data centers must make similar improvements internally to keep up with the growth of their clientele.
Silicon photonics is a possible solution to this need for faster data center connections. These optical links can be anywhere from tens of kilometers to mere centimeters long, with differing information protocols and physical constraints at each end of the scale. At the long end, achieving high-bandwidth transmission means bearing the high cost of working with single-mode fibers. These fibers have cores only about nine microns in diameter and require expensive lasers for transmission as well as equally precise “microphotonic” circuits for signal processing.
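Part of why link length changes the engineering trade-offs is the optical power budget, which a short Python sketch can illustrate. All of the figures below (launch power, fiber attenuation, connector losses, receiver sensitivity) are generic illustrative values, not the specifications of any particular transceiver.

```python
# Minimal optical link-budget check: does enough light reach the
# receiver? All figures are generic illustrative values, not the
# specifications of any real transceiver.

def link_margin_db(length_km,
                   tx_power_dbm=0.0,        # launch power
                   atten_db_per_km=0.35,    # single-mode loss near 1310 nm
                   connector_loss_db=1.0,   # total connector/splice loss
                   rx_sensitivity_dbm=-20.0):
    """Remaining margin (dB) after fiber and connector losses."""
    received = tx_power_dbm - atten_db_per_km * length_km - connector_loss_db
    return received - rx_sensitivity_dbm

# From half a meter inside a rack to a 40 km metro span.
for km in (0.0005, 2, 10, 40):
    print(f"{km:>7} km: margin = {link_margin_db(km):+.1f} dB")
```

Under these assumptions the margin stays positive even at 40 km, which is why single-mode links dominate the long end of the scale despite their cost.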
This 300-mm silicon-photonics wafer holds optical circuits of the kind used to process optical signals in systems that rely on fiber optics for data transfer. (Image courtesy of Wikimedia Commons user Ehsanshahoseini)
These components are already in use, but the greatest challenge facing widespread adoption is the state of laser technology. Not only must the cost of lasers decline, but physical barriers to mounting them on optical microchips must also be overcome.
For now, scientists are split as to whether on- or off-chip lasers will become the standard, but the development of an adhesive with the right thermal and electrical properties may settle the debate. Such an adhesive would enable mass production of optical ICs with on-chip lasers, paving the way for broad adoption of silicon-photonic chips, which are already inexpensive to produce.
How Fiber-Optic Technology Is Progressing
Researchers are making progress in taming silicon’s difficult thermal behavior in optical circuits by reducing the energy consumption of both passive and active optical components. They’re also continuing the search for adhesives suitable for on-chip lasers, and they’re making progress toward reducing the overall cost of producing and packaging microphotonics.
A third research trend, improving fiber-optic transmission protocols, is perhaps the most important. This work is different because it’s conceptual rather than physical, much more closely resembling the familiar work of computer scientists developing efficient algorithms. It does more than reduce the cost of innovation; it also stands to directly address a foremost challenge facing data centers: computing capacity. By improving transmission protocols in fiber-optic systems, researchers can attack a networking bottleneck by making intra-data-center communications as fast as possible.
What Optics Means to Data Center Operators
In 2006, then-Intel senior vice president Pat Gelsinger said, “Today, optics is a niche technology. Tomorrow, it’s the mainstream of every chip that we build.” Indeed, optics is quickly becoming an integral part of centralized computing. Fiber optics are probably already in wide use in and around your data center, and this trend is only set to grow in the coming years. Preparing a data center for fiber-optic integration may be the most scalable solution for many operators, and directly funding research and development in the field may also be a good option.
About the Author
Daniel Browning is the business-development coordinator at DO Supply Inc. In his spare time, he writes about automation, AI, technology and the IoT.