The Now and Later of Processors

November 10, 2011

They're the workhorses of computing, including in the data center: processors. The term covers a wide range of silicon chips designed for different purposes, but broadly, they are collections of transistors (and a few other components) that perform operations on electronic data. Since the advent of the first integrated circuits in the late 1950s, processors have grown in complexity and ubiquity to the point that they can be found in machines and gadgets well beyond what we might consider traditional computers (PCs, laptops, and servers, for instance). Today, these devices pack remarkable capabilities into tiny packages, but can the steady innovation of the past half century be maintained, and what will the future look like?

Moore’s Law

A virtual slogan of the semiconductor industry, Moore’s Law (a 1965 prediction by Intel cofounder Gordon Moore) states that the number of transistors in a microchip will double every two years. (The precise details and best way to phrase Moore’s Law may be up for debate, but this simple statement captures the general spirit of it.) The semiconductor manufacturing industry has largely kept pace with or exceeded Moore’s Law, taking technology from simple integrated circuits with a few transistors to complex processors with billions of transistors in just a few square millimeters.
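
To put the doubling in perspective, here's a quick back-of-the-envelope sketch in Python. The 1971 starting point of roughly 2,300 transistors (about the size of Intel's first microprocessor) and the strict two-year cadence are illustrative assumptions, not exact history:

```python
# Rough Moore's Law projection: count_0 * 2 ** (years / doubling_period).
# The ~2,300-transistor starting point and the strict two-year cadence are
# illustrative assumptions, not precise figures from the industry's history.
def projected_transistors(start_count, start_year, target_year, doubling_years=2.0):
    """Project a transistor count forward under an idealized Moore's Law."""
    elapsed = target_year - start_year
    return start_count * 2 ** (elapsed / doubling_years)

print(f"{projected_transistors(2_300, 1971, 2011):,.0f} transistors by 2011")
# Prints roughly 2.4 billion -- the right order of magnitude for the
# billion-transistor chips mentioned above.
```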

Predictions of the upcoming failure of Moore’s Law are perennial. And although these prognostications have all failed thus far, one day they will be correct—particularly for silicon processor technology. At some point, the size of transistors will reach a minimum (on the order of the size of one or several atoms), and the only option will be to scale outward. But even this approach runs into a fundamental problem: the speed of light.

Since electronic signals cannot (at least if you believe Einstein) exceed the speed of light in a vacuum—about 300,000,000 meters per second, or roughly 186,000 miles per second—processor sprawl limits the device’s capabilities. Hence, the need to pack more components into smaller areas is a primary consideration in designing faster processors.
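
To make that limit concrete, here is a minimal sketch; the 3 GHz clock is an assumed, typical figure for a modern processor, not a number from the article:

```python
# How far can a signal travel in one clock cycle at the speed of light?
C = 299_792_458          # speed of light in a vacuum, meters per second
clock_hz = 3e9           # assumed 3 GHz clock, typical of recent server CPUs
cycle_time_s = 1 / clock_hz
distance_m = C * cycle_time_s

print(f"One cycle at 3 GHz lasts {cycle_time_s * 1e12:.0f} ps")
print(f"Light covers only {distance_m * 100:.1f} cm in that time")
# Roughly 10 cm -- and real on-chip signals travel slower than light in a
# vacuum, so a sprawling processor quickly runs out of time within one cycle.
```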

Regardless of the technology, whether traditional silicon or something else (like quantum computing), computing in general may have a fundamental speed limit of its own. According to Popular Science (“Scientists Find Fundamental Maximum Limit for Processor Speeds”), “scientists say that processor speeds will absolutely max out at a certain point, regardless of how hardware or software are implemented.” Citing work by Boston University researchers, the article suggests that computer speeds will reach their maximum in about 75 years.

So, will processor research reach a dead end at that time? Who knows. The idea of an infinitely fast processor, however—even if it takes a very long time to achieve—has some disturbing philosophical implications. But, setting aside potential revolutionary technologies like quantum computing and photonic computing, what about the near future of semiconductor processors?

Semiconductor Process Technology

As mentioned above, a necessary part of making processors faster is packing more transistors into a smaller area, and this in turn means making transistors smaller. Current semiconductor process technologies have feature widths in the 20nm range—that’s 0.02 microns, or 0.00002 millimeters. The leader in this arena, Intel, has already introduced 22nm processors (“Ivy Bridge”) and is building a manufacturing plant for 14nm chips in Chandler, Arizona, according to EE Times (“Update: Intel to build fab for 14-nm chips”).

But don’t expect process technologies to continue scaling down at this rate forever. A single hydrogen atom—the smallest of the elements—is only about a tenth of a nanometer in diameter. Larger atoms (such as silicon) are, well, larger, so building transistors at the atomic level already has a fundamental limit. Furthermore, as transistors are built using thinner layers (approaching a single atom), the consequences of defects become more substantial. Thus, the process of fabricating chips becomes more difficult and, hence, more expensive.
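
A rough calculation, treating about 0.2nm (roughly the diameter of a silicon atom) as a hard floor purely for the sake of illustration, shows how few halvings of feature size remain:

```python
import math

# How many more halvings of feature size fit between today's nodes and atoms?
current_nm = 22.0        # the 22nm generation discussed above
atomic_floor_nm = 0.2    # rough diameter of a silicon atom -- an assumed floor

halvings_left = math.log2(current_nm / atomic_floor_nm)
print(f"About {halvings_left:.1f} halvings remain before features are one atom wide")
# Roughly 6-7 halvings -- at one shrink every couple of years, only a decade
# or so of traditional scaling before the atomic limit bites.
```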

In another innovative step, however, Intel also introduced its “FinFET” transistors in the 22nm generation. Instead of relying only on planar structures, FinFETs build upward, making the transistors three-dimensional (part of the transistor structure looks like a fin, hence the name). This approach allows the company to increase the speed of its transistors and to pack more of them into a smaller space and within a lower power budget.

But are FinFETs and similar three-dimensional approaches to semiconductor manufacturing in line with the spirit of Moore’s Law, or are they a cheat that artificially keeps Moore’s Law alive? Since Moore’s Law is not a strictly scientific law (from a physics standpoint), this distinction may not be worth quibbling over. Clearly, however, FinFETs are a technology aimed at maintaining technological momentum in the face of an approaching barrier to further innovation.

Energy Efficiency and Future Technologies

Power consumption is a growing concern for data centers in particular, but also for other industries and for consumers. Processor technology is making strides toward greater performance per watt: smaller process technology generations generally provide more processing performance for less power than their predecessors, so some increase in efficiency follows naturally from more densely packed integrated circuits. With process technologies approaching their minimum size limit, however, this avenue for greater efficiency is also reaching its end.
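
For a sense of why shrinking has historically helped efficiency, here is an idealized sketch of classic (Dennard) scaling. Real modern process nodes no longer follow it, since supply voltages have largely stopped scaling, so treat the numbers as a textbook best case rather than a prediction:

```python
# Idealized classic (Dennard) scaling per process generation: every linear
# dimension and the supply voltage shrink by a factor s (~0.7). A textbook
# sketch only -- recent nodes fall well short of this ideal.
def dennard_generation(s=0.7):
    capacitance = s          # gate capacitance scales with dimensions
    voltage = s              # supply voltage scales with dimensions (idealized)
    frequency = 1 / s        # shorter gates switch faster
    power_per_transistor = capacitance * voltage**2 * frequency   # ~ s^2
    perf_per_watt_gain = frequency / power_per_transistor         # ~ 1 / s^3
    return power_per_transistor, perf_per_watt_gain

power, efficiency = dennard_generation()
print(f"Power per transistor: ~{power:.2f}x, performance per watt: ~{efficiency:.1f}x")
# Roughly half the power per transistor and nearly 3x the performance per
# watt per generation -- in the idealized model.
```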

The question is whether a new technology will come along to create more room for innovation—both in performance and in efficiency. One such possibility under intense study is quantum computing. Whether a quantum computer can be realized in a form that can be practically deployed in consumer and business products remains to be seen, but many scientists and engineers have high hopes for this avenue of research.

Regardless of the processing technology, however, the finite speed of light still creates problems for computer implementations. Imagine, for example, a processor with no peripherals (like a data storage device, input and output interfaces, and so on): what good is it? Processors must be supplied with data, and the output data must be stored or transmitted to some other device. This transfer of data is limited to the speed of light. Thus, for instance, if the processor must access a storage device like a hard drive, it must wait for the data to propagate over the connection. Even an infinitely fast processor would be hobbled by this limitation. And again, only so much equipment can be crammed into a given volume, meaning that the speed of an actual computer system—not just a processor—is limited. Unless, that is, you can find a way to transmit data faster than the speed of light!
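
As a hedged illustration of the problem, consider a round trip to a device one meter away; the one-meter connection and the 3 GHz clock are assumptions chosen only to show the scale:

```python
# Even at light speed, a round trip to a device one meter away costs the
# processor many clock cycles. The 1 m distance and 3 GHz clock are assumed
# figures for illustration; real interconnects are slower still.
C = 299_792_458              # speed of light in a vacuum, m/s
distance_m = 1.0             # assumed connection length to a storage device
clock_hz = 3e9               # assumed processor clock

round_trip_s = 2 * distance_m / C
cycles_lost = round_trip_s * clock_hz
print(f"Round trip: {round_trip_s * 1e9:.1f} ns, or about {cycles_lost:.0f} cycles")
# Roughly 6.7 ns -- about 20 cycles during which even an infinitely fast
# processor would simply be waiting.
```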

Conclusions

The physics of materials and light is not the only limit on processor innovation. Developing a new process technology requires a large amount of capital, and building the facilities to manufacture actual chips is expensive as well. (Intel plans to invest $5 billion in its new 14nm manufacturing plant.) Economic factors may therefore place a limit on innovation, even if physical limits haven’t been reached.

Whatever the limits, the next 5 to 10 years may see the end of Moore’s Law unless some new technology beyond standard planar (or even the new three-dimensional) semiconductor manufacturing is realized. At least that’s what the approaching fundamental limits (such as the size of atoms) would indicate.

For data centers, new process technologies mean more processing power in smaller and more power-efficient devices. But the ever-growing demand for IT resources means this pace of innovation may not be enough. Hence, some companies are turning to lower-power, lower-performance processors at the heart of their servers in an attempt to increase overall efficiency.

The past half-century or so (or century, if you want to include the theoretical foundations) has seen unbelievable progress in computer technology. From wimpy computers that filled entire rooms to tiny handheld devices that put to shame desktop PCs of just a few years ago, innovation has been a train with remarkable momentum. But will that train begin to slow as the limits of traditional semiconductor technology are reached, or will it simply hop the tracks to another technology and continue moving forward? Only time will tell.

About Jeff Clark

Jeff Clark is editor for the Data Center Journal. He holds a bachelor’s degree in physics from the University of Richmond, as well as master’s and doctorate degrees in electrical engineering from Virginia Tech. An author and aspiring renaissance man, his interests range from quantum mechanics and processor technology to drawing and philosophy.
