Will the Data Center of the Future Be a Learning Computer?

June 26, 2012

If you’re a fan of the Terminator franchise, the idea of a learning machine probably evokes images of Skynet, hunter-killers prowling for human prey, and Arnold Schwarzenegger saying something to the effect of “I’ll be back.” A learning or thinking machine is one of the holy grails of computer science, and it is the subject of numerous research projects. One such project involves a neural network constructed by Google’s X Lab, whose results are slated for presentation soon (for the technically minded, the paper, “Building High-Level Features Using Large Scale Unsupervised Learning,” is available online). The results of this project hint at the possibility of vast computer networks that can learn or think, but is this the likely future of data centers?

For Cat’s Sake

Maybe it’s human laziness, or maybe it’s a more noble quest of some form, but the idea of a thinking machine is fascinating to many (including sci-fi creators). And a headline like “Google scientists find evidence of machine learning” is certain to capture attention. The Google scientists constructed a neural network using 16,000 processors. According to the New York Times (“How Many Computers to Identify a Cat? 16,000”), “The neural network taught itself to recognize cats…It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.” The article quoted lead scientist Andrew Y. Ng of Stanford University as saying, “The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data.”
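
To put Ng’s description in concrete terms, below is a toy sketch of the “let the data speak” approach: a single-layer autoencoder that adjusts its weights to reconstruct whatever it is shown, learning features from unlabeled data. It is only an illustration; Google’s actual system was a nine-layer network trained across 16,000 cores, and none of the sizes or data here come from the paper.

```python
# Toy single-layer autoencoder: learns features with no labels at all.
# All sizes and data are illustrative placeholders, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 64, 16  # e.g., 8x8 image patches -> 16 features
W = rng.normal(0.0, 0.1, (n_hidden, n_inputs))  # tied encoder/decoder weights
learning_rate = 0.01

def train_step(x):
    """One gradient step on squared reconstruction error."""
    global W
    h = np.tanh(W @ x)                  # encode: compress input to features
    x_hat = W.T @ h                     # decode: rebuild input from features
    err = x_hat - x                     # how badly did we reconstruct?
    grad_h = (W @ err) * (1.0 - h**2)   # backpropagate through the encoder
    W -= learning_rate * (np.outer(grad_h, x) + np.outer(h, err))
    return float(np.mean(err**2))

# "Throw a ton of data at the algorithm": random patches stand in here
# for the millions of video frames used in the real experiment.
for _ in range(1000):
    loss = train_step(rng.normal(size=n_inputs))
```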

According to the paper published by the scientists, the algorithm was fed millions of unlabeled images, some containing cats and some not, from which the network extracted features characteristic of cats. The system then achieved roughly 75% detection accuracy when tested on a mix of images that did and did not contain cats. But the devil may be in the details (and extracting plain-English details from a scientific paper is about as simple as getting a computer to identify a cat in an image).
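
As a rough illustration of what that detection rate means, one could imagine testing along these lines: show the system a balanced set of images, read out its most cat-selective feature, and count how often the feature’s verdict matches the label. The feature_activation function here is purely hypothetical, a stand-in for probing the trained network; with a balanced test set, a useless feature would score about 50%.

```python
def detection_rate(images, labels, feature_activation, threshold=0.0):
    """Fraction of images where the feature's verdict matches the label.

    feature_activation is a hypothetical stand-in for reading out the
    most cat-selective neuron in a trained network.
    """
    hits = sum(
        (feature_activation(img) > threshold) == bool(label)
        for img, label in zip(images, labels)
    )
    return hits / len(images)
```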

Is This Really a Step Toward Machine Intelligence?

Whether a computer can ever really become “intelligent” is a matter that, to be discussed thoroughly and cogently, requires careful thought on a number of fronts, philosophical as well as technological. Computers can obviously do things that could easily give the appearance of intelligence: they can run complex simulations, pick specific information out of reams of junk on the Internet (with varying degrees of success), run game characters that respond to your actions as you play, and so on. But is this smarts, or just the appearance of smarts?

The key to knowing whether a machine can actually think is to develop a test that only a thinking machine could pass. But sticking to the realm of image recognition, what might such a test look like? Interestingly, a young child doesn’t need to see thousands of different cats to be able to identify a cat on sight. It may take only one or two cats and one or two corrections (“no, honey, that’s a dog”). The child doesn’t perform any algorithm (in the computer sense of the term) when a cat walks by, whereas a computer system performs a variety of mathematical comparisons and calculations, actions a computer is undoubtedly good at. One might understandably conclude, then, that the computer is merely “faking” intelligence.
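
To make that contrast concrete, here is a minimal sketch of the sort of arithmetic a statistical classifier performs where a child simply looks. Every name and number in it is hypothetical; the point is only that the machine’s “recognition” reduces to weighted sums and a threshold.

```python
# Hypothetical sketch: a linear classifier's "cat or not" decision is
# nothing but arithmetic on numbers extracted from an image.
import numpy as np

def looks_like_a_cat(features: np.ndarray, weights: np.ndarray, bias: float) -> bool:
    score = float(features @ weights + bias)    # weighted sum of image features
    probability = 1.0 / (1.0 + np.exp(-score))  # squash the score into [0, 1]
    return probability > 0.5                    # a comparison, not insight

rng = np.random.default_rng(1)
features = rng.random(64)      # stand-in for numbers extracted from an image
weights = rng.normal(size=64)  # a real system would learn these from data
print(looks_like_a_cat(features, weights, bias=0.0))
```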

Part of the problem in pursuing machine intelligence is a lack of true understanding of human intelligence. To be sure, claims in the area of neuroscience are a dime a dozen: we’re always just 10 years away from understanding X or Y or Z (not unlike how we are perpetually 10 or so years away from a cure for this or that cancer). And although progress is being made, one could easily (and rather convincingly) argue that a full understanding of the brain is impossible; after all, we think using a brain, so can we ever really appreciate it as an outside observer would? One might reasonably wonder whether we have even a basic understanding of how the brain works at all.

To be sure, these are complex questions—questions that no short article can address adequately. But they are worth raising. Obviously, computers can do amazing things, and data centers and the networks that connect them have employed large amounts of computer power to produce many important benefits to society (and some downsides). And as processors get faster, smaller and cheaper, leading to a proliferation of computing power, questions naturally arise as to the limits of that power.

A Level-Headed Assessment?

Computer intelligence seems to be just as far off now as it was 10 or 20 years ago. Computers can do more, faster, but the nature of what they do really hasn’t changed: they follow a set of instructions fed to them by programmers, and they do so to the letter (or, perhaps more accurately, to the bit). Indeed, the programs become more complex as computing power and memory capacity increase, but they have not fundamentally changed. That leads to what might be the most salient question: can more of the same truly generate something new?

Let’s be honest: computers are stupid machines (witness the blue screen of death). They do certain tasks and do them well: they convert one set of ones and zeros into another set of ones and zeros, and nothing more. Naturally, if you think that is basically what the human brain does, then you might conclude that a computer, given sufficient development, could eventually produce all the complex responses and characteristics of the brain. But phenomena like self-consciousness cannot be explained purely on the basis of ones and zeros, whether in the brain or in a computer. And, sure, materialist philosophers will try, but they invariably end up explaining self-consciousness away. Self-consciousness is not material, so at best a materialist account would conclude that humans simply act as though they are self-conscious. A philosopher might do a fairly convincing job of maintaining this position, but it in no way corresponds to, or explains, an individual’s own experience of self-consciousness.

Okay, so maybe we’ve gone off the deep end here. The brief discussion above doesn’t do justice to the topic, but it does hint at all the different lines of reasoning that must be pursued—not just to develop a thinking or learning computer, but to first figure out what such a machine would even look like. The answers to such questions are far from clear, but in addressing them carefully—even if incompletely—we can take several important steps toward understanding the role and capabilities of computers, as well as how we should assess claims of computer intelligence.

For my own part, I believe computers are great tools, but they’re still just tools. No matter how intricately you design a hammer, it’s still a hammer; and although it may “fake it” to some extent as another tool (say, a screwdriver), it will always be best suited to jobs that call for a hammer. Over the decades, computers have become faster, cheaper and better at what they do, but they still basically perform the same tasks. You can string a bunch of them together, but ultimately, they still just convert one set of ones and zeros to another. I feel quite safe in predicting that computers will remain stupid (but useful) machines, for whatever that’s worth.


About Jeff Clark

Jeff Clark is editor for the Data Center Journal. He holds a bachelor’s degree in physics from the University of Richmond, as well as master’s and doctorate degrees in electrical engineering from Virginia Tech. An author and aspiring renaissance man, his interests range from quantum mechanics and processor technology to drawing and philosophy.
