Artificial intelligence (AI) has long been the stuff of sci-fi/horror hybrids like the Terminator franchise, as well as literary works that preceded Hollywood movies. Recently, a number of celebrities, Elon Musk and Stephen Hawking among them, have expressed concern over the possibility that AI could lead to the demise of the human race. But like the idea that automation will leave the working man high and dry (and unemployed), fears of AI even emerging—let alone causing any mischief—are at best unfounded. So don’t plan a “Save the Humans” protest at your local data center or silicon foundry just yet.
Matter in Motion
Computers and the machines they power can indeed replicate (often quite convincingly) human behaviors. Machines build things, do laborious tasks, perform calculations at incredible speeds and more. But replicating human behavior is a far cry from achieving self-determination, thought and/or self-awareness. And the claims of technology companies that they are on the verge of a “brain-like” chip or other system capable of AI often put the cart before the horse.
Here’s what I mean.
Scientists and engineers all too often ignore difficult philosophical questions, especially when those questions intersect with their work. “We’re (fill in the blank—physicists, engineers or what have you), not philosophers” is the implicit or explicit refrain. In quantum physics, very smart men and women regularly punt on difficult but important questions about the implications of their theories. I remember, for instance, the brilliant Richard Feynman suggesting in a video presentation that by using the theory of quantum electrodynamics, one could in principle predict the behavior of his audience. By implication, then, people (and thought, presumably) are nothing more than the product of particles interacting according to the rules of physics. Implicitly or explicitly, that’s the same theory that underlies efforts to create “thinking machines” (or AI).
The problems with this theory, however, are deep and insurmountable. If intelligence is nothing more than the product of particles obeying the laws of physics, then what a person thinks is likewise the product of those interactions. In other words, each person “thinks” a certain way because that’s the unavoidable result of certain particles moving and interacting in a specific manner. There is no rational reason to expect that such thinking—whether it’s Richard Feynman’s or Stephen Hawking’s or a janitor’s—has any relationship with reality. You might think your reasoning makes perfect sense, but that’s because you couldn’t possibly think any other way. Appeals to pragmatism (“Well, it seems to work regardless of your arguments”) are simply a way of having your cake (or theory) and eating it too. And appeals to quantum randomness don’t solve this problem; they just make an individual’s choices less predictable. (In other words, free will and random will are not synonymous.)
Thus, approaching intelligence as something that can be modeled as a bunch of particles (such as electrons) moving around runs into serious difficulties, particularly if we are trying to replicate human intelligence. But even ignoring that problem, there is the (related) issue of how the brain works—a matter that remains a tremendous mystery. The brain seems to be the focus of both the material and the immaterial, the simple and the complex. As a result, efforts to create a brain-like processor are tantamount to someone from a thousand years ago trying to create a smartphone: the fundamental insight is simply lacking. Imagine a medieval monk slapping together some glass, maybe a rectangular piece of dark wood (or something that looks and feels like plastic), a few chunks of metal and some glowing coals in hopes of having Siri pop up with advice on where to find a good restaurant. Such are efforts to replicate the brain in silicon without knowing the first thing about how a brain works.
Stupid + Stupid != Smart
Anyone who has seen a garbled text message from an iPhone or similar device probably recognizes the humorous contradiction inherent in the term smartphone. Despite decades of Moore’s Law driving ever more powerful processors, simple tasks that even a child could do remain beyond the capabilities of some very sophisticated gadgets. Many efforts to create AI essentially involve stringing together a number of, let’s be honest, stupid machines in hopes that eventually they will become truly “smart.” Granted, the combination of numerous simple objects or rules can lead to tremendous complexity, but complexity should not be mistaken for intelligence. In fact, many of the same purveyors of the above-mentioned mechanistic theory will strenuously object to any attempt to relate complexities in the universe with some kind of intelligence.
Phenomena like self-awareness, intelligence and the gamut of human emotions and experiences (e.g., good and evil) cannot be reduced to matter in motion without effectively rendering these phenomena illusory. And therein lies the crux of the problem with AI efforts: they are, following the story of Frankenstein, trying to create a living being without having discovered the spark of life. But no matter how big your pool table, a bunch of billiard balls (or waves, or something in between) rolling around do not an immaterial reality make.
Back to Reality
To reiterate, even if the quest for AI proves fruitless, humanity will still create machines that increasingly “fake” human behavior. In the hands of immoral organizations, like many governments, the results are often ugly, as the numerous wars around the world prove. Even ignoring malevolent uses of technology, however, many fear that the ostensibly good uses will lead to evil in the form of unemployment for the masses.
Technology has always displaced workers from certain industries: the advent of textile manufacturing no doubt hurt plenty of people who sewed and wove for a living, but it also improved the lot of humanity at large by making it easier and cheaper to clothe people. The automobile was a serious blow to the ecosystem of industries that surrounded horse-based transportation. And so on. But the resulting increase in the standard of living for most of the world makes arguing against these technological developments on the basis of altered economic choices rather difficult. To be sure, no technology is an unadulterated blessing, but with each new wave of innovation, society has managed to reorient the labor force displaced by machines to new tasks. Does anyone honestly believe that the near future will see a dearth of available work?
One of the chief limits on the use of machines, however, may be energy. Imagine machines doing all the work of humans; now, imagine where all the energy is going to come from—and how it can be extracted and used in an environmentally friendly manner. Energy is a far more acute problem than unemployment caused by machines.
Fear of real artificial intelligence is on par with fear of an alien invasion: it skips over the important questions (such as whether there’s anybody out there) in favor of the drama of science fiction. Pending persuasive answers to some fundamental philosophical and even technical questions, the idea that we will simply stumble on AI is tantamount to expecting a medieval monk to stumble on a smartphone. And speaking of smartphones, it’s unlikely that stringing together millions of “dumb” machines will give us Skynet. A little closer to reality is concern over machines creating unemployment, but machines have historically improved the standard of living for humanity—granted, often at the expense of certain industries. And life never seems to run short of work that needs doing. The question is simply which tasks will need to remain in the province of humans and which can be relegated to machines.
Leading article image courtesy of Patrick Hoesly