Elon Musk Can Sleep Easier
Elon Musk, CEO of SpaceX and Tesla Motors, was quoted yesterday comparing artificial intelligence (AI) to “summoning the demon.” “I think we should be very careful about artificial intelligence. If I would guess at what our biggest existential threat is, it’s probably that… With artificial intelligence we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and… he’s sure he can control the demon? Didn’t work out.” This is not a new sentiment for Musk, who called AI “more dangerous than nukes” earlier this summer.
Could AI truly be an “existential threat” – could computers, intended to help us, instead make us extinct? In theory, yes. Musk referred to HAL 9000, the sentient computer that murdered the crew in 2001: A Space Odyssey, as “a puppy dog” compared to what AI could produce. Colossus: The Forbin Project, the 1970 movie about two supercomputers that took over the world (and nuked a city when not obeyed), enslaving mankind for the “good” of mankind, seems more in line with his concerns.
If Musk has erred, it’s not because he has overestimated the power of consciousness. On the contrary, he sells it short, as the field of computer science has since its inception. If AI isn’t as scary as he imagines, it’s not because of what a sentient computer could do, but because the nightmare scenario requires a sentient computer in the first place.
Professor Alan Turing of the University of Manchester is often referred to as the “father of the modern computer,” without much exaggeration. He and his peers changed our world – but they believed that the field of computer science would progress in a very different way. Whether or not anyone envisioned a global information network, enabling you to read this article on a handheld wireless device, they certainly believed that by the end of the last century, computers themselves would “awaken” and begin contributing ideas on their own initiative. While the relevant field is usually called artificial intelligence, artificial consciousness is arguably more accurate; the intent was to produce a computer able to demonstrate creativity and innovation.
Turing needed an impartial way to determine whether a computer was actually thinking. He proposed, in his 1950 paper “Computing Machinery and Intelligence,” that if a human judge conversing by teletype were unable to determine after five minutes of questioning that the party at the other end was a computer rather than another human being, then the computer would have passed the test. Turing further proposed development of a program that would simulate the mind of a child, which would then be “subjected to an appropriate course of education” in order to produce an “adult” brain.
With all the phenomenal developments in the field of computer science, we are but marginally closer — if, indeed, we are closer at all — to developing a “child brain” than we were then. “Eugene Goostman,” recently declared to have passed the Turing Test during a competition at the University of Reading, was simply a chatbot programmed with evasive answers. It presented itself as a 13-year-old Ukrainian boy (who spoke English as a third language) not because it possessed the faculties of a young teenager, but to cover for its many errors and fool the assessors. Deceptive programming isn’t the intelligence Turing had in mind.
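To make concrete what that sort of deceptive programming amounts to, here is a minimal, hypothetical sketch of a rule-based chatbot of the kind described above. It is not Goostman’s actual code; the persona details (a thirteen-year-old from Ukraine) are taken from the description above, while the patterns and canned replies are invented purely for illustration. Nothing in it models understanding; it only matches strings and, when no rule fits, changes the subject.

```python
import random
import re

# Hypothetical sketch of a rule-based "chatbot": canned replies keyed to
# surface patterns in the input, plus evasions to paper over anything the
# rules don't cover. Nothing here understands language.

RULES = [
    (r"\bhow old\b",       ["I am thirteen years old.", "Thirteen. Why do you ask?"]),
    (r"\bwhere\b.*\blive",  ["I live in Odessa, it is a big city in Ukraine."]),
    (r"\bwhy\b",            ["That is a difficult question, don't you think?",
                             "Why do people always ask why?"]),
]

# Evasions used whenever no rule matches -- the "cover for its many errors".
EVASIONS = [
    "Sorry, my English is not so good. Can you say it in other words?",
    "I would rather talk about something else. Do you like music?",
    "Ha! You ask strange questions. What do you do for a living?",
]

def reply(user_input: str) -> str:
    text = user_input.lower()
    for pattern, answers in RULES:
        if re.search(pattern, text):
            return random.choice(answers)
    return random.choice(EVASIONS)

if __name__ == "__main__":
    for question in ["How old are you?", "What does the word 'justice' mean to you?"]:
        print("Judge:", question)
        print("Bot:  ", reply(question))
```

A judge who wanders outside the handful of anticipated patterns gets a deflection rather than an answer, which is precisely how such a program can seem plausibly human for five minutes while possessing no intelligence at all.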
But “Goostman” was also in no way unique. Since 1991, inventor Hugh Loebner has underwritten an annual Turing Test competition, the Loebner Prize, established in conjunction with the Cambridge Center for Behavioral Studies in Massachusetts. And every year, all of the contestants are programs intended to fool the judges, and nothing more; the creativity or passion comes not from the silicon, but only from the programmers behind them.
As it turns out, Turing was preceded by well over a millennium in settling on his standard of human consciousness. The Rabbis of the Talmud stated the following, in Sanhedrin 65b:
Rava made a man. He sent him before R. Zeira. [R. Zeira] spoke to it, but it did not answer. R. Zeira said, “Are you from the scholars? Return to your dust!”
What the teacher Rava created was a Golem, an artificial humanoid that certain righteous individuals were purportedly able to create via spiritual powers. Much like a robot, it could obey commands and perform tasks – but it could not engage in conversation. The Maharsha explains why Rava’s Golem was unable to properly answer R. Zeira:
Because [Rava] could not create the power of the soul, which is speech. Because [the Golem] did not have a neshamah [soul], which is the spirit that ascends above, [but] only the life spirit which is also in animals, which descends below, [R. Zeira] said to it, “return to your dust.”
What this Talmudic passage and commentary tell us, then, is that creating an artificial consciousness isn’t nearly as simple as Turing imagined it to be. The Maharsha essentially tells us that intelligent speech is a manifestation of the soul invested in human beings — not something that programmers can simply drum up with several pages of well-written code. When Turing wrote that “presumably the child brain is something like a notebook … rather little mechanism, and lots of blank sheets” — he was making an assumption that, today, seems positively foolish.
Yet despite the absence of any true progress toward artificial thought, many in the research community remain undeterred even today. Ray Kurzweil, now Director of Engineering at Google – and one of the great innovators and thinkers in computer science – predicts we’ll achieve this goal within 15 years, simply because technology progresses exponentially. An article in Princeton Alumni Weekly recently stated, regarding a prominent professor of psychology, that “if the brain is just a data-processing machine, then [Professor Michael] Graziano sees no reason we cannot create computers that are just as conscious as we are.”
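For what it’s worth, the arithmetic behind that kind of prediction is easy to reproduce. The sketch below assumes, purely for illustration, a capability that doubles every 18 months; neither that figure nor the calculation comes from Kurzweil or from the article.

```python
# Back-of-the-envelope extrapolation of the "technology progresses
# exponentially" argument. The 18-month doubling period is an assumption
# chosen for illustration, not a figure cited in the article.

doubling_period_years = 1.5
horizon_years = 15

growth_factor = 2 ** (horizon_years / doubling_period_years)  # 2^10 = 1024
print(f"Raw capability after {horizon_years} years: roughly {growth_factor:,.0f}x today's")
```

A thousandfold increase in speed and memory is plausible enough; the question is whether speed and memory are what is missing.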
That “if” – if the brain is just a data-processing machine – is, of course, simply a restatement of Turing’s invalid assumption. Today’s supercomputers already process information more rapidly than we do, have larger memory banks, and of course have essentially perfect recall. Computers can see well enough to drive vehicles, and can hear and transcribe speech. But they cannot find meaning in what they see, nor respond as humans do to what they hear.
On the contrary, the failure to produce a semblance of a thinking computer should be causing a lot of second thoughts about the nature of human consciousness itself. We have proven that the brain is not simply a data-processing machine. When our most dedicated thinkers are unable to produce human thought, or even make substantive progress after decades of effort, are we perhaps not fools to imagine it developed by accident?
Musk is referring to the excellent recent book Superintelligence by Nick Bostrom. I really enjoyed reading the book simply because the arguments are so well constructed. Bostrom deftly preempts every single counter-argument I could think of. I love books like that.
Bostrom is an Oxford philosophy professor with a truly amazing CV, and possibly one of the most careful and rigorous thinkers on the planet. The lists of people associated with his organization (the Future of Humanity Institute at Oxford) and with its sister organizations (the Centre for the Study of Existential Risk at Cambridge, the Future of Life Institute in the Boston area, and the Machine Intelligence Research Institute near UC Berkeley) include some of the biggest names in technology and science. It would seem extremely arrogant for those who haven’t even read the book to casually dismiss its ideas as naive.
On the other hand, what Musk and Bostrom are referring to is a true existential risk – the extinction of humanity. As religious Jews who believe in HKBH, Hashgacha, Moshiach, and Techiyas HaMeisim, we should not lose any sleep over this. So although Musk, as an atheist, should perhaps not sleep easier, we as Jews can pretty much ignore the whole debate as nothing more than a (perhaps interesting) intellectual exercise.
My computer’s only aspiration is to slow me down.
I think you might be interested in this interview with IEEE Fellow Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley. http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts#qaTopicOne
An interesting point to note: Turing actually addresses the “theological concerns” in his paper.
This is true… in a fashion that demonstrates that while Turing was a genius at mathematics and computing, he understood very little about theology.
“What this Talmudic passage and commentary tell us, then, is that creating an artificial consciousness isn’t nearly as simple as Turing imagined it to be. The Maharsha essentially tells us that intelligent speech is a manifestation of the soul invested in human beings.”
The debate may boil down to this statement. Others reject the notion of a soul:
The mind–body problem in philosophy examines the relationship between mind and matter, and in particular the relationship between consciousness and the brain.
The problem was famously addressed by René Descartes in the 17th century, resulting in Cartesian dualism, and by pre-Aristotelian philosophers, in Avicennian philosophy, and in earlier Asian traditions. A variety of approaches have been proposed. Most are either dualist or monist. Dualism maintains a rigid distinction between the realms of mind and matter. Monism maintains that there is only one unifying reality, substance or essence in terms of which everything can be explained…
Several philosophical perspectives have been developed which reject the mind–body dichotomy. The historical materialism of Karl Marx and subsequent writers, itself a form of physicalism, held that consciousness was engendered by the material contingencies of one’s environment.
KT