The Innovators: How a Group of Inventors, Hackers, Geniuses, and Geeks Created the Digital Revolution - Isaacson Walter - Page 125
Rather than demonstrating that machines are getting close to artificial intelligence, Deep Blue and Watson actually indicated the contrary. “These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence,” argued Professor Tomaso Poggio, director of the Center for Brains, Minds, and Machines at MIT. “We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.”9
Douglas Hofstadter, a professor at Indiana University, combined the arts and sciences in his unexpected 1979 best seller, Gödel, Escher, Bach. He believed that the only way to achieve meaningful artificial intelligence was to understand how human imagination worked. His approach was pretty much abandoned in the 1990s, when researchers found it more cost-effective to tackle complex tasks by throwing massive processing power at huge amounts of data, the way Deep Blue played chess.10
This approach produced a peculiarity: computers can do some of the toughest tasks in the world (assessing billions of possible chess positions, finding correlations in hundreds of Wikipedia-size information repositories), but they cannot perform some of the tasks that seem most simple to us mere humans. Ask Google a hard question like “What is the depth of the Red Sea?” and it will instantly respond, “7,254 feet,” something even your smartest friends don’t know. Ask it an easy one like “Can a crocodile play basketball?” and it will have no clue, even though a toddler could tell you, after a bit of giggling.11
At Applied Minds near Los Angeles, you can get an exciting look at how a robot is being programmed to maneuver, but it soon becomes apparent that it still has trouble navigating an unfamiliar room, picking up a crayon, and writing its name. A visit to Nuance Communications near Boston shows the wondrous advances in speech-recognition technologies that underpin Siri and other systems, but it’s also apparent to anyone using Siri that you still can’t have a truly meaningful conversation with a computer, except in a fantasy movie. At the Computer Science and Artificial Intelligence Laboratory of MIT, interesting work is being done on getting computers to perceive objects visually, but even though the machine can discern pictures of a girl with a cup, a boy at a water fountain, and a cat lapping up cream, it cannot do the simple abstract thinking required to figure out that they are all engaged in the same activity: drinking. A visit to the New York City police command system in Manhattan reveals how computers scan thousands of feeds from surveillance cameras as part of a Domain Awareness System, but the system still cannot reliably identify your mother’s face in a crowd.
All of these tasks have one thing in common: even a four-year-old can do them. “The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard,” according to Steven Pinker, the Harvard cognitive scientist.12 As the futurist Hans Moravec and others have noted, this paradox stems from the fact that the computational resources needed to recognize a visual or verbal pattern are huge.
Moravec’s paradox reinforces von Neumann’s observations from a half century ago about how the carbon-based chemistry of the human brain works differently from the silicon-based binary logic circuits of a computer. Wetware is different from hardware. The human brain not only combines analog and digital processes, it also is a distributed system, like the Internet, rather than a centralized one, like a computer. A computer’s central processing unit can execute instructions much faster than a brain’s neuron can fire. “Brains more than make up for this, however, because all the neurons and synapses are active simultaneously, whereas most current computers have only one or at most a few CPUs,” according to Stuart Russell and Peter Norvig, authors of the foremost textbook on artificial intelligence.13
So why not make a computer that mimics the processes of the human brain? “Eventually we’ll be able to sequence the human genome and replicate how nature did intelligence in a carbon-based system,” Bill Gates speculates. “It’s like reverse-engineering someone else’s product in order to solve a challenge.”14 That won’t be easy. It took scientists forty years to map the neurological activity of the one-millimeter-long roundworm, which has 302 neurons and 8,000 synapses.I The human brain has 86 billion neurons and up to 150 trillion synapses.15
At the end of 2013, the New York Times reported on “a development that is about to turn the digital world on its head” and “make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control.” The phrases were reminiscent of those used in its 1958 story on the Perceptron (“will be able to walk, talk, see, write, reproduce itself and be conscious of its existence”). Once again, the strategy was to replicate the way the human brain’s neural networks operate. As the Times explained, “the new computing approach is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information.”16 IBM and Qualcomm each disclosed plans to build “neuromorphic,” or brainlike, computer processors, and a European research consortium called the Human Brain Project announced that it had built a neuromorphic microchip that incorporated “fifty million plastic synapses and 200,000 biologically realistic neuron models on a single 8-inch silicon wafer.”17
Perhaps this latest round of reports does in fact mean that, in a few more decades, there will be machines that think like humans. “We are continually looking at the list of things machines cannot do—play chess, drive a car, translate language—and then checking them off the list when machines become capable of these things,” said Tim Berners-Lee. “Someday we will get to the end of the list.”18
These latest advances may even lead to the singularity, a term that von Neumann coined and the futurist Ray Kurzweil and the science fiction writer Vernor Vinge popularized, which is sometimes used to describe the moment when computers are not only smarter than humans but also can design themselves to be even supersmarter, and will thus no longer need us mortals. Vinge says this will occur by 2030.19
On the other hand, these latest stories might turn out to be like the similarly phrased ones from the 1950s, glimpses of a receding mirage. True artificial intelligence may take a few more generations or even a few more centuries. We can leave that debate to the futurists. Indeed, depending on your definition of consciousness, it may never happen. We can leave that debate to the philosophers and theologians. “Human ingenuity,” wrote Leonardo da Vinci, whose Vitruvian Man became the ultimate symbol of the intersection of art and science, “will never devise any inventions more beautiful, nor more simple, nor more to the purpose than Nature does.”
There is, however, yet another possibility, one that Ada Lovelace would like, which is based on the half century of computer development in the tradition of Vannevar Bush, J. C. R. Licklider, and Doug Engelbart.
HUMAN-COMPUTER SYMBIOSIS: “WATSON, COME HERE”
“The Analytical Engine has no pretensions whatever to originate anything,” Ada Lovelace declared. “It can do whatever we know how to order it to perform.” In her mind, machines would not replace humans but instead become their partners. What humans would bring to this relationship, she said, was originality and creativity.