As HAL gasps in that scene, "he" became fully operational on Jan. 12, 1992, in Urbana, Ill. Just for fun, Stork decided to celebrate the occasion by inviting over some friends and colleagues. The Associated Press even showed up, and photographed the revelers as they stood around a HAL-styled birthday cake and sang "Daisy," HAL's dying words, in honor of the miraculous computer that was, in HAL's own words, "foolproof and incapable of error," a gifted lip-reader, a kick-ass chess player, an elegant conversationalist -- and a coldblooded killer of four men.
Dr. Stork's big idea has parallels to HAL, but he's loath to suggest that what he's working on in Menlo Park is a HAL per se -- it's not that simple, and besides, it makes him look a bit like a mad scientist. In truth, he's a mild-mannered, middle-aged man with a Ph.D. in physics, a 20-page CV, and a job as chief scientist at Ricoh Silicon Valley, where he mainly researches new technologies for the Japanese imaging giant. His office is cramped and modest, and his work seems modest as well, when you consider that most of his office neighbors on this stretch of Sand Hill Road are the world's leading venture capital firms.
For nearly three years, with Ricoh's blessing, Stork has been quietly working on and promoting a project called the Open Mind Initiative. Roughly defined, it's a way of advancing artificial intelligence (AI) research that starts from the premise that scientists alone can't build a useful model of human intelligence. Instead, the Open Mind Initiative plans to harness the collective brainpower of millions of Web-enabled regular people, who will contribute their everyday experiences, observations, knowledge, and old-fashioned common sense.
This marks something of a radical break in the field. Thus far, AI research has been performed mainly in labs with computers that are fed cold mathematical models and huge sets of data of a specific type. The most famous result of that approach is IBM's Deep Blue computer program, which beat chess grandmaster Garry Kasparov in 1997 and led many to think that we were now in the era of computers that could outstrip human intelligence. But Deep Blue was physically incapable of savoring the victory -- or even comprehending the concept of "victory" itself. And, as Stork points out, Deep Blue hasn't the slightest idea how to play checkers.
"The hardest problems in AI have been the ones that 2-year-olds do all the time, effortlessly," says Stork. "They can tell the difference between a cat and dog just by looking. No computer can do that." So what the Open Mind Initiative is focusing on, for starters, are the aspects of humanity that aren't so easy to codify into algorithms -- our capacity to understand speech, to recognize handwriting, and to comprehend very basic, common-sense ideas. The Initiative wants to tackle "simple, basic facts," says Stork. "For example: 'Bill was hungry, so he went to the store, saw an apple on the shelf, and bought it.' How do we know he bought the apple and not the shelf? I don't even have to tell you it was the apple. It's that kind of knowledge that undergirds so much of our language and knowledge about the world. Getting a machine to be intelligent requires that. There's really no way around it."
Investigating those ideas requires gathering much more data -- and more "humane" data -- than has previously been available. Watching the steady growth of Internet users over the last five years, Stork realized that the whole wired world could serve as the world's largest data set -- and who understands common-sense issues better than common people? Anybody online can voluntarily contribute information on (theoretically) anything. Users might be asked, for example, to write a short paragraph describing a simple picture of a chair in a room. For handwriting recognition, a typical Internet user might be shown a handwritten numeral and asked whether it was a four or a nine. The Initiative has already developed a version of the children's computer game Animals, in which a computer tries to guess what animal someone is thinking of through a battery of yes-or-no questions. Some of the "common sense" statements being tackled are "you can use lies to confuse," "ice can be formed into cubes," and "the first thing you do when you take a shower is get undressed" -- no-brainers for people, but stunning feats of cognition for a computer. The plan is to pool and process those responses to create a computer that can make the intuitive leaps only humans are capable of now. Users would get incentives to participate -- play the handwriting-recognition game for a while, for example, and you'd receive a certain number of frequent-flyer miles.
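For readers curious how a game like Animals can accumulate knowledge from players, here is a minimal sketch of the classic approach: a binary tree of yes/no questions that grows each time the computer guesses wrong. All names and the toy data are illustrative assumptions, not the Initiative's actual code.

```python
class Node:
    """A node in the guessing tree: either a yes/no question or an animal."""
    def __init__(self, text, yes=None, no=None):
        self.text = text  # a question (internal node) or an animal name (leaf)
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None


def guess(node, answer_fn):
    """Walk the tree, asking questions until a leaf (the guess) is reached."""
    while not node.is_leaf():
        node = node.yes if answer_fn(node.text) else node.no
    return node.text


def learn(leaf, actual_animal, distinguishing_question):
    """When a guess is wrong, split the leaf with a new question --
    this is how each player's answer adds knowledge to the shared tree."""
    old_animal = leaf.text
    leaf.text = distinguishing_question
    leaf.yes = Node(actual_animal)
    leaf.no = Node(old_animal)


# Tiny starting tree: one question, two animals.
tree = Node("Does it live in water?", yes=Node("fish"), no=Node("cat"))

# Simulate a player thinking of a dog: it doesn't live in water,
# so the tree wrongly guesses "cat" ...
answers = {"Does it live in water?": False}
wrong = guess(tree, lambda q: answers[q])  # -> "cat"

# ... and the player teaches it a question that separates dogs from cats.
learn(tree.no, "dog", "Does it bark?")
answers["Does it bark?"] = True
print(guess(tree, lambda q: answers.get(q, False)))  # prints "dog"
```

Each wrong guess permanently adds one question and one animal, so with enough players the tree soaks up exactly the kind of everyday knowledge Stork is after.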
"I estimate that 10 trillion mouse clicks have been wasted on computer card games like solitaire," says Stork. "That's a huge amount of responses from people. If we could get a game that was even slightly as intriguing as solitaire, it would help. But imagine feeling as well like you were contributing to the world's largest software endeavor ever."
Stork began talking about the Open Mind Initiative at conferences and universities, initially to some head-scratching. Most people assumed this was just another way of data-mining, shoving scads of raw information into a computer. "[But] when they understand it, their eyes light up. There's been great support and encouragement."
Thus far, three branches of research are contributing to the Open Mind Initiative: speech recognition at the Université de Sherbrooke in Quebec; handwriting recognition at the University of Nijmegen in the Netherlands; and, furthest along, "common-sense thinking" at the MIT Media Lab. Push Singh, coordinator of the MIT project, realized that his common-sense research could move forward much faster if it was aligned with the Open Mind Initiative. "I felt that it was too big a project for us alone," says Singh. "This problem of common sense, if we can cast it so that everybody could participate, it would improve greatly. This is knowledge that everyone has, so everyone can participate."
Why might we want all this? Both Singh and Stork argue that in the short term we'd get some simple improvements in parts of our daily lives -- for example, better Web searches, or computers that could understand our lifestyles and schedules better than any Palm Pilot can. "In the same way the Industrial Revolution reduced manual labor, there's a lot of menial thought labor going on that we'd love to automate," says Dr. Stork. But at the moment they are more focused on the nuts-and-bolts aspects of the Initiative, like finding the best way to remove "bad" data from the contributions, or making sure that as broad a variety of information as possible makes its way into the system. The "common sense" of an upper-middle-class male in the U.S., for example, is different from that of a lower-class woman in England, and the Initiative wants to acknowledge that.
Hovering over it all is the ethical issue of whether we'd want to create a HAL-like computer capable of thinking like a human being. If we can program a computer with the capability for human understanding, might we also wind up with a creation capable of reflecting the worst aspects of human nature? There is also the question of whether projects like the Open Mind Initiative might blaze a path toward our own ruin; earlier this year, Sun Microsystems Chief Scientist Bill Joy scared the bejeezus out of a lot of people with a Wired magazine article titled "Why the Future Doesn't Need Us," in which he argued that technology is advancing so quickly it threatens to render human beings themselves obsolete.
Stork stresses that any possibility of artificial intelligence replacing humans is deep in the future; indeed, it might be years until Stork finds out whether he's on the right path. "This is not five years, this is not 10 years, this is not 50 years," he says. "This is very far off." And, he argues, any scientist worth his pocket protector would be remiss to stop working because of such anxieties. "We don't kill children because they might grow up to be Jeffrey Dahmer," he says.
Stork is only now ready to present the Open Mind Initiative to the public. The Initiative has hired a PR firm and legal counsel, designed a logo and slogan ("Teaching computers the stuff we all know"), started to solicit donations, and launched a Web site at openmind.org. The main promotional push begins next year, to take advantage of some of the 2001 parallels. The movie will be rereleased in theaters, the San Jose Tech Museum of Innovation will hold a 2001 exhibit, and Stork will, once again, speak about HAL and the possibility of creating something that can think purely with wires and plastic and electricity.
"It'd be fun and philosophically interesting to deal with another intelligence at some level," Stork says. "One that, for instance, didn't evolve from evolution. Our cognition is so built up with conflict. Maybe there's another way to have intelligence."