2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Author, Machines Who Think, The Universal Machine, Bounded Rationality, This Could Be Important; Co-author (with Edward Feigenbaum), The Fifth Generation
An Epochal Scientific, Technological, And Social—"Human"—Event

For more than fifty years, I've watched the ebb and flow of public opinion about artificial intelligence: it's impossible and can't be done; it's horrendous, and will destroy the human race; it's significant; it's negligible; it's a joke; it will never be strongly intelligent, only weakly so; it will bring on another Holocaust. These extremes have lately given way to an acknowledgment that AI is an epochal scientific, technological, and social—human—event. We've developed a new mind, to live side by side with ours. If we handle it wisely, it can bring immense benefits, from the planetary to the personal.

One of AI's futures is imagined as a wise and patient Jeeves to our mentally negligible Bertie Wooster selves: "Jeeves, you're a wonder." "Thank you, sir; we do our best." This is possible, certainly desirable. We can use the help. Chess offers a model: Grandmasters Garry Kasparov and Hans Berliner have both declared publicly that chess programs find moves that humans wouldn't, and are teaching human players new tricks. Although Deep Blue beat Kasparov when he was one of the strongest world chess champions ever, he and most observers believe that even better chess is played by teams of humans and machines combined. Is this a model of our future relationship with smart machines? Or is it only temporary, while the machines push closer to a blend of our kind of smarts plus theirs? We don't know. In speed, breadth, and depth, the newcomer is likely to exceed human intelligence. It already has in many ways.

No novel science or technology of such magnitude arrives without disadvantages, even perils. To recognize, measure, and meet them is a task of grand proportions. Contrary to the headlines, that task has already been taken up formally by experts in the field, those who best understand AI's potential and limits. In a project called AI100, based at Stanford, scientific experts, teamed with philosophers, ethicists, legal scholars and others trained to explore values beyond simple visceral reactions, will undertake this. No one expects easy or final answers, so the task will be long and continuous, funded for a century by one of AI's leading scientists, Eric Horvitz, who, with his wife Mary, conceived this unprecedented study.

Since we can't seem to stop building these machines, and since our literature tells us we've imagined and yearned for an extra-human intelligence for as long as we have records, the enterprise must be impelled by the deepest, most persistent of human drives. These beg for explanation. After all, this isn't exactly the joy of sex.

Any scientist will say it's the search to know. "It's foundational," an AI researcher told me recently. "It's us looking out at the world, and how we do it." He's right. But there's more.

Some say we do it because it's there, an Everest of the mind. Others, more mystical, say we're propelled by teleology: we're a mere step in the evolution of intelligence in the universe, attractive even in our imperfections, but hardly the last word.

Entrepreneurs will say that this is the future of making things—the dark factory, with unflagging, unsalaried, uncomplaining robot workers—though what currency post-employed humans will use to acquire those robot products, no matter how cheap, is a puzzle to be solved.

Here's my belief: We long to save and preserve ourselves as a species. For all the imaginary deities throughout history we've petitioned, which failed to save and protect us—from nature, from each other, from ourselves—we're finally ready to call on our own enhanced, augmented minds instead. It's a sign of social maturity that we take responsibility for ourselves. We are as gods, Stewart Brand famously said, and we may as well get good at it.

We're trying. We could fail.