The prospect of a world inhabited by robust AIs terrifies me. The prospect of a world without robust AIs also terrifies me. Decades of technological innovation have created a world system so complex and fast-moving that it is quickly becoming beyond human capacity to comprehend, much less manage. If we are to avoid civilizational catastrophe, we need more than clever new tools—we need allies and agents.
So-called "narrow" AI systems have been around for decades. At once ubiquitous and invisible, narrow AIs make art, run industrial systems, fly commercial jets, control rush hour traffic, tell us what to watch and buy, determine whether we get a job interview, and play matchmaker for the lovelorn. Add in the relentless advance of processing, sensor, and algorithmic technologies, and it is clear that today's narrow AIs are tracing a trajectory towards a world of robust AI. Long before artificial super-intelligences arrive, evolving AIs will be pressed into performing once-unthinkable tasks, from firing weapons to formulating policy.
Meanwhile, today's primitive AIs tell us much about future human-machine interaction. Narrow AIs may lack the intelligence of a grasshopper, but that hasn't stopped us from holding heartfelt conversations with them and asking how they feel. It is in our nature to infer sentience at the slightest hint that life might be present. Just as our ancestors once populated their world with elves, trolls and angels, we eagerly seek companions in cyberspace. This is one more impetus driving the creation of robust AIs—we want someone to talk to. The consequence could well be that the first non-human intelligence we encounter won't be little green men or wise dolphins, but creatures of our own invention.
We of course will attribute feelings and rights to AIs—and eventually they will demand it. In Descartes' time, animals were considered mere machines—a crying dog was no different from a gear whining for want of oil. Late last year, an Argentine court granted rights to an orangutan as a "non-human person." Long before robust AIs arrive, people will extend the same empathy to digital beings and give them legal standing.
The rapid advance of AIs is also changing our understanding of what constitutes intelligence. Our interactions with narrow AIs will cause us to realize that intelligence is a continuum, not a threshold. Earlier this decade, Japanese researchers demonstrated that slime mold could thread a maze to reach a tasty bit of food. Last year a scientist in Illinois demonstrated that under just the right conditions, a drop of oil could negotiate a maze in an astonishingly lifelike way to reach a bit of acidic gel. As AIs insinuate themselves ever deeper into our lives, we will recognize that modest digital entities, as well as most of the natural world, carry the spark of sentience. From there it is just a small step to speculating about what trees or rocks—or AIs—think.
In the end, the biggest question is not whether AI super-intelligences will eventually appear. Rather, the question is what place humans will hold in a world occupied by an exponentially growing population of autonomous machines. Bots on the Web already outnumber human users—the same will soon be true in the physical world as well.
Lord Dunsany once cautioned, "If we change too much, we may no longer fit into the scheme of things."