The widespread fear that AI will endanger humanity and take over the world is irrational. Here is why.
Conceptually, autonomous or artificial intelligence systems can develop in one of two ways: either as an extension of human thinking or as radically new thinking. Call the first "Humanoid Thinking" (or "Humanoid AI") and the second "Alien Thinking" (or "Alien AI").
Almost all AI today is Humanoid Thinking. We use AI to solve problems that are too difficult, time-consuming, or boring for our limited human brains to process: electrical grid balancing, recommendation engines, self-driving cars, face recognition, trading algorithms, and the like. These artificial agents work in narrow domains with clear goals that their human creators specify. Such AI aims to accomplish human objectives—often better, with fewer cognitive errors, fewer distractions, fewer outbursts of bad temper, and fewer processing limitations. In a couple of decades, AI agents might serve as virtual insurance sellers, doctors, psychotherapists, and maybe even virtual spouses and children.
We will achieve much of this, but such AI agents will be our slaves, with no self-concept of their own. They will happily perform the functions we set them up to perform. If screw-ups happen, they will be our screw-ups, due to software bugs or to overreliance on these agents (Daniel C. Dennett's point). Yes, Humanoid AIs might surprise us every once in a while with novel solutions to specific optimization problems. But in most cases novel solutions are the last thing we want from AI (creativity in the navigation of nuclear missiles, anyone?). In any case, Humanoid AI's solutions will always fit a narrow domain. They will be understandable, either because we understand what they achieve or because we understand their inner workings. Sometimes the code will become too large and tangled for any one person to understand because it has been continuously patched; in that case, we can turn it off and program a more elegant version. Humanoid AI will bring us closer to the age-old aspiration of having robots do most of the work while humans are free to be creative—or to be amused to death.
Alien Thinking is radically different. Alien Thinking could conceivably become a danger to Humanoid Thinking; it could take over the planet, outsmart us, outrun us, enslave us—and we might not even recognize the onslaught. What sort of thinking will Alien Thinking be? By definition, we can't tell. It will encompass functionality that we cannot remotely understand. Will it be conscious? Most likely, but it need not be. Will it experience emotion? Will it write bestselling novels? If so, bestselling to us or bestselling to it and its spawn? Will cognitive errors mar its thinking? Will it be social? Will it have a theory of mind? If so, will it make jokes, will it gossip, will it worry about its reputation, will it rally around a flag? Will it create its own version of AI (AI-AI)? We can't say.
All we can say is that humans cannot construct truly Alien Thinking. Whatever we create will reflect our goals and values, so it won't stray far from human thinking. You'd need real evolution, not just evolutionary algorithms, for self-aware Alien Thinking to arise. You'd need an evolutionary path radically different from the one that led to human intelligence and Humanoid AI.
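To make the distinction between an evolutionary algorithm and real evolution concrete, here is a minimal sketch of such an algorithm in Python. Every choice in it—the bit-string genome, the stand-in fitness function, the mutation rate, the population size—is an arbitrary assumption made purely for illustration.

```python
# A toy evolutionary algorithm: replicators (bit-string genomes), variation
# (random mutation), and selection (the fitter half survives). All the
# parameters below are arbitrary illustrative choices.
import random

GENOME_LEN = 32
POP_SIZE = 100
MUTATION_RATE = 0.01
GENERATIONS = 200

def fitness(genome):
    # Stand-in objective: count the 1-bits. Real evolution has no such
    # fixed, externally supplied goal.
    return sum(genome)

def mutate(genome):
    # Variation: each bit flips with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

# Replicators: a population of genomes.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Replication with variation: survivors copy themselves imperfectly.
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness:", max(fitness(g) for g in population))
```

All three ingredients are present, yet the loop merely optimizes toward a goal its programmer wrote down; nothing about it is open-ended, and that is the gap between evolutionary algorithms and real evolution.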
So, how do you get real evolution to kick in? Replicators, variation, and selection. Once these three components are in place, evolution arises inevitably. How likely is it that Alien Thinking will evolve? Here is a back-of-the-envelope calculation:
First, consider what it took to get from magnificently complex eukaryotic cells to human-level thinking. Achieving human thought required a large portion of the Earth's biomass (roughly 500 billion tons of eukaryotically bound carbon) over approximately two billion years. That's a lot of evolutionary work! True, human-level thinking might have emerged in half that time. With a lot of luck, it might even have taken only 10% of the time (that's 200 million years), but it is unlikely to have happened any faster. Remember, evolution needs more than massive amounts of time to generate complex behavior; it also needs a petri dish the size of Earth's surface to sustain that level of experimentation.
Assume that Alien Thinking will be silicon-based, as all current AI is. A eukaryotic cell is vastly more complex than, say, Intel's latest i7 CPU—in both hardware and software. Further assume that you could shrink such a CPU to the size of a eukaryote. Leave aside the quantum effects that would stop its transistors from working reliably. Leave aside the question of the energy source. To match that scale of experimentation, you would have to cover the globe with 10^30 microscopic CPUs and let them communicate and fight for two billion years for true thought to emerge.
Yes, processing speed is higher in CPUs than in biological cells, because electrons are easier to shuttle around than atoms. On the other hand, eukaryotes work massively in parallel, whereas an Intel i7 is only four-way parallel (four cores). Eventually, at least in order to dominate the world, these electrons would need to move atoms around to store their software and data in more and more physical places, and that necessity would slow their evolution dramatically. It's hard to say whether, overall, silicon evolution would be faster than biological evolution; we simply don't know enough about it. I see no reason why it would be more than two or three orders of magnitude faster (if faster at all)—which would bring the emergence of self-aware Alien AI down to roughly a million years.
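Restated as bare arithmetic, the timeline looks like this; the inputs are the same rough assumptions used above, nothing more.

```python
# Back-of-the-envelope timeline from the rough inputs above.
biological_years = 2e9                # eukaryotic cells -> human-level thinking
lucky_fraction = 0.10                 # "with a lot of luck": 10% of that time
speedup_low, speedup_high = 1e2, 1e3  # two to three orders of magnitude for silicon

print(f"lucky biological case: {biological_years * lucky_fraction:.0e} years")  # 2e+08
print(f"silicon, 100x faster:  {biological_years / speedup_low:.0e} years")     # 2e+07
print(f"silicon, 1000x faster: {biological_years / speedup_high:.0e} years")    # 2e+06
```

Even the most optimistic line is measured in millions of years, not decades.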
What if Humanoid AI becomes so smart that it could create Alien AI from the top down? That is where Orgel's Second Rule kicks in: "Evolution is smarter than you are." It's smarter than human thinking. It's even smarter than Humanoid Thinking. And it's much slower than you think.
Thus, the danger of AI is not inherent to AI itself; it lies in our overreliance on it. Artificial Thinking is not going to evolve to self-awareness in our lifetime. In fact, it's not going to happen within the next thousand years.
I might be wrong, of course. After all, this back-of-the-envelope calculation applies legacy human thinking to Alien AI—which, by definition, we won't understand. But that's all we can do at this stage.
Toward the end of the 1930s, Samuel Beckett wrote in a diary, "We feel with terrible resignation that reason is not a superhuman gift…that reason evolved into what it is, but that it also, however, could have evolved differently." Replace "reason" with "AI" and you have my argument.