2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Thomas G. Dietterich
Distinguished Professor of Computer Science, Director of Intelligent Systems, Oregon State University
How to Create an Intelligence Explosion—and How to Prevent One

Much of the rhetoric about the existential risks of Artificial Intelligence (and Superintelligence more generally) employs the metaphor of the "intelligence explosion." By analogy with nuclear chain reactions, this rhetoric suggests that AI researchers are somehow working with a kind of Smartonium, and that if enough of this stuff is concentrated in one place, we will have a runaway intelligence explosion—an AI chain reaction—with unpredictable results. This is not an accurate depiction of the risks of AI. The mere interconnection of AI algorithms will not spontaneously take over the universe. Instead, I argue that an intelligence explosion will not happen by accident. Creating one will require the construction of a very specific kind of AI system: one that is able to discover simplifying structures in the world, design computing devices that exploit those structures, and then grant autonomy and resources to those new devices (recursively).

Creating an intelligence explosion requires the recursive execution of four steps. First, a system must have the ability to conduct experiments on the world. Otherwise, it cannot grow its knowledge beyond existing human knowledge. (Most recent advances in AI have been obtained by applying machine learning to reproduce human knowledge, not to extend it.) In most philosophical discussions of AI, there is a natural tendency to focus on pure reasoning, as if this were sufficient for expanding knowledge. It is possible in some special cases (e.g., mathematics and some parts of physics) to advance knowledge through pure reasoning. But across the sciences, knowledge advances almost exclusively through the collection of empirical evidence for and against hypotheses. This is why we built the Large Hadron Collider, and it is why all engineering efforts involve building and testing prototypes. This step is clearly feasible, and indeed, some "automated scientists" already exist.

Second, these experiments must discover new simplifying structures that can be exploited to side-step the computational intractability of reasoning. Virtually all interesting inference problems (such as finding optimal strategies in games, optimizing against sets of complex constraints, proving mathematical theorems, and inferring the structures of molecules) are NP-hard. Under our current understanding of computational complexity, this means that the cost of solving a problem instance grows exponentially with the size of that instance. Progress in algorithm design generally requires identifying some simplifying structure that can be exploited to defeat this exponential growth. An intelligence explosion will not occur unless such structures can be repeatedly discovered (or unless our current understanding of computational complexity is incorrect).
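To make the idea of a "simplifying structure" concrete, here is a minimal sketch in Python (a textbook illustration of the general point, not an example drawn from any particular AI system): finding a maximum independent set is NP-hard on general graphs, but when the input happens to be a tree, a simple dynamic program exploits the absence of cycles and computes the answer in linear time.

    # Illustrative sketch: maximum independent set is NP-hard on general
    # graphs, but on a tree the acyclic structure admits a linear-time
    # dynamic program over the vertices.

    def max_independent_set_tree(adj, root=0):
        """Size of a maximum independent set of a tree given as an adjacency list."""
        include, exclude = {}, {}   # best sizes with the subtree root in / out of the set
        visited = set()

        def dfs(v):
            visited.add(v)
            include[v], exclude[v] = 1, 0
            for u in adj[v]:
                if u not in visited:
                    dfs(u)
                    include[v] += exclude[u]                   # a chosen node excludes its children
                    exclude[v] += max(include[u], exclude[u])  # an excluded node frees its children

        dfs(root)
        return max(include[root], exclude[root])

    # A path 0-1-2-3 is a tree; its maximum independent set has size 2.
    tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(max_independent_set_tree(tree))  # prints 2

Brute-force search over all 2^n subsets of vertices, by contrast, becomes infeasible long before the graph reaches a hundred vertices; the tree structure is what defeats the exponential.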

Third, a system must be able to design and implement new computing mechanisms and new algorithms. These mechanisms and algorithms will exploit the scientific discoveries produced in the second step. Indeed, one could argue that this is essentially the same as Steps 1 and 2, but focused on computation. Autonomous design and implementation of computing hardware is clearly feasible with silicon-based technologies, and new technologies for synthetic biology, combinatorial chemistry, and 3D printing will make this even more feasible in the near future. Automated algorithm design has been demonstrated multiple times, so it is also feasible.

Fourth, a system must be able to grant autonomy and resources to these new computing mechanisms so that they can recursively perform experiments, discover new structures, develop new computing methods, and produce even more powerful "offspring." I know of no system that has done this.

The first three steps pose no danger of an intelligence chain reaction. It is the fourth step—reproduction with autonomy—that is dangerous. Of course, virtually all "offspring" in Step 4 will fail, just as virtually all new devices and new software do not work the first time. But with sufficient iteration or, equivalently, sufficient reproduction with variation, we cannot rule out the possibility of an intelligence explosion.

How can we prevent an intelligence explosion? We might hope that Step 2 fails—that we have already found all structural shortcuts to efficient algorithms, or that the remaining shortcuts will not have a big impact. But few electrical engineers or computer scientists would claim that their research has reached its limits.

Step 3 provides a possible control point. Virtually no existing AI systems are applied to designing new computational devices and algorithms. Instead, they are applied to problems such as logistics, planning, robot control, medical diagnosis, face recognition, and so on. These applications pose no chain-reaction risk. We might consider carefully regulating Step 3 research. Similar regulations have been proposed for synthetic biology, but no regulations have been adopted, and they would be difficult to enforce.

I think we must focus on Step 4. We must limit the resources that an automated design and implementation system can give to the devices that it designs. Some have argued that this is hard, because a "devious" system could persuade people to give it more resources. But while such scenarios make for great science fiction, in practice it is easy to limit the resources that a new system is permitted to use. Engineers do this every day when they test new devices and new algorithms.
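As a minimal sketch of that everyday practice (assuming a POSIX system; the script name candidate_system.py is a hypothetical placeholder), the following Python fragment runs an experimental program under hard caps on CPU time, memory, and wall-clock time, so it can use only the resources we explicitly grant it.

    # Hedged sketch of routine resource sandboxing: the child process is
    # started with hard operating-system limits on CPU seconds and address
    # space, plus a wall-clock timeout enforced by the parent.
    import resource
    import subprocess

    def set_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (10, 10))        # at most 10 s of CPU time
        resource.setrlimit(resource.RLIMIT_AS, (2**30, 2**30))   # at most 1 GB of address space

    proc = subprocess.run(
        ["python3", "candidate_system.py"],   # hypothetical program under test
        preexec_fn=set_limits,                # apply the caps in the child before it runs
        capture_output=True,
        timeout=60,                           # wall-clock cap enforced by the parent
    )
    print(proc.returncode)

If the child exceeds its CPU cap, the kernel terminates it; allocations beyond the memory cap simply fail. No amount of "persuasion" by the program changes those limits, because they are imposed from outside the process.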

Steps 1, 2, and 3 have the potential to greatly advance scientific knowledge and computational reasoning capability with tremendous benefits for humanity. But it is essential that we humans understand this knowledge and these capabilities before we devote large amounts of resources to their use. We must not grant autonomy to systems that we do not understand and that we cannot control.