AI & THE FUTURE OF CIVILIZATION
Some tough questions. One of them is about the future of the human condition. That's a big question. I've spent some part of my life figuring out how to make machines automate stuff. It's pretty obvious that we can automate many of the things that we humans have been proud of for a long time. What's the future of the human condition in that situation?
More particularly, I see technology as taking human goals and making them able to be automatically executed by machines. The human goals that we've had in the past have been things like moving objects from here to there and using a forklift rather than our own hands. Now, the things that we can do automatically are more intellectual kinds of things that have traditionally been the professions' work, so to speak. These are things that we are going to be able to do by machine. The machine is able to execute things, but something or someone has to define what its goals should be and what it's trying to execute.
People talk about the future of intelligent machines, and about whether intelligent machines are going to take over and decide what to do for themselves. But while one can figure out, given a goal, how to execute it in a way that can meaningfully be automated, the actual inventing of the goal is not something that, in some sense, has a path to automation.
How do we figure out goals for ourselves? How are goals defined? They tend to be defined for a given human by their own personal history, their cultural environment, the history of our civilization. Goals are something that are uniquely human. It almost doesn't make sense to ask, what's the goal of our machine? We might have given it a goal when we built the machine.
The thing that makes this more poignant for me is that I've spent a lot of time studying basic science about computation, and I've realized something from that. It's a little bit of a longer story, but basically, if we think about intelligence and things that might have goals, things that might have purposes, what kinds of things can have intelligence or purpose? Right now, we know one great example of things with intelligence and purpose, and that's us, and our brains, and our own human intelligence. What else is like that? I had at first assumed that the answer was: there are the systems of nature, and they do what they do, but human intelligence is far beyond anything that exists naturally in the world. It's the result of all of this elaborate process of evolution, a thing that stands apart from the rest of what exists in the universe. What I realized, as a result of a whole bunch of science that I did, was that this is not the case.
My children always give me a hard time for this particular quote: "The weather has a mind of its own." Well, that's an animistic type of statement, and it seems like it has no place in modern scientific thinking. But that statement is not as silly as it first seems. What that's representing is, if we think about a brain—what is a brain doing? A brain is taking certain input, it's computing things, it's causing certain actions to happen; it's effectively generating a certain output.
We can think about all sorts of systems as effectively doing computations, whether it's a brain or a cloud responding to the different thermal environment it finds itself in. We can ask ourselves, are our brains doing vastly more sophisticated computations than what happens in those fluids in the atmosphere?
I had first assumed that the answer was yes: we are carefully evolved, and we're doing much more sophisticated stuff than any of these systems in nature. But it turns out that's not the case. It turns out that there's a very broad equivalence between the kinds of computations that different kinds of systems do. That realization makes the question of the human condition a little bit more poignant, because we might say, "There's one thing we've got: we're special, we've got all this intelligence and all these things which nothing else can have." But that's not true. There are all these different systems of nature that are pretty much equivalent in terms of their computational, or for that matter intellectual, capabilities.
What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That's a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn't have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.
The thing we have to think about as we think about the future of these things is goals. Goals are what humans contribute; they're what our civilization contributes. The execution of those goals is what we can increasingly automate. We've been automating it for thousands of years, and we will succeed in having very good automation of those goals. I've spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.
There are many questions that come from this. For example, if we've got these great AIs and they're able to execute goals, how do we tell them what to do? One answer is: just talk to them, like you talk to WolframAlpha or Siri or whatever. We're understanding the natural language of human utterances, and we're doing something based on those utterances. It works pretty well when you're holding up your phone and asking one question. Natural language is a pretty successful way to communicate there. When you want to say something longer and more complicated, it doesn't work very well.
I just had this experience. I've been interested in teaching programming to the world and to kids. I was just writing this book, and I was writing exercises, which is a very bizarre thing for me to do because I've never done exercises myself in any textbook. I was writing exercises, and these exercises typically were in the form of, write a piece of code to do X. At the beginning of the book, when the exercises are pretty simple, it's easy to write the English, to say, write a piece of code to make a list of numbers from one to ten, or something. But by the end of the book, it was getting bizarrely frustrating. I was thinking, "This is the exercise I want to write, I know what the code is supposed to be, but how on Earth am I going to write the piece of English text that represents that code?" What I increasingly realized is some of this text was starting to sound like the language that you would find in a patent or something like this—a very ornate, precise, stylized English.
The realization from that is that the thing I've spent a large part of my life doing, which is building computer languages, is not such a bad idea. In a computer language, you do get to represent more sophisticated concepts in a clean way, which can be progressively built up in a way that isn't possible in natural language.
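To make that contrast concrete, here is a minimal sketch in Wolfram Language; the second exercise is an invented example of mine, not one from the book:

    (* "write a piece of code to make a list of numbers from one to ten" *)
    Range[10]

    (* now try writing the English for this one: "select the numbers up to 100
       whose digits add up to a prime" -- the code is shorter than the prose *)
    Select[Range[100], PrimeQ[Total[IntegerDigits[#]]] &]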
One of the things that I'm interested in is how we communicate goals to AIs. How do we talk to the AIs? My basic conclusion is that it's a mixture. Human natural language is good up to a point, and has evolved to describe what we typically encounter in the world. Things that exist from nature, things that we've chosen to build in the world—these are things which human natural language has evolved to describe. But there's a lot that exists out there in the world for which human natural language doesn't have descriptions yet. Even though our AI systems might effectively find those descriptions, we don't have ways to say those ourselves.
When it comes to describing more sophisticated things, the kinds of things that people build big programs to do, we don't have a good way to describe those things with human natural language. But we can build languages that do describe that.
One question I've been interested in is, what does the world look like when most people can write code? We had a transition, maybe 500 years ago, from a time when only scribes and a small part of the population were literate and could write natural language. Today, a small fraction of the population can write code, and most of the code they write is meant for computers only. You don't come to understand things by reading code.
But there will come a time, partly as a result of things I've tried to do, when code is at a high enough level that it is a minimal description of what you're trying to do. For example, contracts are written in English, and you try to make the English as precise as possible. There will be a time when most contracts are written in code, where there's a precise representation. It might be for cases where a computer asks, "Can I use this API to do this?" There's a service-level agreement going on there that isn't a human contract; it's something written in a piece of code that is understandable to humans but also executable by the machines. The question, can I do this according to this contract? becomes an automatic question. That's one tiny example of how the world starts to change when most people can write and read code.
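As a toy illustration of what a machine-executable contract term might look like, here is a hypothetical Wolfram Language sketch; the function name, tiers, and limits are all invented for illustration:

    (* hypothetical service-level term: is this API call permitted? *)
    allowedQ[requestsToday_Integer, tier_String] :=
      requestsToday < Switch[tier, "free", 1000, "pro", 100000, _, 0]

    allowedQ[999, "free"]    (* True: the call is within the contract *)
    allowedQ[1001, "free"]   (* False: the contract answers automatically *)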
The interesting language point is that today we have computer languages, which, for the most part, are intended for computers only. They're not intended for humans to read and understand. They're intended to tell computers in detail what to do. Then we have natural language, which is intended for human-to-human communication.
I've been trying to build this knowledge-based language: a language intended for communication between humans and machines, where humans can read it and machines can understand it too. We're incorporating a lot of the existing knowledge of the world into the language, in the same way that in human natural language we are constantly incorporating knowledge of the world into the language, because it helps us in communicating things. One branch that I'm interested in right now is what the world looks like when most people can read and write code.
What's the future of the humans in a world where, once we can describe what we want to do, things can get done automatically? What do the humans do? One of my little hobby projects is trying to understand the evolution of human purposes over time. Today, we've got all kinds of purposes. We sit and have a big discussion about purposes, which presumably has some purpose. We do all the different things that we do in the world.
If you look back 1000 years, people's purposes were different: How do I get my food? How do I keep myself safe? In the modern Western world for the most part, you don't spend a large fraction of your life thinking about those purposes. You've evolved to different kinds of purposes.
From the point of view of a thousand years ago, some of the purposes that people have today, some of the things people do today, would seem utterly bizarre, like walking on a treadmill. Imagine saying, 1000 years ago, that somebody's going to spend an hour walking on a treadmill. What a crazy thing to do. Why would one ever do that?
One of the things that amuses me in today's world is the fraction of people who play video games that take them back to the middle ages. What happens in the future? What do people do in the future? A lot of purposes that we have today are generated by scarcity of one kind or another. There are scarce resources in the world. People want to get more of something. There's scarce time in our lives. Eventually, those forms of scarcity will disappear.
The most dramatic discontinuity will surely be when we achieve effective human immortality, which, whether it's achieved by biology or digitally is not clear, but that is something that inevitably will be achieved. An awful lot of current human purposes have to do with: "I'm only going to live a certain time, so I'd better get a bunch of things done." What does it look like when things can be executed automatically? If you have a purpose, it can be executed automatically, so you don't have the kinds of drivers for purpose that we have today. What does it look like?
There are some bizarre hypotheses one might have. One hypothesis is, well, people will look back to a time when there was scarcity, and people could say, "What did people choose to do at that time?" Just as for a very long part of history, and even to some extent today, people look back to antiquity, to the religions created long in the past and say, "When those things were created, people had important issues going on. Let's look at how they resolved them at that time."
One of my more bizarre hypotheses starts from the fact that today is the first time in history at which a large fraction of what goes on in the world is being recorded in some way or another. One of the things that could happen in the future, when the current set of purposes aren't issues anymore, is that people would say, "At a time when people did have scarcities of various kinds, what did they choose to do? Let's go study that time as carefully as possible."
Then every detail of what we do in our time, which ends up getting recorded, becomes fodder for what it means to be human with purposes: let's go do what they did in 2015, or whatever. That's a slightly extreme version; although, when we look at the large span of history and see people taking their purposes from a few thousand years earlier, it's not quite as crazy as it might at first seem.
One of the things I would like to have a great answer to is, what do the derivatives of humans in the future end up choosing to do with themselves? One of the potential bad outcomes is that they're just playing video games all the time. The future of civilization is everybody's playing video games. They're playing World of Warcraft of the future, so to speak.
~ ~ ~
The history of AI is a funny history, and "AI" is a word whose use in technical language keeps evolving. These days, AI is very popular, and people have some idea of what it means; we can talk about AI and people have some notion of what we're talking about. I've watched this evolution over the course of forty years now.
Back when computers were first being developed, in the 1940s and 1950s, the typical title of a book about computers, or an article about computers in the newspaper, was "Giant Electronic Brains." The idea was that just as things like bulldozers and steam engines have automated mechanical work, so computers would automate intellectual work; there would be a giant electronic brain. That promise turned out to be harder to fulfill than many people expected. People didn't know what was involved in making brain-like activity, and it turned out it wasn't very easy.
There are even amusing movies from the 1950s. Computers as AIs got into science-fiction-ish portrayals long ago. There's a cute one called Desk Set, which is about an IBM computer being installed in some company and leaving everybody without a job to do. It's cute because the computer gets asked a bunch of reference library questions in this movie. As we were building WolframAlpha, one of the questions we had was, can we do all of the reference library questions from the Desk Set movie from 1957? Eventually, in 2009, we could do them all.
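For what it's worth, that kind of reference-library question is now a one-liner in Wolfram Language; the particular question here is just an illustrative example, and the call assumes connectivity to the WolframAlpha service:

    (* ask a Desk Set-style reference question programmatically *)
    WolframAlpha["total weight of the Earth", "Result"]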
There was first a great deal of optimism that we could automate intellectual work in the same way as we've been able to automate mechanical work. A lot of government money got spent on that in the early 1960s. It basically just didn't work.
Neural networks had been discussed, particularly by McCulloch and Pitts in 1943. They had come up with a conceptual, formal model for how brains might work. They made the observation that their brain-like model would correspond to being able to do computations like Turing machines; they knew about the universal Turing machine idea from Alan Turing in 1936. From that it emerged that we could make these brain-like neural networks that would be able to be general computers.
In fact, that thinking was the way that Turing's work on universal computation flowed into the practical work that was done by the ENIAC folk and von Neumann and people like that, on practical computers. It didn't come directly from Turing machines; it came through this side road of neural networks. People set up simple neural networks, and the simple neural networks didn't do terribly interesting things.
Frank Rosenblatt invented these things called perceptrons, which were one-layer neural networks. Then a terrible thing happened to neural networks in the '60s: Marvin Minsky and Seymour Papert wrote a book entitled Perceptrons, where they basically proved that perceptrons couldn't do anything interesting. Their proof is absolutely correct: perceptrons can only make linear distinctions between things.
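A quick way to see that limitation, as a sketch in Wolfram Language: a perceptron is just a linear threshold unit, and no choice of weights reproduces XOR, which is not linearly separable.

    (* a one-layer perceptron: a linear threshold unit *)
    perceptron[w_, b_][x_] := Boole[w . x + b > 0];
    xor = {{0, 0} -> 0, {0, 1} -> 1, {1, 0} -> 1, {1, 1} -> 0};
    fitsQ[w_, b_] := And @@ (perceptron[w, b][#[[1]]] == #[[2]] & /@ xor);

    (* a blind search over 100,000 weight settings finds none that work,
       just as Minsky and Papert's proof says *)
    Select[Table[{RandomReal[{-5, 5}, 2], RandomReal[{-5, 5}]}, {100000}],
      Apply[fitsQ]] === {}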
The problem was, and this is a typical academic trait, people said, "These guys have written a proof that these neural networks can't do anything interesting; therefore, no neural networks can do anything interesting, so let's forget about neural networks." That happened for a while.
Meanwhile, there were a couple of different approaches to AI: one based on understanding, at a formal, symbolic level, how the world works, and the other based on doing statistics and probabilistic kinds of things. There was a question: "Well, are we going to be able to do symbolic AI?" One of the test cases was, can we teach a computer to do something like integrals? Can we teach a computer to do calculus? That was a test case for AI from the late 1960s. Then there were things like machine translation that people thought would be a good example of what computers could do.
The basic bottom line was that by the early '70s, that stuff had crashed. Then there was a phase of things called expert systems, the next round of AI, which came up in the late '70s and early '80s: have a machine capture the rules that an expert uses to figure out what to do. That petered out. In fact, my first company ended up, somewhat against my wishes, going in that direction in the end. In any case, that was the next phase.
Then, for quite a long time, AI became this crazy thing: "nobody really does that," "it's a fake thing," "there's nothing interesting about it." I, myself, have been interested in how you make an AI-like thing since I was a kid, which is a depressingly long time ago now. I was interested particularly in how you take the knowledge that us humans have accumulated in our civilization, and automate answering questions on the basis of that knowledge.
I thought about this first around 1980. I thought about how you do that symbolically, by building a systematic framework that can break questions down, turn them into symbolic things, and answer them. I concluded that to do this well, we would have to have a brain-like thing that involves fuzzy questions, fuzzy answers, these kinds of things. I thought building a brain would be hard. I worked on it a bit, I even worked on neural networks at that time, and didn't make much interesting progress. I put it aside for a while.
I have this approach of keeping these difficult projects, which I try to think about every few years, to figure out whether the ambient technology in the world is ready for that project yet. Back in the 2002 to 2003 timeframe, I thought I should think about that question again: What does it take to make a computational knowledge system? I realized that the science that I had done pretty much showed that my original belief about how one had to do this was completely wrong.
My original belief had been in order to make a serious computational knowledge system, you first have to build a brain-like thing, then you have to feed it knowledge, just like we learn things in standard education. Then you'll have a good computational knowledge system.
I realized, as a result of a bunch of science that I'd done, that there isn't this bright line between what is intelligent and what is merely computational. I had assumed that there was some magic thing, a transistor of intelligence or something, that there was this magic mechanism that allows us to be vastly more capable than anything that is merely computational.
It turned out, what I showed scientifically, is that that's just not the case. One of the challenges always, for somebody like me at least, is how do you take these basic science, almost philosophical conclusions, and decide to do something on the basis of them? Do you take that philosophical dog food and believe in that?
For me, that meant building the technology. If it's possible to do this, build the technology stack that does it. That's what led to WolframAlpha, for example. What I discovered from that is that, yes, it works: it is possible to take a large collection of the knowledge that's in the world, and automatically answer questions on the basis of it, using what are essentially merely computational techniques.
There's a footnote to that, an important footnote. When one thinks of what is merely computational, one often thinks of writing a program. How does one write a program? A programmer sits down and says, "I want to write a program that does this, so I'll write this module, I'll write that module. How am I going to achieve what I'm trying to achieve with this program?" At every step, I'm taking one step at a time toward where I want to go.
What I had discovered was an alternative way to do engineering, something much more analogous to what biology does in evolution: just say that out there in the computational universe of possible programs, there's an infinite number of possible programs. Suppose you go out and look in that space of possible programs, even just look at random at a trillion programs, programs simple enough that one can have good coverage of all possible programs of a given kind, and ask what these programs do. One might think that none of them would do anything interesting, that they'd all be just simple programs that do simple things, so who cares. But what I had found scientifically was that that wasn't the case. I looked particularly at cellular automata, but also Turing machines and other kinds of things, and even very simple examples of those kinds of programs can already do very sophisticated things. One of my conclusions was that this is interesting in terms of understanding how nature works, but it's also important in terms of finding technology.
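Rule 30, the cellular automaton, is the canonical example of this, and in Wolfram Language you can see it in one line:

    (* rule 30: an extremely simple program whose behavior is highly complex *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 100]]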
In effect, what we normally do when we build a program is build the piece of technology step-by-step. The other thing we can do is go out into the computational universe and mine technology out of it. Typically, the challenge is the same challenge that we face in doing physical mining. We go and find an amazing supply of, let's say, iron, or cobalt, or gadolinium with some special magnetic properties. We say, "Great, it has these wonderful magnetic properties. What do we do with this?" Can we connect that capability to an actual human purpose, to a goal that we have, to something that we want that technology to be able to do? In the case of magnetic materials, we have plenty of ways to do that. What we find is that there are all sorts of wonderful things in nature. Can we entrain them into our technology by finding some useful human purpose that they achieve?
In terms of programs, it's the same story. There are all kinds of programs out there, even very tiny programs, that do very complicated things. Can we entrain them for some useful human purpose? This is a thing that we learned how to do: given a particular purpose, given a particular goal, just go and exhaustively search a trillion programs and find one that does a useful thing for that purpose.
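A toy version of that kind of mining, sketched in Wolfram Language; the "purpose" here is pseudorandomness, judged by a crude compressibility heuristic of my own choosing:

    (* search all 256 elementary cellular automaton rules for ones whose
       center column compresses badly, i.e. looks random enough to be useful *)
    score[r_] := ByteCount[Compress[CellularAutomaton[r, {{1}, 0}, {2000, {{0}}}]]];
    TakeLargestBy[Range[0, 255], score, 5]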
Sometimes those programs are doing things like making random number generators, hash coding systems, things that have to do with natural language understanding. Sometimes they're doing more creative things. One thing we did a bunch of years ago now was a music generation system, where you basically just press a button, and it will search a big space of programs. It will find a program that, according to some heuristic, matches some particular musical style, and it will play you that. It was an interesting case, because it was a case of automated creativity.
People say, "You've got these machines, but there's one thing that humans are better at, and it's being creative." The thing that I find most interesting about that little musical creation site is that I had assumed that composers and so on would say, "I need some inspiration about my composition, and then maybe I can dress up that inspiration using a computer." But instead, I ran into people who'd say, "It's a nice site that you have. I go there to get inspiration for some little core of a tune, which I then dress up as a human to make it meaningful and fit into what I'm trying to do." That's a case where we're seeing that this attribute of originality and creativity is readily available in the computational universe.
It's the same thing as saying, go out into the physical world and find the beautiful places to photograph. They exist already; it's a question of us picking the ones we care about to look at.
Backing up to this question of the arc of AI, one of the things we discovered was that, as a practical engineering matter, there's a lot you can do by discovering programs in the computational universe of possibilities, rather than merely building a program step-by-step. What we also spent a lot of time doing is building this knowledge-based language, which tries to incorporate the knowledge of the world right into the language.
The traditional approach to computer languages is to make a little language that represents the operations computers intrinsically know how to do: allocating memory, setting values of variables, iterating, changing program counters, whatever else. A language may be a slightly higher-level version of that, but it's fundamentally telling computers to do things in their own terms. That's been the tradition of programming languages for basically fifty years.
My theory about these things is to make a language which panders not to the computers but to the humans: a language which, as much as possible, can take what humans think of and convert it into some form that computers can understand. Part of what humans think of is what humans know about the world. They know about the existence of Cambridge, Massachusetts, or they know that there'll be a sunrise tomorrow, that type of thing.
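A small Wolfram Language example of that kind of pandering: the language already knows what Cambridge, Massachusetts is and what a sunrise is, so the human-level concept maps directly onto a computation (the call assumes access to the curated data):

    (* when is the next sunrise in Cambridge, Massachusetts? *)
    Sunrise[Entity["City", {"Cambridge", "Massachusetts", "UnitedStates"}]]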
Can you encapsulate the knowledge that we've accumulated, both in science and in the collection of data in the world, into a language which we can use to communicate with computers? That's the big achievement of my last thirty years or something, being able to do that.
One of the things that is significant there is that when you're trying to solve this problem of doing computational knowledge, such a language is the way you need to encode things about the world and things that you can do in the world. In terms of this arc of AI, one set of things that would be considered very AI-ish is being able to take the knowledge of the world and answer questions on the basis of it.
There's a whole list of things people would say in the '60s: when we can do this, we'll know we have AI. When we can do an integral from a calculus course, we'll know we have AI. When we can have a conversation with a computer and have it seem like a human. At that point, one of the things that seemed difficult was, well, gosh, the computer just doesn't know enough about the world. You start asking the computer what day of the week it is, and it might be able to answer that. Who's President? It probably can't answer that. These kinds of things. At that point, you know you're talking to a computer and not to a person.
At this point, when it comes to these Turing tests, conversational tests of AI, people who've tried connecting, for example, WolframAlpha to their Turing test bots lose every time. Because all you have to do is start asking it sophisticated questions, and it can answer them. No human can do that. By the time you've asked it a few disparate kinds of questions, there will be no human that knows all of those things, and yet the system can know them. In that sense, we've achieved good AI, at that level.
There's another branch, which is that there are certain kinds of tasks that are very easy for humans but have traditionally been very hard for machines. The standard one is visual object identification. What is this thing? We can know what it is, we have some easy description of it, but a computer was just hopeless at that.
In the last year, that completely changed. For example, in March or April, sometime early this spring, we brought out a little image identification system, a website, et cetera. A bunch of companies have done somewhat similar things. Ours is, for somewhat interesting reasons, a little better than other people's. It doesn't deserve to be better, but it happens to be somewhat better. You show it something, and for about 10,000 kinds of things, it will tell you what it is, and it does a pretty good job. It's fun to try to confuse it. It's fun to show it an abstract painting and see what it thinks it is. But it does a pretty good job of saying what things are. How does it work? It works using the exact same technology that McCulloch and Pitts imagined in 1943, and that lots of us worked on for neural networks in the early '80s.
What happened that made it work now and didn't let it work then? If you look at what the system does today, there are maybe 5000 picturable nouns in English, common nouns which you can make pictures of, maybe 10,000 if you include somewhat specialized things like special kinds of plants and beetles that people can, with some frequency, recognize.
Well, the thing that we can now do is train it on 30 million images of all these kinds of things, using this big, complicated, messy neural network. It probably doesn't matter much what the details of that neural network are. We do this training, and it takes about a quadrillion GPU operations. At the end of it, the network does a pretty good job of recognizing 10,000 kinds of things.
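That capability is now a single function call in Wolfram Language; the built-in test image here is just a stand-in for whatever picture you want to show it:

    (* ask the trained network what kind of thing is in an image *)
    ImageIdentify[ExampleData[{"TestImage", "House"}]]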
We, as humans, are impressed by this because it's pretty much what we humans can do. It has about the same training data that we have, it's about the same number of images that a human would see in the first couple of years of their life. It's about the same number of operations that have to be done to do the training, it's about the same number of neurons in at least the first levels of our visual cortex.
The details are all different. The actual way these artificial neurons work has little to do with the way actual neurons in the brain work, but it's conceptually similar, and there's a certain universality to what's going on. At a mathematical level, it's a composition of a very large number of functions, with certain continuity properties that allow you to use calculus methods to incrementally train the thing. Once you have those attributes, it seems, you can end up with something that does the same job that we do in visual object recognition.
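Here is a minimal sketch, in Wolfram Language, of what "calculus methods" means here; the architecture is a deliberately tiny toy of my own, not the real network:

    (* a "network" that is a composition of smooth functions of its weights *)
    vars = {w1, w2, b};
    p[x_] := LogisticSigmoid[w2 LogisticSigmoid[w1 x + b]];

    (* squared-error loss on two toy input-output pairs *)
    loss = Total[(p[#[[1]]] - #[[2]])^2 & /@ {{0, 0.1}, {1, 0.9}}];

    (* because everything is differentiable, D gives exact gradients,
       and training is just repeated small steps downhill *)
    grad = D[loss, {vars}];
    step[v_List] := v - 0.1 (grad /. Thread[vars -> v]);
    NestList[step, {0.5, 0.5, 0.0}, 5]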
It's interesting because back in the '80s, people had successfully done OCR—optical character recognition. They were able to take the twenty-six letters of the English alphabet, and say, okay, is that an A? Is that a B? Is that a C? And so on. That could be done for twenty-six different possibilities, but it couldn't be done for 10,000 possibilities. It's just a matter of the scale of the whole system that makes that possible today.
But as far as "have we got to AI yet?" goes, these are important components. There are basically a few of them: there's visual object recognition, there's voice to text, and there's language translation. Those are three kinds of things which humans manage to do with varying degrees of difficulty. I can't do language translation for any human language; maybe Latin, a little bit. People can learn to do human language translation. The other two: voice to text, people learn to do that; and visual object recognition, people learn to do that in the first couple of years of life.
These become essentially some of the missing links to how we make machines that are humanlike in what they do. For me, one of the interesting things has been incorporating those capabilities into a precise symbolic language. There's a whole lot of stuff to say that is a 500-year story about what we now need to do, in terms of having a symbolic language to represent the everyday world. We now have the capability to say this is a glass of water or something. We can go from a picture of a glass of water to the concept of a glass of water. Now we have to have some actual symbolic language to represent those things.
In my own efforts, I started off trying to represent mathematical technical kinds of knowledge, and then went on to lots of other kinds of knowledge. We've done a pretty good job now of systematic objective knowledge in the world. Now the question is to represent everyday discourse and the kinds of things that people say to each other, in a precise symbolic way.
In a precise symbolic representation, you might say "X is greater than 5." That's a predicate. You might also say, "I want a piece of chocolate." That's also a predicate. It has an "I want" in it, rather than "Chocolate has higher calorie value than such and such." We have to try to find a symbolic representation, a precise representation, of these things that we have traditionally expressed in human natural language.
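The first kind of statement already has a standard symbolic form in Wolfram Language; for the second, here is a purely hypothetical sketch (the head Want, the Speaker form, and the entity name are all invented for illustration; none of them are existing representations):

    Greater[x, 5]   (* the systematic kind of predicate; displays as x > 5 *)

    (* hypothetical: a symbolic form for "I want a piece of chocolate" *)
    Want[Speaker["I"], Quantity[1, "Pieces"], Entity["Food", "Chocolate"]]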
I've been interested in this; it's one of the things I'm thinking about these days. It's interesting, because I like to do my homework, and I like to find out what other people have figured out. I start reading the literature, and most of it points back to the 1600s. There were people like Leibniz in the late 1600s, and a man called John Wilkins. There was this period when people made what they called philosophical languages.
The idea of a philosophical language was essentially what I'm now trying to do: a symbolic representation of the world. One thing that I like is looking at the philosophical language of John Wilkins, where you can see how he divided up the things that were important in the world. It's somewhat sobering, but somewhat pleasing in some ways. Some aspects of the human condition have been the same since the 1600s; the same types of issues come up.
Some are very different. The whole section on death and the various forms of human suffering was huge at that time; in today's ontology, it's a lot smaller. Big achievement. It's interesting to see how a philosophical language of today would differ from a philosophical language of the mid-1600s. The difference is a measure of progress.
One of the things that I'd like to be able to do is to have a symbolic representation of everyday discourse in the way that we now have a symbolic representation of systematic discourse. There are many of these attempts at formalization that have happened over the years.
In mathematics, for example, there's Whitehead and Russell's Principia Mathematica in 1910; that was the biggest showoff effort, at least. There had been previous efforts by Frege and Peano, a little more modest in their presentation, to see how you would formalize, in that case, mathematics as a precise system. It's interesting what they managed to get right and what they got wrong. Ultimately, they were wrong in what they thought they should formalize. They thought they should formalize the process of mathematical proof, which turns out not to be the thing most people care about.
You had asked what a modern analog of the Turing test would be. It's an interesting question. There's being able to have the conversational bot, which is Turing's idea. That's definitely still out there. It hasn't been solved yet, but it will be solved. The only question is, what's the application for which it gets solved?
For a long time, I had been asking why we should care about that, because I was thinking the number one application was going to be customer service. While that's a fine application, in terms of my favorite ways to spend my life, it isn't particularly high up on the list. But customer service is precisely one of these places where you're trying to have a conversational interface.
One thing I realized is that there's one huge difference between Turing's time and our time in our method of communicating with computers. In his time, what he imagined was a conversation: you say some things to it, or you type some things to it, and it types some stuff back. In today's world, it shows you a screen back. The case that I was curious about a few years ago was: you go to a movie theater, and you can buy a movie ticket from a person, or you can buy a movie ticket from a machine.
For people like me who always like to use the latest techno-toys, as soon as those things appeared in movie theaters, I was using the machine only. For a long time, there was nobody else using the machine. Then in urban movie theaters, you started seeing more and more people using the machines. Now most people use the machines.
How is the transaction with the machine different from the transaction with the human? The main answer is that there's a visual display on the machine. It might ask you something, and you just press a button; you can see immediately, you can use your visual system to interpret something. It's a little different.
For example, with WolframAlpha, when it's used inside Siri, if there is a short answer, Siri will say the short answer back to you. But what most people want is the visual display of the bigger report that shows the infographic of this or that. This is interesting, because it's a nonhuman form of communication that turns out to be richer than traditional human communication.
If we were all incredibly fast, perfect artists, as we're talking, we could draw that infographic and say this is what I'm talking about. But in fact, in most human-to-human communication, we're left with pure language, whereas in computer-to-human communication, we have this much higher bandwidth channel of visual communication that turns out to be important.
The traditional Turing test is a little funny, because many of the most powerful applications fall away because we have this additional communication channel. For example, here's one that we're trying to pursue right now. It's a bot to communicate about writing programs. You say, "I want to write this program, I want to do this." It'll say, "I've written this piece of program, this is what it does. Is this what you want?" Blah-blah-blah. It's a back-and-forth bot.
There are also other kinds of bots, like tutoring bots, which understand a piece of chemistry or something. It's an interesting problem, because you have to make a model of the human. If you're trying to explain something, what is the right thing to say at this point? What is the human confused about here? You have to have a model of the human to know what they're confused about, and so on.
What has been difficult for me to see is the right motivation for achieving a Turing-test-type AI. As a toy, one could make a little chat bot that people could chat with. That will be the next thing. We can see that the current round of deep learning, particularly recurrent neural networks, makes pretty good models of human speech and human writing. It's pretty easy to type in, say, "How are you feeling today?" and have it know that when somebody asks this, this is the type of response you usually give.
For example, I want to figure out whether I can automate responding to my email. I know the answer is no. A good Turing test for me will be when I can have a bot respond to most of my email. That's a tough test. Some aspects of the email, like "I don't care about this, throw it in the spam folder," are comparatively easy. But if somebody says, "What should we do about this inconsistency in the design of our product?" or "Do you approve this thing?", to be able to answer that with any reasonable degree of confidence is hard.
The thing to realize is that one has to learn those answers from the human the email is connected to. I might be a little bit ahead of the game, because I've been collecting data on myself for about twenty-five years. I have every piece of email for twenty-five years, every keystroke for twenty years, and lots of other stuff like that. I should be able to train an avatar, an AI, that will do what I can do perhaps better than me, more easily than most.
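As a minimal sketch of the comparatively easy end of that, in Wolfram Language; the training pairs are invented stand-ins, and a real version would be trained on years of archived mail:

    (* learn crude reply categories from a labeled mail archive *)
    replyClassifier = Classify[{
        "You have won a prize, click this link" -> "spam",
        "Cheap pills, limited offer" -> "spam",
        "Congratulations on the product launch!" -> "acknowledge",
        "Great talk yesterday, thanks!" -> "acknowledge",
        "Do you approve this change to the product design?" -> "needs-me",
        "What should we do about this inconsistency?" -> "needs-me"}];

    replyClassifier["Please approve the new logo design"]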
In a world where AIs are figuring out a lot of stuff for us, people worry about the scenario where the AIs take over. My belief about that scenario is that something much more, in a sense, amusing will happen first. It will quickly become the case that the AI knows what you intend to do, what you want to do, and it's good at figuring out how to get there.
Just like with the car GPS, we tell it we want to go to this destination. I don't know where the heck I am, I just follow my GPS. My children are always amused by the fact that I had a very early GPS that was like "Drive this way, this way, this way," and we were on one of these piers going out on Boston Harbor—I just followed the GPS.
What will happen, more to the point, is that there will be an AI that knows our history, and knows that on this menu, you're probably going to want to order this, or you're talking to this person, you should talk to them about this. I've looked at your interests, I know something about their interests, these are the common interests that you have, these are some great topics that you can talk to them about. More and more, the AIs will suggest what we should do, and I suspect most of the time people will just follow what the AIs tell them to do. It would probably be better than what they figured out for themselves.
To me, the AI takeover scenario looks like the laziness of humans, except it's not laziness, it's taking good advice. The AI is telling you what to do, and it's better than what you would've figured out for yourself. Just do what the AI says.
There is a complicated interaction, in terms of technology. You can do terrible things with technology and you can do good things with technology. People will always be people, and some people will try to do terrible things with technology, and some people will try to do good things with technology.
One of the things that I like about technology today is the equalization that it's produced across lots of people. There was a time when I used to be very proud that I had the best computer of anybody I knew. Now, I have the same computer as pretty much anybody I know. We have the same smartphones, and pretty much the same technology can be used by a decent fraction of the 7 billion people that exist. It's not perfectly flat, but it's reasonably flat. We'll see the same type of thing in lots of other areas of technology—medical technology and other kinds of things. I don't know whether it's luck or whether it has to be that way, but these pieces of technology that one's producing can be very broadly available. It's not the case that the King's technology is different from everybody else's technology. That's an important thing.
We make stuff that we sell to people and people use it all over the world, and sometimes we've even thought about publishing these indices of how much Mathematica gets used, how much WolframAlpha gets used in different countries around the world, because you know a huge amount from that. You know all kinds of stuff. There are countries that are very technologically sophisticated, and there are countries where they are not.
The great frontier 500 years ago was literacy. Today, it's doing programming of some kind. Today's programming will be obsolete in not very long. For example, when I was first using computers in the '70s, people would say, "If you're a serious programmer, you've got to be using assembly language." Now, I often ask these computer science graduates, "Did you learn assembly language?" They say, "Yes, I had one class about assembly language."
Why do people not learn assembly language? Because computers are better at writing assembly language than humans are, and only a very small set of people need to know the details of how a language gets compiled into assembly language. A lot of what's being done by armies of programmers today is similarly mundane. It's stuff where the goals can be described much more succinctly, and it turns into some giant blob of Java code or JavaScript code or something. There's no good reason for humans to be writing all that stuff.
That's what people like me try to automate, so that we can automate the process of programming, so that what's important is just going from what the human wants to do to getting the machine, as automatically as possible, to do it. One of the things that I'm interested in right at this moment is the equalization this is producing. In the past, if you wanted to write a serious piece of code, a program for something important and real, it was a lot of work. You had to know quite a bit about software engineering, you had to invest months of time, you'd have to hire programmers who knew this, or you'd have to learn it yourself. It was a big investment.
Now, the big achievement, from having automated a lot of the stack, is that that's not true anymore. A one-line piece of code, even a thing you could tweet, sometimes already does something interesting and useful. That means it unlocks a vast range of people who previously couldn't make computers do things for them.
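For instance, here is a genuinely tweet-sized Wolfram Language line; the choice of text is arbitrary:

    (* one short line: a word cloud of Alice in Wonderland *)
    WordCloud[DeleteStopwords[ExampleData[{"Text", "AliceInWonderland"}]]]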
Now what happens? At this point, kids and fancy professionals are at the same level in terms of telling computers what to do for them. How do you teach compositional thinking and programming to as broad a range of people in the world as possible?
One of the things that I would like to see is for there to be a large number of kids around the world in random countries, who learn the new capabilities of knowledge-based programming, get to the point where they can produce code effectively that's as sophisticated as anybody in the fanciest, most educated places can. This is within reach.
We've gotten to the point where anybody can learn to do knowledge-based programming and, more importantly, learn to think computationally. The actual mechanics of the programming are pretty easy now. What's difficult is imagining things in a computational way: thinking through how to conceptualize some activity that we have computationally.
How do you teach computational thinking? In mathematics, for example, there's 1000 years of history about how we teach mathematical thinking. For some initiative we have, I was asking about calculus books. They always have fourteen chapters, if I'm not mistaken. I asked how long they've had those same fourteen chapters. The claim was that the very first calculus book, written by Colin Maclaurin in 1727, had some of the same structure, and many of the examples were the same. How you feed mathematics to humans is something that has been developed over a long period of time and is very precisely known.
A couple of points to make. First of all, if you're writing Wolfram Language code, I'm ultimately responsible for the design and structure of how the language works. In the case of DNA code and biology, there's nobody you can point to and say, you're responsible, you designed this. It's something that has evolved over a long period of time, much like human natural language, so there's some degree of complexity in knowing what it's going to do; a designed language should do what the designer thought it should do. Which is not to say that it isn't super useful to program living systems, not least because we are living systems, and because living systems are the only example we know of successful molecular computing. There may come a time when we've managed to engineer things, when we've managed to design a lifelike thing that is as designed as a computer language is today. But we're not at that point. We have to use the molecular computer that we have, which is us and our biology.
In terms of how to do that programming, it's a super interesting question. If you look at the nanotechnology tradition, there's been this idea of how do we achieve nanotechnology? Answer: we take technology as we understand it on a large scale today and we make it very small. We say, "How can we make a CPU chip that is on an atomic scale?" Maybe we'll make it mechanically, but fundamentally, we're using the same architecture as a CPU chip that we know and love.
That isn't the only approach one can take. A lot of the things I've done, looking at simple programs and what they do, suggest that you can have even very simple impoverished components, and with the right compiler, effectively, you can make them do very interesting things.
Doing molecular-scale computing is one of these projects that I've wanted to do for a long time, but I just don't think the ambient technology is at the point where one wouldn't have to spend a decade building it to get there. I'm hoping that we're almost at the day when it's possible for somebody like me, who isn't going to build all that ambient technology, to do something with molecular computing.
My guess about how one could do that is to say that we've got these components, which are enough to make a universal computer. You might not know how to program with these components, but by doing searches in the space of possible programs, one starts to build up building blocks, and one can then create a compiler for it. The surprising thing is that impoverished stuff is capable of doing sophisticated things, and the compilation step is not as gruesome as one might expect.
One might think about this very tiny Turing machine: the simplest universal Turing machine, which has two states and three colors. It has a little rule that you could write in English in probably a sentence, and you could draw a picture of it; it's tiny and simple. You might ask the question, "How can I compile a program that I might care about down to that Turing machine?"
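For concreteness, a sketch in Wolfram Language, assuming the machine meant here is the Wolfram 2,3 Turing machine (number 596440 in Wolfram's numbering, proved universal in 2007):

    (* the rule: two states, three colors, small enough to draw in one picture *)
    rule = {596440, 2, 3};
    RulePlot[TuringMachine[rule]]

    (* evolve it from a blank tape; each step is {head state and position, tape} *)
    steps = TuringMachine[rule, {1, {{}, 0}}, 100];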
I haven't done that, but I think one will find a layer of nasty messy machine code, and then above that, it gets pretty simple. That layer of nasty messy machine code will add some inefficiency, maybe a factor of 10,000, maybe more. But a factor of 10,000 is nothing when you're dealing with the scale of molecules as compared to large-scale things.
I guess my own prejudice would be that just searching the computational universe, trying to find programs that are interesting, finding building blocks that are interesting, is a good approach. A more traditional engineering approach that tries, by pure thought, to figure out how to build stuff is, my guess is, a harder row to hoe.
It doesn't mean it can't be done, but my guess is that one will be able to do some amazing things by just saying: these are the components, we have a good representation for them, let's search the possible programs we can make with these things. One might say, "We can get this combination of molecules. It will do all kinds of fun things. It will make this big blob of stuff. It will do this, it will do that." But why do we care? Then it's back to this question of connecting human purposes to what is available from the system.
What does the world look like when many people know how to code? Coding is a form of expression, just like English writing is a form of expression. To me, some simple pieces of code are quite poetic. They express ideas in a very clean way. There's an aesthetic thing, much as there is to expression in a natural language.
In general, what we're seeing is there is this way of expressing yourself. You can express yourself in natural language, you can express yourself by drawing a picture, you can express yourself in code. One feature of code is that it's immediately executable. It's not like when you write something, somebody has to read it, and the brain that's reading it has to separately absorb the thoughts that came from the person who was writing it.
If you look at how knowledge is transmitted in the history of the world, one form of knowledge transmission is essentially genetic. That is, you have an organism and its progeny has the same features that it had, so that's level zero.
Level one is the kind of knowledge transmission that happens with things like visual object recognition. When a new critter is born, it has a neural network with some random connections in it. But as the critter goes around the world, it starts recognizing different kinds of objects, and it learns that knowledge. Throughout the animal kingdom, critters have been learning visual object recognition. That's the next level of knowledge.
Then there's the level of knowledge that was the big achievement of our species, which is natural language: the ability to take knowledge and represent it abstractly enough that we can communicate it in a disembodied way, brain to brain, so to speak. The individual brain doesn't have to relearn from the raw material; the knowledge can be taken abstractly and communicated to the next brain down the line. Arguably, natural language is the most important invention in human history. In many respects, it's what led to our civilization, among many other things.
Now we've got another level of this, and probably one day it will have a more interesting name. With, essentially, knowledge-based programming, we have a way of representing knowledge of the world. It's not just mathematics or a computer language; it's a thing that represents real things in the world, but it does so in a precise and symbolic way. It has the feature that it is not only understandable by brains, and communicable to other brains and to computers, but also immediately executable.
I'm pretty sure that this is a big deal, and I'm pretty sure that, just as in some respects natural language gave us civilization, there's a question of what knowledge-based programming will give us. One bad answer is that it will give us the civilization of the AIs. That'll be disappointing for the humans. That's what we don't want to have happen, because there could be points at which the AIs are doing a great job—they're communicating with each other, they're doing all these kinds of things, and we're pretty much left out of it, because there's no intermediate language, there's nothing to interface with our brains.
One of the questions that I'm super interested in right now is a question in this fourth level of knowledge communication. What is the big thing that that would lead to? It's like if you were Caveman Ogg or something, and you were just realizing that language was starting, could you imagine civilization from that point? What should we be imagining right now?
This relates to the question of what the world would look like for humans if most people could code. There are clearly many trivial things that would change: contracts would be written in code; restaurant menus might be written in code. "This is how the food is going to be made. Okay, I want to change this piece and that piece," and so on. There are simple things like that that would change.
There are probably much more profound things that would change. The rise of literacy gave us things like bureaucracy, for example, which had existed before, but which literacy dramatically accelerated, for better or worse; it gives us a greater depth of governmental systems. What does the analog of that look like when most people can code? How does the coding world relate to the cultural world?
One of the things I've been thinking about recently is high school education. How do you teach programming, coding, computational thinking, on a high school level? One of the possibilities is you have a course and you tack it onto all the many things that people are being taught today.
The other possibility, which is much more interesting, is that you rethink all the existing areas. If we have computational thinking, how does that affect how we study history? How does that affect how we study languages, social studies, whatever else? The answer is that it has a great effect.
Imagine you're writing your essay. Today, the raw material for a typical kid's essay is: read something—this is the raw material—and now write about that. It is not the case that kids can generate new knowledge very easily. But in the computational world, that's no longer true. It's very straightforward for a kid who knows something about writing code to go to the beautifully digitized historical data and figure out something new. Then you're writing an essay about something you actually discovered.
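To make that concrete, here's a minimal sketch in Python of the kind of thing I mean; the file name and its columns ("city_populations.csv" with city, year, population) are hypothetical stand-ins for whatever digitized source a student actually has:

    import pandas as pd

    # Hypothetical data file: any digitized historical table works the same way.
    df = pd.read_csv("city_populations.csv")   # columns: city, year, population

    # A question no textbook answers directly: which cities grew fastest
    # between the earliest and latest years in the data?
    growth = (
        df.sort_values("year")
          .groupby("city")["population"]
          .agg(lambda pop: pop.iloc[-1] / pop.iloc[0])
          .sort_values(ascending=False)
    )
    print(growth.head(10))   # raw material for an essay nobody has written before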
This is the achievement of knowledge-based programming: it's no longer sterile, because the knowledge of the world is knitted into the language you're using to write code. Compare it with pure mathematics, which people teach everywhere. Basic math gets into all kinds of places; it hasn't gotten so far into the humanities, but it's just part of the way we think about things.
Similarly, computation is, in these times, part of the basic way we should think about things. The great thing about computation is that once we think about things computationally, they become immediately executable. Once we've formulated an idea computationally, and we know a little of the fairly straightforward mechanics of writing code, we can get the machines to go do the work. A kid can get a machine to do the work just the same way the fancy researcher can.
As I was saying earlier, this is the big issue. An AI on its own does not have a goal. Goals are a human construct. The thing that came out of lots of science stuff that I've done is this realization that intelligence and computation are the same thing. There's computation all over the universe, whether it's in a turbulent fluid producing some complicated pattern of flow, whether it's in the celestial mechanics of some interaction of an asteroid with this, that, and the other, or whether it's in brains.
You can ask of any of these systems: does it have a purpose? What is its goal? Does the weather have a goal? Does climate have a goal? This is one of those things people have been asking since Aristotle; for Aristotle, it was the question of the final cause. One can unpack it a little bit.
A lot of stuff that we see today is very obviously made for a purpose by humans, because it carries the vernacular of human engineering history. Take the Antikythera mechanism, the lump of gunk dredged up from a shipwreck dating to somewhere between 100 B.C. and 100 A.D. Was it made for a purpose? When it was dropped and broke in two, there were all these cogs sticking out, and we immediately knew it was made for a purpose. It wasn't just a pile of gunk, because cogs are part of the history of human engineering.
Given the history, it's very easy to recognize human purpose in things. It's a little bit similar to the question of whether something is alive or not. On Earth, it's very easy to answer that question: does it have RNA? Does it have cell membranes? But these kinds of criteria come from the history of life on Earth.
I remember when I was a kid and the first Mars landers were landing. Is there life on Mars? Is the green stuff that seems to come and go with the seasons vegetation or whatever? I remember being curious about what the tests would be. From today's vantage point, they're pretty amusing. The basic test was: scoop up a piece of Martian soil, feed it sugar, and see if it eats it. That was the top test. I don't think any of us would believe that life has to be something that eats sugar. But the question of what the abstract definition of life is, that's hard.
Back to this question of how you recognize purpose. One example is to look at the Earth from space. Can one tell that there's anything with a purpose hanging out on the Earth? Can one tell that there's civilization on the Earth?
I did this experiment maybe fifteen years ago now: I asked astronauts what they could see on the Earth that shows there is intelligence on the planet. The first thing I was told was that in the Great Salt Lake in Utah, there's a straight line. It turns out to be a causeway that divides two areas with very different colors of algae. It's a very dramatic straight line. Then I got interested in where the longest straight line made from lights is. There's a road in Australia that's long and straight. There's a railroad in Russia, in Siberia basically, that's long and straight, with lights that come on when trains stop at the stations. So you can see some straight lines and things.
Another good example: in New Zealand, there's a more or less perfect circle around Mount Egmont. I was doing this research before the Web was common, so you couldn't just go and look up a thing's attributes. We were trying to get maps of it and so on, and we were in touch with the New Zealand Geological Survey. They said, "If you're writing a textbook, please do not say that Mount Egmont is a circular volcano. The circle does not come from the volcano. The circle comes from the national park that was drawn around the volcano." There are sheep or something that graze inside the national park but not outside it, or the other way around, and that's what leads to the visible circle. It's another example of a piece of geometry that comes from humans.
It's pretty difficult to find clear examples of obvious purpose on the Earth as viewed from space. Another question that comes up, and it's a great question for the extraterrestrials: if we want to recognize extraterrestrials out there, how do we tell whether a signal we're getting has a purpose? In 1968, pulsars were discovered. Every few seconds, or every few milliseconds for some, you hear this flutter-like sound, a strictly periodic thing. At the time, the first question was: is this a beacon? Because what would make a periodic thing like that? It must be for a purpose. Well, it turns out it's just a neutron star rotating.
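Periodicity by itself, it's worth noting, is both easy to detect and easy for nature to produce. Here's a minimal sketch in Python, with a simulated "pulsar": a strictly periodic pulse train buried in noise. The autocorrelation finds the period immediately, and nothing about that tells you it's a beacon:

    import numpy as np

    rng = np.random.default_rng(0)

    # A simulated "pulsar": a strictly periodic pulse train buried in noise.
    period, n = 73, 4096
    signal = rng.normal(0.0, 1.0, n)
    signal[::period] += 6.0

    # Autocorrelation at positive lags peaks at the period (and its multiples).
    centered = signal - signal.mean()
    ac = np.correlate(centered, centered, mode="full")[n:]   # lags 1, 2, ...
    print(int(np.argmax(ac[:100])) + 1)   # searching the short lags finds 73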
This question comes up over and over again: what gives evidence of a purpose? Back in the early 1900s, Marconi and Tesla were both listening to radio signals from beyond the Earth. Marconi had a yacht in the middle of the Atlantic, where he could hear these weird sounds, a little bit like whale songs, coming in by radio. Tesla was very much of the view, "this is the Martians signaling us." How does one tell?
In fact, it was certain modes of the ionosphere: magnetohydrodynamic phenomena, just physics. It's one of these cases of "the weather has a mind of its own." How do you tell whether it's a thing that has intelligence and a purpose and all that kind of thing, or whether it's just the magnetohydrodynamics of the ionosphere?
One criterion one can potentially apply: if you can identify a purpose, is the thing minimal for achieving that purpose? Say you see a fork that you eat with, but it has incredibly elaborate ornamentation on it. Its purpose is to be a fork, but it also has all this ornament, which is not relevant to that purpose. Of course, the ornament may itself have a purpose: to give people a different emotional reaction to the fork, or whatever else.
But does something being minimal for its purpose mean that it was built for that purpose? When you look at a thing, there are typically different explanations you can give for what happens. One is the mechanistic explanation: the ball rolls down the hill because, at each moment of time, the gravitational pull does this and this and this. The other is that the ball rolls down the hill because it's satisfying the principle of least action, globally optimizing this particular quantity.
These are the two explanations you can typically give for something: the mechanistic and the teleological. Which is the winning explanation, the right explanation? One possible criterion: the thing was built for a purpose if it is minimal in achieving that purpose.
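At least in simple cases, you can check that the two explanations pick out the same behavior. Here's a hedged numerical sketch in Python (using numpy and scipy) for a ball in uniform gravity: step the force law forward moment by moment, and separately ask which path between the same two endpoints makes the action stationary. The same trajectory comes out:

    import numpy as np
    from scipy.optimize import minimize

    g, T, n = 9.81, 1.0, 50            # gravity, flight time, time steps
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    y0, yT = 0.0, 0.0                  # thrown up and caught at the same height

    # Mechanistic explanation: at each moment, gravity does this and this.
    def step_forward(v0):
        y, v = [y0], v0
        for _ in range(n - 1):
            v -= g * dt
            y.append(y[-1] + v * dt)
        return np.array(y)

    # Teleological explanation: of all paths with these endpoints, the real one
    # makes the action (kinetic minus potential, summed over time) stationary.
    def action(interior):
        y = np.concatenate([[y0], interior, [yT]])
        v = np.diff(y) / dt
        return float(np.sum(0.5 * v**2 - g * y[:-1]) * dt)

    best = minimize(action, np.zeros(n - 2))
    y_tele = np.concatenate([[y0], best.x, [yT]])
    y_mech = step_forward(v0=g * (T + dt) / 2)   # launch speed that lands back at yT

    print(np.max(np.abs(y_tele - y_mech)))       # tiny: the same parabola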
The problem is that essentially all of our existing technology fails that test. We can imagine technology that works that way, but most of what we build is absolutely steeped in technological history, and it's incredibly non-minimal for achieving that purpose. You look at a CPU chip, there's no way that's the minimal way to achieve what a CPU chip achieves, yet it's steeped in all this history of our engineering technology.
This question of how you identify whether a thing has a purpose is hard, and for the extraterrestrial question it's important. One good thought experiment: imagine that the extraterrestrials could arrange stars however they want. How would they arrange them to show that they were arranged for a purpose? Would they put them in a straight line? Probably not, because we can imagine all kinds of physical processes that might do that. They wouldn't put them in equilateral triangles, because some particularly simple physical process might do that too. Would they have a "Buy Coke" sign? Would they have some piece of alien artwork? We would undoubtedly fail to recognize the alien artwork as having an intelligent purpose.
It's an important question, because when we look at radio noise from the galaxy, it's very similar to the CDMA transmissions from cell phones; it's not fundamentally different. Those transmissions use pseudo-noise sequences, which are completely deterministic and exactly repeatable. But they come across as noise, and they are deliberately set up to look like noise so that they don't interfere with other channels.
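Just to illustrate what kind of thing a pseudo-noise sequence is, here's a minimal sketch in Python of a maximal-length linear-feedback shift register, the basic ingredient of such codes (the taps and seed here are just one textbook choice): completely deterministic, exactly repeatable, and yet a stretch of it reads like coin flips.

    # A Fibonacci LFSR over GF(2). With these taps the feedback polynomial is
    # x^5 + x^2 + 1, which is primitive, so the register cycles through all
    # 31 nonzero states: a maximal-length pseudo-noise sequence.
    def lfsr(state, taps, n):
        out = []
        for _ in range(n):
            out.append(state[-1])          # output the last bit
            fb = 0
            for t in taps:
                fb ^= state[t]             # XOR together the tapped bits
            state = [fb] + state[:-1]      # shift right, feed back in front
        return out

    seq = lfsr([1, 0, 0, 1, 0], taps=[2, 4], n=62)
    print(seq[:31] == seq[31:])            # True: exactly repeatable, period 31
    print(sum(seq[:31]))                   # 16 ones, 15 zeros: noise-like balance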
It's a funny business, recognizing a fundamental purpose. The whole thing gets even messier when we ask a question like: if we observe a sequence of primes being generated by a pulsar, what generated them? Did you need a whole civilization to grow up, discover primes, make computers, make radio transmitters, and do this? Or is there another explanation: some physical process just makes primes? That physical process may have all kinds of weird things going on inside it.
There's a little cellular automaton I made up once that makes primes. You can see how it works if you take it apart: it just has a little thing bouncing around inside it, and out comes the sequence of primes. It didn't need the whole history of civilization and biology and so on to get to that point.
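That automaton lives in a different formalism, but the moral fits in a few lines of Python: a blind mechanical rule, with no history behind it, out of which the primes come. This sketch uses an incremental sieve rather than my actual cellular automaton:

    from itertools import islice

    def primes():
        """Incremental sieve: each prime 'bounces' forward through its multiples."""
        multiples = {}                 # next composite -> primes that land on it
        n = 2
        while True:
            if n not in multiples:
                yield n                # nothing lands on n: it's prime
                multiples[n * n] = [n]
            else:
                for p in multiples.pop(n):
                    multiples.setdefault(n + p, []).append(p)
            n += 1

    print(list(islice(primes(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]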
It's a slippery thing. When you observe something, was it created for a purpose? How do you tell? I don't think there is an abstract sense of purpose, any more than there is an abstract meaning. Otherwise you end up in this weird place where you have to ask whether the universe has a purpose, and then you're doing theology in some way. There is no meaningful abstract notion of purpose. Purpose is something that comes from history.
One thing that might have been true about computation, and about our world, and that would have been disappointing, is this: maybe we go through all this history and biology and civilization and so on, and at the end of the day, the answer is 42 or something. That's the end, so to speak. We got to the answer. You went through four billion years of various kinds of evolution, and you got to 42.
Nothing like that will happen, because of this notion of computational irreducibility, which is related to Gödel's theorem and to universal computation. There are computational processes that you can go through, and that things in nature often go through, where there is no way to shortcut the process. In other words, you can't say you were wasting your time. Much of science has been about shortcutting computation done by nature.
For example, in celestial mechanics, say we want to predict where the planets will be a million years from now. We could just follow the equations and see what happens step by step. But the big achievement, whenever there's a prediction in science, is that we're able to shortcut that: to jump from where we are now and reduce the computation. We were able to be smarter than the universe, to figure out the endpoint without going through all the steps. That's been the story of prediction in science.
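Here's a minimal sketch in Python of the contrast, under simplified assumptions: an idealized circular orbit is reducible, you jump a million steps ahead with a formula; the center column of the rule 30 cellular automaton, so far as anyone knows, is not, and to get step t you run all t steps:

    import numpy as np

    # Reducible: an idealized circular orbit. To know the angle a million
    # steps from now, don't simulate; jump straight there with a formula.
    def orbit_angle(omega, steps, dt=1.0):
        return (omega * steps * dt) % (2 * np.pi)

    # Apparently irreducible: the center column of rule 30. No shortcut is
    # known; to get the value at step t, you actually run t steps.
    def rule30_center(steps):
        width = 2 * steps + 3              # wide enough that the edges never matter
        row = np.zeros(width, dtype=np.uint8)
        row[width // 2] = 1                # single black cell
        column = []
        for _ in range(steps):
            column.append(int(row[width // 2]))
            left, right = np.roll(row, 1), np.roll(row, -1)
            row = left ^ (row | right)     # rule 30: new = left XOR (center OR right)
        return column

    print(orbit_angle(omega=1e-3, steps=10**6))  # instant
    print(rule30_center(20))                     # [1, 1, 0, 1, 1, 1, ...], step by step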
It's bad news for science, but good news for us having meaningful lives, so to speak, that this doesn't always work: there isn't a way, with a smart enough machine and smart enough mathematics, to always just jump ahead and get to the endpoint without going through the steps. We have to irreducibly follow through those steps. In a sense, that's why history means something. If we could get to the endpoint without going through the steps, history would be, in some sense, pointless.
So the fact that we can't make these predictions is bad for science, but good for the meaningfulness of the history of civilization: the details are irreducible. And once one realizes that all sorts of things can have attributes like intelligence, one realizes that what has to be special about us is all of the details about us. It's not going to be some big feature.
It's not going to be the case, as I once thought, that there's us, which is intelligent, and there's everything else in the world, which isn't. There isn't some big abstract difference between us and the clouds and the cellular automata. We can't say this brain-like neural network is just qualitatively different from that cellular automaton. Rather, the difference is in the details: this brain-like thing was produced by the long history of civilization, et cetera, whereas that cellular automaton was created by my computer in the last microsecond.
The problem of abstract AI is very similar to the problem of extraterrestrial intelligence: it's the recognition of when a thing has a purpose, when a thing is intelligent. Again, these are questions I don't consider answered. One of the great questions in science is, of course, why we have not found any extraterrestrials. How can we possibly be this unique? Maybe that's a silly question, because maybe there's intelligence all over the universe, and we then have to ask how close it is to ours. Does it have RNA? Did it invent a notion of democracy or something?
That's the trouble with a lot of these attributes, with what we reach for when we try to break it down and say, "AI will be intelligent if it can do blah-blah-blah: if it can find primes, if it can produce this, that, and the other." There are many other ways to get to those results. That's a consequence of the fact that there just isn't a bright line between intelligence and mere computation.
It's another part of the Copernican story, so to speak. We used to think Earth was the center of the universe. Now at least we think we're special because we have intelligence and nothing else does. I'm afraid the bad news is that that isn't a distinction. By the way, that lack of a distinction is pretty critical for thinking about the future of the human condition.
Here's one of the scenarios I'm curious about. Let's say there's a time when human consciousness is readily uploadable into digital form, virtualized and so on, and pretty soon we have a box of a trillion souls, all virtualized. We look at this box. Inside, there will be, one hopes, nice molecular computing, maybe derived from biology in some sense, maybe not; there will be all kinds of molecules doing things, electrons doing things. The box is doing all kinds of elaborate stuff.
Then we look at the rock sitting next to the box. Inside the rock, there's also all kinds of elaborate stuff going on, all kinds of electrons doing all kinds of things. So what's the difference between the rock and the box of a trillion souls? The answer is that the box of a trillion souls has this long history. The details of what's happening in it were derived from the history of our civilization, from people watching videos made in 2015 or whatever. The rock came from its geological history, but not from the particular history of our civilization.
Once you realize there isn't a distinction between intelligence and mere computation, you can imagine the future of civilization ending up as that box of a trillion souls. And then what is the purpose of that? From our current point of view, that scenario looks like every soul playing video games, basically forever. What's the endpoint of that?