MINING THE COMPUTATIONAL UNIVERSE
STEPHEN WOLFRAM: I thought I would talk about my current thinking about computation and our interaction with it. The first question is, how common is computation? People have the general view that to make something do computation requires a lot of effort, and you have to build microprocessors and things like this. One of the things that I discovered a long time ago is that it’s very easy to get sophisticated computation.
I’ve studied cellular automata, Turing machines, and other kinds of things—as soon as you have a system whose behavior is not obviously simple, you end up getting something that is as sophisticated computationally as anything can be. That is not an obvious fact. I call it the principle of computational equivalence. At some level, it’s a thing for which one can accumulate progressive evidence. You just start looking at very simple systems, whether they’re cellular automata or Turing machines, and you ask, "Does this system do sophisticated computation or not?" The surprising discovery is that as soon as what it’s doing is not something you can obviously decode, one can see, in particular cases at least, that it is capable of computation as sophisticated as anything—in particular, that it’s a universal computer.
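As a concrete illustration of how a simple rule can produce behavior that isn't obviously decodable, here is a minimal sketch of an elementary cellular automaton in Python. The grid width, step count, and the choice of rule 30 for the demo are arbitrary; rule 110 is the one that has been proved to be a universal computer.

```python
# Minimal elementary cellular automaton: each cell's next state depends
# only on itself and its two neighbors, via an 8-entry lookup table.

def step(cells, rule):
    """Apply one update of an elementary CA rule to a row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right      # neighborhood as a number 0..7
        out.append((rule >> index) & 1)                  # look up that bit of the rule number
    return out

def run(rule=30, width=79, steps=40):
    """Print the evolution starting from a single black cell in the middle."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run(rule=30)    # rule 30: trivially simple definition, seemingly random behavior
    # run(rule=110) # rule 110: proved to be a universal computer
```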
What that implies is that sophisticated computation is all around us. It’s not something that only we humans produce through sophisticated engineering. It’s something that happens in nature, something that happens in simple mathematical systems. There is one level of sophisticated computation—the Turing level—that we see in all these different kinds of systems. Whether physics and the fundamental rules of the universe operate in a way that goes beyond that, we don’t yet know. I happen to think they don’t. Many physicists believe they do. That’s still an unresolved question.
You have sophisticated computation happening everywhere. What can you do with this sophisticated computation? When we use computation today as human engineers, for example, we end up saying, "This is the thing I’m trying to achieve. Let me write a program step by step, so that I can foresee what’s going to happen, and progressively build it up."
The thing I’ve been interested in for a long time is mining the computational universe of possible programs to find the ones that are useful for particular purposes. It’s quite a humbling thing as a human, because you find these things out in the computational universe that you can tell do very sophisticated things, but as a human it’s hard to understand what they do. So you're stuck looking at one of them and saying, "That’s really clever," but it’s just a little simple rule that one found by searching a wide space of possibilities.
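A minimal sketch of what such a search can look like in practice, with an invented "looks random" score standing in for a real measure of sophistication: enumerate all 256 elementary cellular automaton rules and keep the ones whose center column is balanced and aperiodic. This is only meant to convey the flavor of mining the computational universe, not Wolfram's actual methodology.

```python
def step(cells, rule):
    """One update of an elementary CA rule over a row of 0/1 cells."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def center_column(rule, width=203, steps=100):
    """The sequence of values taken by the center cell, starting from a single 1."""
    cells = [0] * width
    cells[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(cells[width // 2])
        cells = step(cells, rule)
    return col

def looks_random(bits):
    # Crude invented test: roughly balanced 0s/1s and no short period.
    # A stand-in for "does something sophisticated", not a real randomness test.
    balanced = 0.4 < sum(bits) / len(bits) < 0.6
    aperiodic = all(bits[k:] != bits[:-k] for k in range(1, 17))
    return balanced and aperiodic

if __name__ == "__main__":
    candidates = [r for r in range(256) if looks_random(center_column(r))]
    print("rules whose center column passes the crude filter:", candidates)
```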
My view is that computation occurs all over the place; it occurs in lots of systems in nature. We’ve got this amazing source of sophisticated processes. How do we relate those to the things we humans care about? The challenge—and you see it when searching the computational universe for useful programs—is that you’ve got to define what you want, and then you can go out and get that thing done by some appropriate program from the computational universe.
Given this ocean of computational capability out there, how do we connect what’s possible with what we humans want to do? That question has led me to spend about three and a half decades trying to create computational languages that can express the things we humans want to do, and then have those things carried out using what is possible in this computational universe.
It’s easy to achieve sophisticated computation. The challenge is to pick the computation that turns out to be useful for some human purpose. What’s going to be useful for some human purpose? Well, it depends on what we want to do. People wonder what AI is going to automate in the world. One of the things that almost by definition is not automatable is the answer to "What do we want to do?" The doing of things may be automatable, but deciding what we want to do is something that almost by definition depends on who is doing the deciding, and on that human having come out of some long history of civilization.
I’ve been interested in how we define the set of things that we want to do, and how we think about the kinds of abstractions that it’s worthwhile to define. In human language, for example, we come up with particular kinds of abstractions that are based on things that are common in our world. It's somewhat circular, because the abstractions that we come up with then define what we choose to build in our world, which then allows us to go on and create more levels of abstraction. This phenomenon of taking a set of things you want to do, building abstractions from them, and then going to more levels beyond that is something that plays out in the design of computational languages. I’ve watched that play out a bunch of times.
How do we think about the progressive levels of abstraction that we use to talk about things? One application of that question is to education. How much stuff is there to know in the world? It could be the case that as we accumulate more knowledge, there’s just always more and more to know, and humans become incapable of learning it. That’s not actually what happens, because after a while all the details of something get abstracted away; all we have to talk about is the abstraction, and then we build from that. So it’s a question of what this frontier of abstraction looks like, and what that then means for what we choose to build in technology, for example—which is defined by what we think is worth doing and what we imagine we want to do.
We're at an interesting moment in terms of how information gets communicated. Human language, for example, takes thoughts in our brains and tries to make some simplified symbolic representation of those thoughts that can then be communicated to another brain, which will unpack them and do something with them. With computational language we have a more direct way of communicating: once we have something represented in computational language, we can immediately run it. We don’t have to interpret it in another brain.
I’ve been interested in the question of what features of civilization get enabled by computational language. By analogy, what features of the world got enabled by human language? The fact that it’s possible to pass on abstract ideas from one generation to another is presumably a consequence of the existence of human language. That’s the way we communicate abstract ideas.
If one can communicate in computational language, what consequences does that have? For instance, I’ve been quite involved in the whole business of computational contracts. When people make contracts with each other right now, they write those contracts in some approximation of human language, some legalese or something, which is an attempt to make a precise representation of what you want to have happen and what you’re defining should be the case. If one can make a computational language that can represent things in the world richly enough to be able to talk about the kinds of things that are in contracts, and we can now do that, then you have a different story about how you can create things like contracts. One place where that’s relevant is if you’re interested in telling your AIs how you want them to act. What you end up with is something like a computational contract with the AIs. You have to write a constitution for your AIs, which will have all of the messiness of human laws.
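As a toy illustration of the idea, here is a hypothetical fragment of a computational contract: a delivery agreement whose late-delivery clause is directly executable rather than written in legalese. The class name, terms, and numbers are invented for this sketch.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeliveryContract:
    """Toy computational contract: a price, a deadline, and a penalty clause."""
    price: float            # agreed price
    deadline: date          # promised delivery date
    penalty_per_day: float  # fraction of the price deducted per day late
    max_penalty: float      # cap on the total deduction, as a fraction of the price

    def amount_due(self, delivered_on: date) -> float:
        """Executable clause: payment owed given the actual delivery date."""
        days_late = max(0, (delivered_on - self.deadline).days)
        penalty = min(self.max_penalty, days_late * self.penalty_per_day)
        return round(self.price * (1 - penalty), 2)

if __name__ == "__main__":
    contract = DeliveryContract(price=1000.0, deadline=date(2025, 6, 1),
                                penalty_per_day=0.02, max_penalty=0.3)
    print(contract.amount_due(date(2025, 5, 30)))  # delivered on time -> 1000.0
    print(contract.amount_due(date(2025, 6, 11)))  # ten days late -> 800.0
```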
It’s an inevitable consequence of the principle of computational equivalence and of computational irreducibility that if you want any kind of richness in the activities of these devices, you’ll never be able to just have some simple Asimov-like laws of robotics. It will always be the case that there will be unexpected consequences, things that you have to patch, and things where you can’t know what will happen without explicitly running the system.
One place where this computational language idea seems to be important is in defining the goals that we want to set up for AIs.
* * * *
ROBERT AXELROD: What do you mean by a constitution?
WOLFRAM: That’s a good question. It’s a difficult thing to imagine working in a serious way. If you’re running your central bank using an AI, for example, the question is, what are the general set of guidelines that you want to put in place for what you want this AI to do? These are obviously old questions of political philosophy, which don’t have definitive answers. For the time being, it depends on what the humans want.
I was curious about Ian’s discussion of the more perfect ethics of his constructed consciousness. Where do those perfect ethics come from? While we might be able to say that we can find an optimal solution to a given mathematical problem, there is no meaningful sense in which there is an ultimate ethic or ultimate goal. In other words, we can only say that given that you want to do this or that thing, there’s an optimal way to achieve it.
If we look at the evolution of human purposes over the course of history, there’s a question of how that’s worked and what the end point of the evolution of human purposes might be. It relates to this question about progressive abstraction, because the kinds of purposes that we now define for ourselves are completely bizarre from the point of view of what they might have been 1,000 years ago.
AXELROD: Why do you use the term "end point"? I would think there isn’t necessarily an end point.
WOLFRAM: No, I don’t think there’s an end point. It’s an endless frontier. There are many related kinds of questions. For example, let’s say you’re doing mathematics. Is there an end to mathematics? Well, no, not really. You can keep adding more theorems and so on. The question is, is there an end to interest in mathematics? In other words, is there a point at which all the interesting theorems, the ones that we humans might care about, have been found and everything else is just stuff that for whatever reason we humans don’t care about?
That, again, relates to this question about abstraction. If you look at the history of mathematics, there’s a considerable degree of arbitrariness to what’s happened, but one thing that isn’t arbitrary is that there’s one piece of abstraction that gets built, and that’s a stepping-stone to allow you to get to another piece of abstraction.
Have all the interesting inventions already been made or are there going to be other interesting inventions to be made in the future? This question of what counts as interesting, what do we care about, again, is a complicated circular thing. Social networks are something that we might not have imagined would exist, but they do exist now, and there are all kinds of things built on top of them that are another layer of abstraction.
AXELROD: No, but it’s not completely circular. For example, evolution gives us a reason for wanting good health.
WOLFRAM: The kind of existential purpose—"If you don’t exist, you don’t get to have a purpose"—that’s the one thing that is certainly there. But in the course of history people have certainly had times when they said the most important thing is to die well, for example, which doesn’t happen to be the typical modern point of view.
If you’re building a self-driving car, you want to tell it roughly how to think about the world, so what do you do? People have these naïve ideas that there’s going to be a mathematical theorem-like solution to that—like laws of robotics, or something. It’s not going to work. It can’t work.
ALISON GOPNIK: There is not something existential about the things that we want. If we want relative equality in making decisions about how you grant mortgages, for example, it’s computationally not possible to have all the things we think are important about fairness implemented by the same system. There are inevitable tradeoffs between one kind of fairness that we all have very strong intuitions is important and another kind of fairness that we all have strong intuitions is important.
There’s lovely formal work showing that it’s not just that we don’t know what we want; even if we know clearly, and have strong intuitions about what we want, you can’t get a single system that’s going to optimize for all of it. In a way, it's a formal proof of the Isaiah Berlin picture of a kind of tragic moral pluralism, in which it’s impossible to optimize all the things that you genuinely think are morally significant.
WOLFRAM: One of the things that I find a lot of fun about the current time is that in the beginning it’s philosophy and in the end it’s code. That is, at some point these things that start off as philosophical discussions end up as somebody writing a piece of code.
FRANK WILCZEK: Not necessarily. With a neural net, you don’t write code for it.
WOLFRAM: You effectively write code. Whether you’re explicitly writing line-by-line code or merely defining the goals that you want to achieve and then having the machine automatically figure out how to achieve those goals—either way you’re defining something. The role of computational language is to be able to convert how we think about things into something that is computationally understandable.
WILCZEK: That’s a very broad use of the word code. It’s like saying you can code a baby.
WOLFRAM: No. By code I mean you put in concrete form a definite symbolic representation of what you want. It’s not a vague discussion about argumentation between philosophers.
WILCZEK: It doesn’t have to be that way. You can have a sophisticated artificial intelligence. You could just talk to it and tell it what to do.
CAROLINE JONES: Going back to what Alison was saying, isn’t our intuitive conception of ethics how you get there? Telling a neural net vaguely to go in this direction may not address all the moral pluralisms of how it gets there. "Lower population"—this would be a general direction. "The earth will be better if you lower human population." How it gets there is the entire ethical question.
WOLFRAM: Right, but that’s why one talks about needing constitutions, because you’re trying to define what happens at every step.
W. DANIEL HILLIS: You made the point that even being careful about it is not sufficient. What you have to recognize is that this notion of things acting according to the goals you would want them to have is an oversimplification. It’s a way that we model other people. It’s a way that we model ourselves. And in fact, it’s not a very good model. It's built into the cybernetic perspective on things.
The truth of the matter is that people don’t want a consistent set of things, so by definition there’s no way to get a machine to do it. Aristotle, for a very slight moment, considers the possibility of making intelligent machines. He says, "The problem with tools is that they don’t know what they’re trying to accomplish. One could imagine in principle that you could have a loom that knew what pattern it was trying to weave, or a plow that knew where the field was, but as far as we know, those don’t exist, and so there will always be slaves." And he goes off and writes about slavery. But he at least considers it, and he realizes that the essential thing you have to have is a goal.
WOLFRAM: Nature is an example of computation without goals. Consider one of those seemingly anti-scientific statements like "The weather has a mind of its own." According to the science I’ve done around this principle of computational equivalence, it is, in any reasonable sense, the case that the weather is doing the same kinds of computations as our brains.
NEIL GERSHENFELD: Nature has extremal principles; it doesn’t have goals.
WOLFRAM: Any kind of thing we see happening in the world, we can explain in terms of its purpose or its mechanism. You can say the trajectory of a thrown ball is a parabola because at every moment it’s following the equations of motion for the ball. Or you can say that there’s a principle of least action which says that the overall path is this parabola. Almost anything you come up with, you’ll be able to explain in terms of its mechanism or in terms of its purpose. Which explanation you choose to call the right one is often a question of the economy of explanation. But it’s not the case that there’s a set of things where you’d say, "This one has a purpose; that one just has a mechanism."
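A minimal numerical sketch of the two explanations for the thrown ball, under assumed values for gravity, launch speed, and flight time: the "mechanism" view integrates the equation of motion step by step, while the "purpose" view searches a family of candidate paths with the same endpoints for the one that minimizes the action. Both single out the same parabola.

```python
import math

g, T, y0, v0 = 9.81, 2.0, 0.0, 9.81   # gravity, flight time, launch height, launch speed
N = 2000                              # time steps
dt = T / N

# Mechanism: integrate y'' = -g step by step (semi-implicit Euler).
y, v = y0, v0
for _ in range(N):
    v -= g * dt
    y += v * dt
print("height at t=T from stepwise mechanics:", round(y, 3))                 # ~0, up to discretization error
print("height at t=T from the parabola formula:", round(y0 + v0*T - 0.5*g*T**2, 3))

# Purpose: among paths y_eps(t) = parabola(t) + eps*sin(pi*t/T), which all share
# the same endpoints, compute the action S = integral of (kinetic - potential)
# energy, and find which eps minimizes it.
def action(eps, m=1.0):
    S = 0.0
    for k in range(N):
        t = (k + 0.5) * dt
        ydot = v0 - g*t + eps * (math.pi / T) * math.cos(math.pi * t / T)
        ypos = y0 + v0*t - 0.5*g*t*t + eps * math.sin(math.pi * t / T)
        S += (0.5*m*ydot**2 - m*g*ypos) * dt
    return S

best_eps = min((e / 100 for e in range(-100, 101)), key=action)
print("deviation eps that minimizes the action:", best_eps)   # ~0: the parabola wins
```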
PETER GALISON: The whole premise of moral philosophy is that there are these contradictions. We don’t live in the Panglossian world where fairness and equality and meritocratic adjustment are all compatible with one another. When we talk about the goals or ambitions of epistemic virtues for the sciences, we act as if they’re all compatible, but that is often not the case. That is to say, robustness, precision, accuracy, understandability, portability, or pedagogical utility—all these things we think should pull in the same direction often don’t.
One of the things that we need to do is to recognize that there’s the same level of sophisticated tradeoffs or decisions that we have to make in what we want from the sciences as we have in the moral sciences.
GERSHENFELD: One of the most interesting bits at the core of machine learning is something called the "no free lunch theorems." In machine learning, the no free lunch theorems are a very precise way of saying that something that’s optimal for one task is bad at another. You can show that you can’t be good at everything, so you have to choose.
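A minimal sketch of the flavor of these theorems, in the spirit of Wolpert's formulation as I understand it: if every possible labeling of the inputs is taken to be equally likely, any learner averages exactly chance accuracy on the points it has not seen, however well it does on particular targets. The two toy learners below are invented for illustration.

```python
from itertools import product

X = list(range(6))     # six inputs
train_X = X[:4]        # the learner sees labels for these
test_X = X[4:]         # ...and is judged on these

def learner_majority(train_labels):
    """Toy learner A: predict the majority training label everywhere."""
    guess = 1 if sum(train_labels) * 2 >= len(train_labels) else 0
    return lambda x: guess

def learner_parity(train_labels):
    """Toy learner B: predict the parity of x, flipped by the first training label."""
    return lambda x: (x % 2) ^ train_labels[0]

def off_training_accuracy(make_learner):
    """Average accuracy on the unseen points, over every possible target function."""
    total, count = 0.0, 0
    for labels in product([0, 1], repeat=len(X)):   # every possible "world"
        target = dict(zip(X, labels))
        predict = make_learner([target[x] for x in train_X])
        hits = sum(predict(x) == target[x] for x in test_X)
        total += hits / len(test_X)
        count += 1
    return total / count

print("learner A, averaged over all 64 worlds:", off_training_accuracy(learner_majority))
print("learner B, averaged over all 64 worlds:", off_training_accuracy(learner_parity))
# Both print 0.5: averaged over all possible targets, no learner beats any other off
# the training set -- being good on some worlds means being bad on others.
```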
GALISON: In the late 19th century there was a big debate about purpose and mechanism. There was a whole group of German scientists who began to talk about what you might call teleomechanism. They were very explicit about the fact that nature had goals and it was mechanistic. There was not a contradiction in recognizing this free choice that we have between extremal principles or mechanistic descriptions. They saw that as important to consider together. It’s interesting.
WOLFRAM: That’s interesting. You should tell me who those people were.
GALISON: There’s a book by Timothy Lenoir called The Strategy of Life: Teleology and Mechanics in Nineteenth-Century German Biology.
GERSHENFELD: The principle of least action was religious. It was a fight.
GALISON: At the time of Maupertuis, yes.
GERSHENFELD: It wasn't just alternate schools. It was a real religious battle.
SETH LLOYD: Alison, what are these results that you’re talking about, about showing that these systems can’t supply all the principles? Are these like Arrow's impossibility theorems for voting?
GOPNIK: They’ve got a very similar structure. Cynthia Dwork is one of the people who has done a lot of work on this, particularly along the lines of thinking about inequalities and fairness. Do you want fairness between groups? Do you want fairness among individuals? They have the same kind of structure as the Arrow theorems, where you literally can’t maximize all of those ends at the same time.
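A toy numerical illustration of this kind of tension, not Dwork's actual theorem: with invented populations in which two groups have different base rates of being qualified, no decision rule can treat every individual perfectly while also accepting both groups at the same rate.

```python
# Two equal-sized groups with different base rates of qualification (invented numbers).
groups = {"A": {"size": 100, "qualified": 60},
          "B": {"size": 100, "qualified": 30}}

def best_accuracy_at_rate(size, qualified, rate):
    """Highest accuracy achievable in one group if exactly `rate` of it is accepted
    (accept qualified people first, then fill any remaining quota with unqualified)."""
    accepted = round(rate * size)
    true_pos = min(accepted, qualified)
    false_pos = accepted - true_pos
    false_neg = qualified - true_pos
    return (size - false_pos - false_neg) / size

def parity_accuracy(rate):
    """Average accuracy across both groups if each accepts the same fraction `rate`."""
    return sum(best_accuracy_at_rate(g["size"], g["qualified"], rate)
               for g in groups.values()) / len(groups)

# Perfectly accurate individual treatment: accept exactly the qualified people.
rates = {name: g["qualified"] / g["size"] for name, g in groups.items()}
print("perfectly accurate rule -> acceptance rates:", rates)   # 0.6 vs 0.3: no group parity

# Group parity: force a common acceptance rate and see the best achievable accuracy.
best_rate = max(range(101), key=lambda r: parity_accuracy(r / 100)) / 100
print("best common rate:", best_rate,
      "best accuracy under parity:", round(parity_accuracy(best_rate), 3))
# With different base rates, parity caps accuracy below 1.0: some individuals must be
# treated "unfairly" in order to treat the groups "fairly".
```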
To echo what Neil was saying, that’s a general principle. We tend to have an idealist picture of computation. It’s important to recognize that you’re dealing with tradeoffs all the time. That's a very different picture—maybe more like one that comes out of some Enlightenment traditions in philosophy than out of others.
WOLFRAM: It’s a sad fact about the axiomatization of almost anything that people start feeding in all these axioms, saying, "It had better be true that this happens and that happens." In quantum field theory, for example, there were such axioms, and then it turned out that the only quantum field theory consistent with all of them was a free quantum field theory—in other words, one with no interactions between the particles.
TOM GRIFFITHS: There’s a sense here in which we’re trying to hold machines to a higher standard than we hold ourselves to. Right? This distinction between purpose and mechanism is interesting because we like to think that other people have purposes, but in fact other people mostly have mechanisms. The part of our intuition about moral psychology that’s leading us into problems here is thinking that there is a system that we should be able to formalize and behave in accord with, when in fact none of us do so.
WOLFRAM: There’s a little thought experiment that you might find amusing. How does computation relate to democracy? In current democracy, people just say it’s a multiple-choice thing. You vote for A, B, C or whatever. But imagine a time when people can routinely speak in computational language as well as in human language, and where it’s perfectly possible for somebody to say, "This is what I want to have be the case in the world. I’m going to write this computational essay that is my representation of what I want to be the case in the world." And then imagine that 100 million people take their computational essays and feed them into this big AI that’s going to figure out what policy should be followed. That’s an alternative to the current version of picking from a small number of choices.
It throws you directly into all of the standard issues of political philosophy of what you are trying to achieve. It's a somewhat realistic view of what could happen, because by the time you have a computational language that can talk about things in the real world, it’s perfectly possible for people to represent their preferences in that much richer way.
DAVID CHALMERS: Right here is where you’re going to come up against some of these theorems in social choice theory. If everyone’s just offering a global vision of the world and we pick one, that’s totally unworkable. We’ve got to find some kind of compromise or consideration of components.
So, we break it down into ten separate issues—A, B, C, D, and so on. Then we come up against these results: there’s a majority that prefers A, and a majority that accepts "if A then B," but there’s not a majority that prefers B. You can’t just go with democracy on every component; you suddenly need some system for somehow extrapolating from all these individual preferences. This is precisely where you need to find ways to make the tradeoffs.
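A minimal sketch of the aggregation paradox being described, along the lines of the "discursive dilemma" from judgment-aggregation theory, with three invented voter profiles: each voter is individually consistent, yet majority voting on each component yields an inconsistent collective position.

```python
# Each voter judges three propositions: A, "if A then B", and B.
# Every individual voter below is logically consistent.
voters = [
    {"A": True,  "A->B": True,  "B": True},
    {"A": True,  "A->B": False, "B": False},
    {"A": False, "A->B": True,  "B": False},
]

def majority(prop):
    """True if more than half the voters accept the proposition."""
    return sum(v[prop] for v in voters) > len(voters) / 2

verdict = {p: majority(p) for p in ["A", "A->B", "B"]}
print("component-by-component majorities:", verdict)

# Consistency check: if the group accepts A and accepts "if A then B",
# it should accept B -- but it doesn't.
consistent = (not (verdict["A"] and verdict["A->B"])) or verdict["B"]
print("collective position is logically consistent:", consistent)   # False
```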
This whole thing of turning morality into code is not a new problem, right? The legal code and the political code have been trying to formalize precisely this for centuries, and what do we know? The only way to do it is via a huge mess. So, I predict that once you try to turn it into AI code, it’s going to be a mess as well.
WOLFRAM: I agree. The main conclusion is that it has to be a huge mess.
WILCZEK: Arrow’s theorem ends up with the positive result, which is that the only way to enforce a consistent code is to have a dictator.
LLOYD: That is very positive indeed, Frank. Thank God. Dodged a bullet there.
WILCZEK: The point is that you shouldn’t always try to be too rational. Chomsky had this concept, that I find quite beautiful, of crackpot rationalism. Where rationalism is taking you into things that obviously are bad, you should just back off and let the world do its thing.