Emergences

W. Daniel Hillis [9.4.19]

W. DANIEL HILLIS is an inventor, entrepreneur, and computer scientist, Judge Widney Professor of Engineering and Medicine at USC, and author of The Pattern on the Stone: The Simple Ideas That Make Computers Work.

EMERGENCES

W. DANIEL HILLIS: My perspective is closest to George Dyson's. I liked his introducing himself as being interested in intelligence in the wild. I will copy George in that. That is what I’m interested in, too, but it’s with a perspective that makes it all in the wild. My interest in AI comes from a broader interest in a much more interesting question to which I have no answers (and can barely articulate the question): How do lots of simple things interacting emerge into something more complicated? Then how does that create the next system out of which that happens, and so on?

Consider the phenomenon, for instance, of chemicals organizing themselves into life, or single-cell organisms organizing themselves into multi-cellular organisms, or individual people organizing themselves into a society with language and things like that—I suspect that there’s more of that organization to happen. The AI that I’m interested in is a higher level of that and, like George, I suspect that not only will it happen, but it probably already is happening, and we’re going to have a lot of trouble perceiving it as it happens. We have trouble perceiving it because of this notion, which Ian McEwan so beautifully described, of the Golem being such a compelling idea that we get distracted by it, and we imagine it to be like that. That blinds us to being able to see it as it really is emerging. Not that I think such things are impossible, but I don’t think those are going to be the first to emerge.

There's a pattern in all of those emergences, which is that they start out as analog systems of interaction—chemicals form chains of circular pathways that metabolize stuff from the outside world—and what always happens going up to the next level is that those analog systems invent a digital system, like DNA, where they start to abstract out the information processing. They put the information processing in a separate system of its own. From then on, the interesting story becomes the story of the information processing; the complexity happens more in the information-processing system. That certainly happens again with multi-cellular organisms. The information-processing system is the neurons, which eventually go from being just a bunch of cells to being a special information-processing system, and that's where the action is: in brains and behavior. That system drags the rest along and makes much more complicated bodies, which become much more interesting once you have behavior.

Of course, it makes humans much more interesting when they invent language and can start talking, but that's a way of externalizing the information processing. Writing is our form of DNA for culture, in some sense; it's a digital form that we invent for encoding knowledge. Then we start building machinery and systems to do the information processing, everything from legal systems to communication systems to computers. I see that as a repeating pattern. I wish I could say it more precisely, but you all know what I'm talking about when I wave my hands in that direction. Somebody will someday make wonderful progress in finding a way of talking about that more precisely.

There's a worry that somehow artificial intelligence will become superpowerful and develop goals of its own that aren't the same as ours. One thing that I'd like to convince you of is that this is starting to happen already. We do have intelligences that are superpowerful in some senses, not in every way, but in some dimensions they are much more powerful than we are, and in other dimensions much weaker. The interesting thing about them is that they are already developing emergent goals of their own that are not necessarily well aligned with our goals: with the goals of the people who created them, the people they influence, the people who feed and sustain them, or the people who own them.

Those early intelligences are probably not conscious. It may be that there's one lurking inside Google or something; I can't perceive that. Corporations are examples. Nation states are examples. Corporations are artificial bodies. That's what the word means. They're artificial entities that are constructed to serve us, but in fact they don't end up serving exactly the founders, or the shareholders, or the employees, or their customers. They have a life of their own. In fact, none of the constituents have control over them, and there's a very fundamental reason why they don't: Ashby's Law of Requisite Variety, which states that in order to control something, a controller must have at least as many states as the thing it is controlling. Therefore, these supercomplicated superintelligences are, by definition, not controllable by individuals.
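A minimal sketch of Ashby's point, in Python (the function, names, and numbers are invented for illustration, not anything Hillis presented): a regulator whose repertoire has fewer distinct responses than the disturbances it faces cannot, even in principle, counter all of them.

```python
# Toy illustration of Ashby's Law of Requisite Variety (hypothetical model).
# Each kind of disturbance is neutralized only by its own matching response,
# so a regulator with fewer states than disturbances cannot cover them all.
import random

def regulation_rate(num_disturbance_states: int, num_regulator_states: int,
                    trials: int = 10_000) -> float:
    """Fraction of disturbances the regulator can neutralize, at best."""
    repertoire = set(range(num_regulator_states))   # the regulator's available responses
    hits = 0
    for _ in range(trials):
        disturbance = random.randrange(num_disturbance_states)
        if disturbance in repertoire:               # even a perfect regulator needs a matching state
            hits += 1
    return hits / trials

if __name__ == "__main__":
    # A 4-state controller facing 10 kinds of disturbance: at most ~40% control.
    print(regulation_rate(num_disturbance_states=10, num_regulator_states=4))
    # Only with at least 10 states can it, in principle, control everything.
    print(regulation_rate(num_disturbance_states=10, num_regulator_states=10))
```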

Certainly, you might imagine that the head of Google gets to decide what Google does, especially if they're a founder of Google, but when you talk to heads of state or people in such positions, they constantly express frustration that people imagine they can solve this problem. Of course, shareholders try to influence and do influence corporations, but their influence is limited.

One of the interesting things about the emergence of their own goals is that the emergent goals often successfully treat those influences as sources of noise, or something like that. For example, before information technology, corporations couldn't get very big because they just couldn't hold together.

BROOKS: What about the East India Company?

AXELROD: Or China.

HILLIS: I would say that the East India Company did not hold together as effectively as an entity and stay coordinated. Such companies could be big, but I don't think they were as tightly coupled.

Information technology certainly made it much easier. I won't quibble with you over whether those were edge cases, but you could have skyscrapers full of people who did nothing but hold the corporation together by calling up other people in the corporation.

These things are hybrids of technology and people. As they transitioned to a point where more decisions were being made by the technology, one thing the technology could do was prevent the people from breaking the rules. It used to be that an individual employee could just decide not to apply the company policy because it didn't make sense, or it wasn't kind, or something like that. That's getting harder and harder to do, because more of the machines have the policy coded into them, and the people literally can't solve your problem even if they want to.

We've gotten to the point where we do have these superpowerful things that have big influences on our lives, and they're interacting with each other. Facebook is a great example. There's an emergent property of Facebook enabling conspiracy-theory groups. It wasn't that Zuckerberg decided to do that or anybody at Facebook decided to do that; it emerged out of their business model. Then that had an impact on this other emergent thing—the government—which was designed for dealing with people, not corporations. But in fact, corporations have learned to hack it. They've learned that they can use their superhuman ability to track details to do things like lobbying and following bills going through Congress in ways that no individual can. They can influence government in ways that individuals can't. More and more, government is responding to the pressures of corporations more successfully than to the pressures of people, because the corporations are superhuman in their ability to do that, even though they may be very dumb in some other ways.

One of their successes is their ability to gather resources: to get their food from the outside world, for example. They have been extremely successful at gathering resources to themselves, which gives them more power. There's a positive feedback loop there, which lets them invest in quantum computers and AI, which presumably makes them richer and more capable.

We may be already in a world where we have this runaway situation, which is not necessarily aligned with our individual human goals. People are perceiving aspects of it, but I don’t think what’s happening is widely perceived. What’s happening is that we have these emergent intelligences. When I hear people do this hypothetical handwringing about these superintelligent AIs that are going to take over the world, well, that might happen some time in the future, but we have a real example now.

Why don't we just figure out how to control those, rather than thinking hypothetically about how we ought to design the laws of robotics into these hypothetical general-AI, human-like things? Let's think about how we can design those laws into corporations or something like that. That ought to be an easier job. If we could do that, we ought to be able to apply it right now.

* * * *

ROBERT AXELROD: An example of that is, what rights do they have? The Supreme Court recently said they had the right to free speech, which means they can contribute to political campaigns.

ALISON GOPNIK: David Runciman, who is a historian at Cambridge, has made exactly this argument about corporations and nation states. He's also made the argument—which I think is quite convincing—that this dates from the origin of corporations and nation states, from industrialization; that's when you start getting these agents.

Then there are some questions you could ask about whether you had analogous superindividual agents early on. Maybe just having a forager community is already having a superintelligence, compared to the individual members of the community. It's fairly clear that that kind of increased social complexity is deeply related to some of the things that we more typically think of as intelligence. We have a historical example of those things appearing and changing the way that human beings function in important and significant ways.

For what it's worth, at the same time, the data show that individual human goals got much better on average. You could certainly argue that there were things that happened with industrialization that set that back.

AXELROD: What do you mean goals got better?

GOPNIK: Well, people got healthier.

AXELROD: They achieved their goals.

GOPNIK: Yes, exactly. They stopped having accidents. They stopped being struck by lightning. Someone like Hans Rosling has long lists like that. We do have a historical example of these superhuman intelligences happening, and it could have been that people thought the effect was going to be that individual goals would be frustrated. If you were trying to graze your sheep on the commons, then you weren't better off as a result, but it certainly doesn't seem as though there's any principle that says the goals of the corporations had to end up misaligned.

W. DANIEL HILLIS: It’s a matter of power balance. Certainly, humans aren’t powerless to influence those goals. We may be moving toward tipping the balance, because a lot of technological things have helped enable the power of these very large corporations to coordinate, and act, and gather resources to themselves more than they’ve enabled the power of individuals to influence them.

RODNEY BROOKS: Back to the East India Company: I realized when I said that that in fact the East India Company did develop an information technology. It came through the education system: elementary schools producing people able to write uniformly and do calculations, arithmetic. Writing was the information technology that made individual clerks substitutable across their whole operation.

HILLIS: The East India Company did some pretty inhuman things.

NEIL GERSHENFELD: Al Gore said he viewed the Constitution as a program written for a distributed computer. It's a really interesting comment: if you take what you're saying seriously, you have to think about what the programming language is.

STEPHEN WOLFRAM: It's legalese. The programming language is legalese.

CAROLINE JONES: The algorithms of homophily are a huge part of the problem. The echo chamber that magnifies small differences so you get conspiracy theories—the schizophrenic model is hyperconnectivity, where everything connects to the conspiracy-theoretical model. Homophily, as I learned from Wendy Chun, is at the core of the programming language: like begets like. That's as distinguished from the parallel finding in the '50s that birds of a feather don't flock together, that difference attracts. These were two models in the '50s that were at the core of this game-theoretical algorithmic thinking, and everyone went with like begets like, which produces the echo chamber.
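A toy simulation, and not anyone's actual feed algorithm, of the "like begets like" dynamic Jones describes: agents that keep rewiring toward similar neighbors end up surrounded by near-copies of themselves (all parameters here are arbitrary).

```python
# Minimal homophily sketch: repeatedly swap each agent's most dissimilar neighbor
# for a more similar one, and measure how much opinion diversity remains nearby.
import random

random.seed(0)
N_AGENTS, N_NEIGHBORS, STEPS = 100, 5, 2000
opinions = [random.random() for _ in range(N_AGENTS)]                        # scalar "views"
neighbors = [random.sample(range(N_AGENTS), N_NEIGHBORS) for _ in range(N_AGENTS)]

def local_diversity(i: int) -> float:
    """Mean opinion gap between agent i and its neighbors (0 = pure echo chamber)."""
    return sum(abs(opinions[i] - opinions[j]) for j in neighbors[i]) / N_NEIGHBORS

print("before rewiring:", sum(map(local_diversity, range(N_AGENTS))) / N_AGENTS)

for _ in range(STEPS):
    i = random.randrange(N_AGENTS)
    worst = max(neighbors[i], key=lambda j: abs(opinions[i] - opinions[j]))
    candidate = random.randrange(N_AGENTS)
    # Homophilic rewiring: drop the most dissimilar tie for a more similar stranger.
    if candidate != i and abs(opinions[i] - opinions[candidate]) < abs(opinions[i] - opinions[worst]):
        neighbors[i][neighbors[i].index(worst)] = candidate

print("after rewiring: ", sum(map(local_diversity, range(N_AGENTS))) / N_AGENTS)
# The average neighbor gap falls well below the ~0.33 expected for random ties.
```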

The first question is about hybridity. The DNA model has been radically complicated by translocation, so it's not the case that there are perfect clones. You mentioned nine out of ten E. coli, but there's the one tenth, which carries different information, like the chimeric genes I have floating around in me from my son, from when he was in my amniotic fluid, whatever. There's translocation going on all the time.

In other words, do we have a resource there in this ongoing hybridization of the program? Do we have a point of inflection? To Bob's rights comment, we are also giving rights (not "we," but the Bolivian constitution) to the ocean, to a tree, to cetaceans. So, can this dialogue with other life forms, with other sentiences, somehow break the horrifying picture of the corporate superintelligence? Are there other translocatable informational streams that can be magnified, or can the algorithms be switched to proliferate differences and dialogue and external influences rather than the continuous proliferation of the selfsame?

HILLIS: I don’t think it’s necessarily horrifying, because I don’t think we have no influence over this. I agree that this has been going on for a long time.

JONES: But we do have the model of a government being put in place by algorithms that we no longer control demographically. We have an actual case.

HILLIS: The trend is very much in the direction of the next level of organization, which is corporations, nation states, and things like that taking advantage of these effects, like symbiosis.

WOLFRAM: That’s called strategic partnerships.

HILLIS: Exactly. Yes, it is. And the acquisition of genetic material is done by corporate acquisition. They have lots of ways of taking advantage of hybridization that are better than what individuals have. In fact, as you point out, the way the technology has played out has in many ways hurt individual interactions; it's helped them in some ways.

It's been a mixed bag, but it has definitely enabled the corporations, because corporations before were limited just by the logistics of scale. They became more and more inefficient except in very special cases; they couldn't hold together as they got bigger. Technology has given them the power to hold together and act effectively at bigger and bigger scales, which is why in the last year we've gotten the first two trillion-dollar companies: they were designed from the beginning to take good advantage of technology.

PETER GALISON: Do you think that there's a characteristic difference between the kind of research that goes on under the corporate umbrella and, say, the university umbrella? I know people have lots of views about this, and there are things you can do in one that you can't do in the other, but how would you characterize the difference, particularly in areas of AI-related work?

HILLIS: Corporations are much more rationally self-interested in how they focus their research.

AXELROD: You mean they’re allocating resources more efficiently? They’re more effective at promoting promising research areas? Is that what you’re suggesting?

HILLIS: They select research areas that are in alignment with their emergent goals.

BROOKS: Yes, but they’re doing an additional thing now, which is very interesting. They’re taking the cream from the universities, offering them very open intellectual positions as a way of attracting the level below who will be more steerable to what they do. So, Google and Facebook are both doing this in the extreme at the moment. Those particular people will tell you what great freedom they have.

HILLIS: I’d say that’s a great example of them being very smart and effective at channeling the energy toward their emergent goals.

WOLFRAM: As you look at the emergent goals of corporations, it’s difficult to map how the goals of humans have evolved over the years, but I’m curious as to whether you can say anything about what you think the trend of emergent goals in corporations is. That is, if you talk about human goals, you can say something about how human goals have evolved over the last few thousand years. Some goals have remained the same. Some goals have changed.

AXELROD: I'll try my hand at it. When you get two corporations in the same niche that are competitive, they often become uncompetitive. If one of them is substantially bigger, it might try to destroy or gobble up the other one, but otherwise it might try to cooperate with the other one against the interest of the consumer. That's what antitrust is about.

As they get bigger, they also want to control their broader environment, like regulations. A small restaurant is not going to try to control the regulation of restaurants, but if you have a huge chain, then you can try to control the governmental context in which you operate, and you can try to control the consumer side of it, too. Advertising is a simple way to do that. As the corporations get bigger, there's an unfortunate tendency for competition within the industry to go down, and we see this in high tech. It's very extreme.

There are only five huge corporations and they’re doing different things. Apple is doing manufacturing and Amazon is not doing much manufacturing. That’s likely to continue not just in the high-tech areas, but in others. It’s very worrisome that the corporations will get more and more resources to shape their own environment.

At the lower level—at a restaurant or something—you have two goals: make money for your owners and survive. But when you get much bigger it seems to me that often the goals beyond those two are to also control as much of your environment as you can.

WOLFRAM: For the purpose of stability or for further growth.

AXELROD: For both. There's another trend that's correlated with this, which is the concentration of capital. At the individual level, you see that a higher and higher proportion of the wealth of a country is in the top one percent.

HILLIS: That’s a symptom of them getting more powerful.

AXELROD: Maybe. It's a symptom of the returns on capital being greater than the growth of productivity, which doesn't depend so much on the level of organizational structure. So, the corporations are likely to have more and more control over resources, and that's unfortunate. It's a very risky thing.

WOLFRAM: So, it’s virtues and vices of corporations. Do you think the corporations will emerge with the same kinds of virtue and vice type goal structures that are attributed to humans?

GEORGE DYSON: One thing that is very much Danny's work, and that he didn't mention, is that in the world we inherited from the 1940s, the world that brought about the first Macy Conference, the huge competition was in faster computers: to break the code within 24 hours, to design the bombs. These were machines just trying to execute more instructions per second.

But there's another side to it. There's slow computing that in the end holds the survival of the species; that's why the immune system is so good, because of its very long-term memory, and we need that too. We don't just need the speed. Danny, of course, is building the 10,000-year clock, a very slow computer, and that's an important thing, because when you have these larger organizations, these superorganizations you're talking about, they scale not only in size and distance but in time, and that's a good thing—or it can be a bad thing, too. You can have a dictator that lasts for a thousand years.

GOPNIK: But some organizations don't scale. Even when they get bigger, they seem to have this very predictable life. That’s what people like Geoffrey West would say.

G. DYSON: Right. Geoffrey will say that. But a very important, possibly good, function of these systems is we’re going to get longer-term computing where you look at the very long-term time series. That evolution will be a good thing.

GALISON: Historically, we have places like AT&T, IBM, and Xerox that had world-class labs that deteriorated over time. AT&T's laboratories are nothing remotely like what they were in the 1950s and '60s. They eventually expelled a lot of research because it wasn't short-term enough for them; they figured they'd offload that to the universities and then take the fruits of it and do things that were more short term.

One possible outcome is that, even in the places where they're hiring people at a high level and giving a tranche of the research group relative freedom as a cover and an attractor, that freedom could expand, but it could also pull back, and you could end up wrecking parts of the university without having a lot of freedom in the corporation. I don't know. It seems to me an open question what's going to happen with this concentration of research wealth at a few companies.

BROOKS: The wealth is the important part. When AT&T's labs were riding high, AT&T was a monopoly, the phone company, with an incredible cash flow.

FRANK WILCZEK: They were required by law to spend money.

WOLFRAM: But the fact is, basic research happens when there’s a monopoly, because if you have a monopoly then it’s worth your while to do basic research because whatever is figured out will only benefit you. You see that even at the level of the U.S. government.

JONES: Did you hear Frank’s comment that AT&T was required by the government to do research?

WILCZEK: They were required by law to keep their profits at a certain level, so they spent a lot on research.

JONES: A monopoly will never regulate itself.

WOLFRAM: Even in our tiny corners of the technology world, it's worth our while to do research in areas where we are basically the only distribution channel. The same thing is happening with a bunch of AI work: it's being done in places where the only beneficiary is a company with a large distribution channel, so there's motivation to do basic research there. As soon as you remove that monopoly, the motivation to do basic research goes away, from a rational corporate point of view.

TOM GRIFFITHS: There are cases where you can tie this very directly to AI. The best example of this is the Facebook feed management algorithm. Nick Bostrom has this thought experiment where you make an AI whose goal is to manufacture paperclips, and then it consumes the entire earth manufacturing paperclips. Tristan Harris has pointed out that the Facebook feed management algorithm is essentially that machine, but for human attention. It consumes your attention. It makes money as a consequence of doing so that's fed back into the mechanism for consuming human attention. It gets better and better at consuming human attention until we’ve paper-clipped ourselves.
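A deliberately stripped-down sketch, not Facebook's actual system, of the single-objective ranker Griffiths is describing: when predicted engagement is the only term in the objective, whatever captures the most attention floats to the top purely as a side effect (all names and numbers below are invented).

```python
# Hypothetical feed ranker whose entire "goal" is captured attention.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float   # e.g. expected minutes of attention captured
    accuracy: float               # a value the objective simply never sees

def rank_feed(items):
    # The whole objective is this one line: maximize predicted engagement.
    return sorted(items, key=lambda it: it.predicted_engagement, reverse=True)

feed = rank_feed([
    Item("sober correction", predicted_engagement=0.4, accuracy=0.95),
    Item("outrage-bait conspiracy post", predicted_engagement=2.9, accuracy=0.05),
    Item("friend's vacation photos", predicted_engagement=1.1, accuracy=1.00),
])
for item in feed:
    print(item.title)   # the conspiracy post wins, as a side effect of the objective
```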

SETH LLOYD: That’s true for all of these companies. Anybody who has teenage children knows that there’s an attention problem.

GOPNIK: I would push back against that. That idea is highly exaggerated, and let me give you the reason why I think that.

Think about walking or driving down a street where there are billboards all around. If you were in a first-generation literate culture, what you would say is, "There's this terrible problem: As you go down the street, you're having your attention distracted by having to decode what this stuff is. There are all these symbols you have to decode. Meanwhile, you're not paying attention to anything that's going on in the street. Your attention is terribly divided." We know, even neurologically, that what actually happens when you are deeply immersed in a literate culture is that you end up with Stroop effects, where your decoding of print isn't attention-demanding in the same way. You're not doing it by serial attention anymore. In fact, you're doing it completely automatically and in parallel. It's something that we all worry about because we're in the position of the preliterate person. It's not at all obvious that this is somehow an intrinsic characteristic.

HILLIS: I'd like to bring this back to the AI part of the comment rather than the social part of the comment. If you look at where artificial intelligence is being deployed on a large scale, where people are spending a lot of money paying the power bills for the computation and things like that, it is mostly being done in the service of either corporations or nation states—mostly corporations, but nation states are rapidly catching up.

They are making those entities more powerful and more effective at pursuing their emergent goals, and that is how this relates. So, when we think of these runaway AIs, we should not think of them as things off by themselves. They're the artificial brains, the artificial nervous systems, of things that are already hybrids and already have emergent goals of their own.

LLOYD: This is where I disagree with you. Back in the 1960s, they would say, "Oh, kids these days, they're watching TV five hours a day. It's just horrible." Though I enjoy preparing for the grumpy-old-man stage of my life, and I like practicing that, I do think that if you look at what these AIs are being devoted to, the primary use of them is to get people's attention to web pages.

HILLIS: Whether it’s attention, or dollars, or votes, it almost doesn’t matter.

JONES: The designers will tell you that they’re using the lowest brainstem functions. That’s part of the problem. They’ll tell you they’re racing to the bottom of the evolutionary channel as quickly as they can.

HILLIS: If there's anything that is valuable to them, they will use this power to get it. There will be problems with that, and there will be limits on that and so on—you're pointing out some of the limits on getting attention—and there will be limits on their ability to get money, and their ability to get electric power and so on, but they will use all of these tools to get as much of it as they can.

GOPNIK: But again, Danny, my challenge would be, is that any different than it was for Josiah Wedgwood in 1780?

HILLIS: Yes. It’s a tip in power.

GOPNIK: It seems to me you could argue there was much more of a tip in power if you’re considering the difference between being around in 1730 and 1850.

HILLIS: For example, the East India Company couldn't establish a policy and monitor that everybody followed that policy. Google can. Google can do that.

GOPNIK: That's exactly what people at Wedgwood did. That was part of the whole point of inventing industry; inventing factories was exactly about doing that.

HILLIS: But in fact they couldn’t do it very effectively.

JONES: East India had to translate itself to a language with an army, which was the British Empire. So, there are meshes between corporations and governments that we have to worry about, like the one we have right now.

GOPNIK: No. I’m not saying that we don’t have to worry about that or there isn’t power. The question is why is it that you think that this is a tipping point? It looks like there’s this general phenomenon, which is that you develop these transindividual superintelligences, and they have certain kinds of properties, and they tend to have power and goals that are separate. All that’s true but we have a lot of historical evidence, and it might be that what’s happening is that there’s more of that than there was before. But why do you think that this is a point at which this is going to be different?

HILLIS: There could be a tipping point. I'm not sure it's exactly now. What I am saying is that there's an explosion of their intelligence. These explosive technologies, which are driven by Moore's law and things like that, are being used to their advantage. There are very few examples where they're being used to an individual's advantage. There are lots of examples where they're being used to the advantage of these hybrid emergent intelligences.

LLOYD: That’s a very good example, because between 1730 and 1850 the life expectancy and degree of nutrition and height of the average person in England declined because they were being taken out of the countryside and locked into factories for ninety hours a week.

GOPNIK: That's why thinking about these historical examples is helpful. Think about the difference in scale between the communication you could have before and after the telegraph, and before and after the train. For all of human history, the fastest communication you could have was the speed of a fast horse.

HILLIS: Yes. It made a big difference.

GOPNIK: Then suddenly you have communication at the speed of light. It seems to me there’s nothing that I can see in what is happening at the moment that’s different.

HILLIS: I realize what our difference is. I think of that as now. When I’m saying this is happening now, I’m including railroads and telegraph. This moment in history includes all of that, so that’s the thing that’s happening right now.

GOPNIK: That’s essentially industrialization.

HILLIS: I'm not categorizing it. Industrialization focuses on the wrong aspect. A lot of things happened at once, and you categorize them, but the particular thing that is interesting, which happened at the same time as industrialization, was the construction of an apparatus for communicating symbols and policies that was outside the capacity of a human mind to follow. That's the interesting thing. There are many other aspects of industrialization, but that's the thing that's happening now, and computers and AI are just that same thing going up an exponential curve.

GALISON: This moment of increased poverty and stagnating wages for a big sector of society, of an enormous increase of wealth within a concentrated group, and of the consolidation of industries around companies like Amazon does represent the sharp edge of that increase. It's not just a simple linear continuation of what went before.

In the post-World War II era, there was a sense that people in families were able to go to college for the first time, to get loans—at least if they were white—and that meant that you had a big class that had increased expectations and increased income. We're seeing the echoes of what happens when that stops, when you're basically not bringing new people into the college system, not giving them increased stakes in homes and real estate and things that increase in value. We're at a tough moment.

GRIFFITHS: There's an interesting argument about something that is different. One argument that's often made by the technology companies is, "We're not doing anything different. This is something that's been done in the past, and we're just doing it better." But there is a case you could make that doing it better is different. The objective function is the same, but you're doing a better job of optimizing it, and one consequence is that you get all of the unforeseen consequences of doing a good job of optimizing that objective, which may not have been clear when you were doing a bad job of optimizing that function.

In machine learning, we talk about regularization. Regularization is the set of forces that pull you back from overfitting to your objective. You can think of not being able to do a great job of optimizing as a form of regularization: it has been helping us avoid all of the negative consequences of really optimizing the objective functions that those companies have defined for themselves.
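A small numerical illustration of Griffiths's point (synthetic data, nothing from any real company): a measurable "proxy" objective is optimized over more and more candidates, and the harder the optimization, the larger the gap between how good the winner looks on the proxy and how good it really is on the value we cared about. Limiting optimization pressure acts like regularization.

```python
# Goodhart-style sketch: "true" is what we care about, "proxy" is what we can
# measure and optimize; picking the proxy-maximizing option overfits the proxy.
import numpy as np

rng = np.random.default_rng(0)

def selection_gap(num_candidates: int, trials: int = 5_000) -> float:
    """Average (proxy - true) value of the candidate that maximizes the proxy."""
    true = rng.normal(size=(trials, num_candidates))            # what we actually care about
    proxy = true + rng.normal(size=(trials, num_candidates))    # noisy measurable objective
    best = proxy.argmax(axis=1)                                 # optimize the proxy as hard as possible
    rows = np.arange(trials)
    return float((proxy[rows, best] - true[rows, best]).mean())

for n in (2, 10, 100, 1000):
    # More optimization pressure (a wider search) -> a bigger gap between the
    # objective being optimized and the outcome that was actually wanted.
    print(f"candidates={n:4d}  proxy-minus-true gap ~ {selection_gap(n):.2f}")
```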

GALISON: They say we’re doing the same thing, but they also say we like to break stuff, and breaking stuff often means breaking the income of working-class people.

GRIFFITHS: Yes, but it’s enough that doing the same thing better is the thing that then reveals why it’s bad to do that thing.

HILLIS: Go back to the other perspective and ask, "Is a single cell better off being part of a multi-cellular organism that it can't perceive, or an individual better off living in a society they can't perceive?" I would argue that it's a mixed bag, but generally they are.

GOPNIK: Right. That’s right.

HILLIS: So, I’m optimistic in that sense.

GOPNIK: If you think of the train and telegraph as the inflection point, individual achievement of goals didn't just get better but got exponentially better.

HILLIS: Again, I'm not seeing that as an inflection point. We're going through a transition; we're in the middle of going from one level of organization to another level of organization. In that process, for instance, individual cells had to give up the ability to reproduce. They had to delegate it.

WILCZEK: That’s a lot.

HILLIS: We will lose some things in that process. We'll gain some things in that process. But mostly, all I'm arguing for is that we're spending too much time worrying about the hypothetical; it'd be better to look at the actual.

FREEMAN DYSON: The most important thing that’s happening in this century is China getting rich. Everything else to me is secondary.

IAN MCEWAN: One aspect of humanizing them, let's call them robots, AI, whatever you like, would be to tax them as humans, especially when they replace workers in factories, or accountants, or white-collar jobs and all the pattern-recognition professions. Then we would all have a stake.

AXELROD: That's an example of where we may have passed the tipping point. The corporations are now politically powerful enough to keep their tax rates low, and not only that, but the billionaires are powerful enough to keep their tax rates low. Inheritance tax, for example.

MCEWAN: This is why we need to resist. At the point at which, perhaps in fifty years' time, vast sections of the population are only going to be working ten or fifteen hours a week, we might have to learn from aristocracies how to use leisure: how to hunt, how to fish, how to play the harpsichord. In other words, as anyone who speaks of retirement knows—and we were talking about this in a break—it's perfectly possible to be busy doing nothing. But somehow, we have to talk about distributing wealth and function here.

HILLIS: Bob's point is a case where the rubber meets the road: the window for taxing corporations has passed. We've lost that. They now have more power than individuals do in influencing the political system. So, there's an example of where the train has left the station. We're now in a post-individual human world, a world that is controlled by these emergent goals of the corporations. I don't think there's any turning back the clock on that. We are now in that world.