THE HUMAN STRATEGY
The big question that I'm asking myself these days is how can we make a human artificial intelligence? Something that is not a machine, but rather a cyber culture that we can all live in as humans, with a human feel to it. I don't want to think small—people talk about robots and stuff—I want this to be global. Think Skynet. But how would you make Skynet something that's really about the human fabric?
The first thing you have to ask is what's the magic of the current AI? Where is it wrong and where is it right?
The good magic is that it has something called the credit assignment function. What that lets you do is take stupid neurons, these little linear functions, and figure out, in a big network, which ones are doing the work and encourage them more. It's a way of taking a random bunch of things that are all hooked together in a network and making them smart by giving them feedback about what works and what doesn't. It sounds pretty simple, but it's got some complicated math around it. That's the magic that makes AI work.
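To make that concrete, here is a minimal sketch of a credit assignment function at work. It is not the backpropagation used in real deep networks—just the simplest version of the idea, a multiplicative-weights update over a pool of fixed random linear units, with every name and number invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fifty "stupid" units: fixed random linear rules on 5-dim inputs.
units = rng.normal(size=(50, 5))
credit = np.ones(50) / 50            # how much the network trusts each unit

true_w = rng.normal(size=5)          # the hidden rule generating labels
hits = []

for t in range(1000):
    x = rng.normal(size=5)
    label = np.sign(true_w @ x)
    votes = np.sign(units @ x)
    hits.append(np.sign(credit @ votes) == label)
    # The credit assignment step: multiplicatively reinforce the units
    # that voted with the truth, discourage the ones that didn't.
    credit *= np.where(votes == label, 1.05, 0.95)
    credit /= credit.sum()

print("accuracy, first 100 rounds:", np.mean(hits[:100]))
print("accuracy, last 100 rounds: ", np.mean(hits[-100:]))
```

The individual units never change; all the learning lives in the credit assigned to them, and on most runs the weighted vote still gets noticeably smarter.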
The bad part of that is, because those little neurons are stupid, the things that they learn don't generalize very well. If it sees something that it hasn't seen before, or if the world changes a little bit, it's likely to make a horrible mistake. It has absolutely no sense of context. In some ways, it's as far from Wiener's original notion of cybernetics as you can get because it's not contextualized: it's this little idiot savant.
But imagine that you took away these limitations of current AI. Instead of using dumb neurons, you used things that embedded some knowledge. Maybe instead of linear neurons, you used neurons that were functions in physics, and you tried to fit physics data. Or maybe you put in a lot of stuff about humans and how they interact with each other, the statistics and characteristics of that. When you do that—take the set of things you know about, either physics or humans, plus a bunch of data, and add this credit assignment function to reinforce the functions that are working—you get an AI that works extremely well and can generalize.
In physics, you can take a couple of noisy data points and get something that's a beautiful description of a phenomenon because you're putting in knowledge about how physics works. That's in huge contrast to normal AI, which takes millions of training examples and is very sensitive to noise. Or the things that we've done with humans, where you can put in things about how people come together and how fads happen. Suddenly, you find you can detect fads and predict trends in spectacularly accurate and efficient ways.
Human behavior is determined as much by the patterns of our culture as by rational, individual thinking. These patterns can be described mathematically, and used to make accurate predictions. We’ve taken this new science of “social physics” and expanded upon it, making it accessible and actionable by developing a platform that uses big data to build a predictive, computational theory of human behavior.
The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren't?
That begins to sound like a society or a company. We all live in a human social network. We're reinforced for things that seem to help everybody and discouraged from things that are not appreciated. Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what's the right way to do that? Is it a safe idea? Is it completely crazy?
What we've done with my students, particularly Peter Krafft, and with Josh Tenenbaum, another faculty member, is look at how people make decisions, using huge databases of financial decisions and other sorts of decisions. What we find is that there's an interesting way in which humans make decisions that solves this credit assignment problem and makes the community smarter. The part that's most interesting is that it addresses a classic problem in evolution.
Where does culture come from? How can we select for culture in evolution when it's the individuals that reproduce? What you need is something that selects for the best cultures and the best groups, but also selects for the best individuals because they're the things that transmit the genes.
When you put it this way and you go through the mathematical literature, you discover that there's one best way to do this. That way is something you probably haven't heard of. It's called “distributed Thompson sampling,” a mathematical algorithm for choosing, from a set of possible actions, the one that maximizes expected reward while you're still learning what the rewards are.
It's a way of combining evidence, of exploring and exploiting at the same time. It has a unique property in that it's the best strategy both for the individual and for the group. If you select on the basis of the group, and then the group gets wiped out or reinforced, you're also selecting for the individual. If you select for the individual, and the individual does what's good for them, then it's automatically the best thing for the group. That's an amazing alignment of interests and utilities. It addresses this huge question in evolution: Where does culture fit into natural selection?
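The single-agent core of the algorithm is easy to write down. Here is a minimal sketch of ordinary, non-distributed Thompson sampling over three actions with unknown payoffs (the payoff numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

true_rates = np.array([0.3, 0.5, 0.7])   # hidden payoff of three actions
wins = np.ones(3)                         # Beta(1, 1) prior per action
losses = np.ones(3)

choices = []
for t in range(2000):
    # Thompson sampling: draw one plausible payoff per action from its
    # posterior, then take whichever action's draw looks best.
    samples = rng.beta(wins, losses)
    a = int(np.argmax(samples))
    reward = rng.random() < true_rates[a]
    wins[a] += reward
    losses[a] += 1 - reward
    choices.append(a)

# Late in the run, almost every choice is the truly best action (index 2).
print(np.bincount(choices[-500:], minlength=3))
```

Because an uncertain action occasionally wins the posterior draw, the algorithm keeps exploring exactly as long as the evidence is ambiguous, and no longer.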
The key to this distributed Thompson sampling way of solving the credit assignment problem is something we call social sampling. It's very simple: look around you at what other people do, find the things that are popular, and copy them if they seem like a good idea to you. It sounds trivial, but if you look at what people actually do, and at how good it is mathematically, you see that by finding out what's popular they are searching for the best ideas out there. Idea propagation has this popularity function driving it, but individual adoption also has to do with figuring out how it works for the individual—a reflective attitude.
When you put the two of them together, you get decision making that is pretty much better than anything else you can do. It's a Bayesian optimal portfolio method. That's pretty amazing, because now we have a mathematical recipe for doing with humans what all these AI techniques are doing with dumb computer neurons. We have a way of putting people together to make better decisions, given more and more experience.
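Here is one toy rendering of that combination—my illustrative sketch, not the published model: each agent flips between copying whatever is popular and drawing from its own posterior, and everyone learns only from their own outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 30, 3                              # 30 agents, 3 possible actions
true_rates = np.array([0.3, 0.5, 0.7])    # hidden payoffs, as before
wins = np.ones((N, K))                    # each agent's own Beta posterior
losses = np.ones((N, K))
actions = rng.integers(K, size=N)

for t in range(300):
    # What everyone just did, visible to all: the popularity signal.
    popularity = np.bincount(actions, minlength=K) / N
    new_actions = np.empty(N, dtype=int)
    for i in range(N):
        if rng.random() < 0.5:
            # Social sampling: copy an action in proportion to popularity.
            new_actions[i] = rng.choice(K, p=popularity)
        else:
            # Private Thompson sample from the agent's own experience.
            new_actions[i] = int(np.argmax(rng.beta(wins[i], losses[i])))
        reward = rng.random() < true_rates[new_actions[i]]
        wins[i, new_actions[i]] += reward
        losses[i, new_actions[i]] += 1 - reward
    actions = new_actions

# On typical runs, most of the community settles on the best action.
print(np.bincount(actions, minlength=K))
```

The 50/50 mixing rate here is an arbitrary choice; the interesting property is that the social copying and the individual learning pull in the same direction rather than fighting each other.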
So, what happens in the real world? Why don't we do this all the time? Well, people are pretty good at it, but there are ways that this can run amok. One of the ways is through advertising, or fake news. There are many ways to get people to think that something is popular when it's not, and that screws the whole thing up. The way in which you can make groups of people smarter, the way you can make human AI, will work only if you can get feedback to them that's truthful. It has to be grounded on whether each particular action worked or not.
That's the key to the AI mechanisms, too. What they do is analyze whether they recognized the image correctly. If so, plus one; if not, minus one. We need that truthful feedback to make this human mechanism work well, and we need good ways of knowing what other people are doing, so that we can correctly assess popularity and the likelihood that a given choice is a good one.
What we're doing now is trying to build this credit assignment function, this feedback function, for people so that we can make a human artificial intelligence—a smart organization and a smart culture. What we've done in many ways is duplicate some of the early insights that resulted in, for instance, the U.S. census—trying to find basic facts that everybody can agree on and understand so that the transmission of knowledge and culture can happen in a way that's truthful.
We've addressed this credit assignment problem in lots of ways. In companies, we've done it with badges that pay attention to who's connected to whom, and connect that to how good the results were on a weekly basis. Did they solve more problems? Did they invent more? Things like that. When you can get that feedback quantitatively, which is difficult because most things aren't measured quantitatively, we've found that we've been able to improve the productivity and the innovation rate within organizations by between five and ten percent. That may not sound great, but it's huge.
We're now trying to do the same thing, but at scale. I refer to this as an operating system for humanity. It's like the Internet, but with the ability to perceive what's really going on, in the same way that we trust the U.S. census to do a pretty good job of telling us about population, and GDP, and people moving around.
The approach we're using is something that's called “open algorithms.” It's supported, interestingly, by the European Union as a way to deal with privacy, security, and competitiveness. China has just approached us about using this as a way of dealing with some of their internal conflicts. Countries in Latin America and in Africa are also beginning to experiment with this. It's a way of taking data from many sources—from companies and governments—and subjecting it to the scrutiny of all the stakeholders in order to make sure that the provenance of the data and the questions asked of the data are understandable and fair. Then, it gets published openly, just as the census is published.
We've done a series of experiments to develop these operating systems for humanity under the label Data for Development. We've done it in places like London, where we were able to detect communities under stress with high accuracy; in Italy, where we were able to deal with privacy concerns while sharing medical data, particularly among young families, in order to have better health; in Africa, to map poverty, and look at places where ethnic violence was going to happen, and better predict the propagation of infectious disease.
What we're seeing is this big picture of how we can make humanity more intelligent—a human AI. It's built on several threads. One is data that we can all trust, data that has been vetted by a broad community, data where the algorithms are known and monitored, much like the census data that we all automatically rely on as being at least approximately correct. The other is a fair assessment of what people are doing and not doing. That part doesn't exist yet. That's part of this credit assignment problem. That's the part where fake news, propaganda, and advertising all get in the way.
We now have the science that tells us how to go about building something that doesn't have these echo chamber problems, these fads and madnesses. We're beginning to experiment with that as a way of curing some of the ills that we see in society today: open data from all sources, combined with a fair representation of the things that people are actually choosing, inside a curated mathematical framework that we know stamps out echoes and fake news.
My Trajectory
I grew up in a very blue-collar way, very unusual in the Edge community. People in my junior high school carried guns and sold drugs. It was a place that almost none of the Edge readers come from. Today, I'm a faculty member at MIT. What I've discovered is that traditional academia is too constraining to be able to ask the most important questions. So I have developed a whole network of co-conspirators—faculty and entrepreneurs around the world—who help me do things like experiment with how we can make society and organizations smarter.
In the last decade, for instance, I've enlisted experimental partners in six different places. I have appointments in Beijing, and Oxford, and Istanbul as a way of recruiting many different perspectives and talents to bear on this problem of creating a human AI. When I say "we," it's this network of people scattered all around the world who are collaborating to create a world that's based on truth and where we can make good decisions.
In fact, the reason I'm here in New York today is for this UN Foundation-sponsored effort, which is called the Global Partnership for Sustainable Development Data. It's a group of hundreds of countries and organizations that are committed to this goal of producing honest data that can be used for good decision making. That's a pretty amazing thing.
Can you even imagine real transparency in government? Being able to ask questions, like how well are people doing? How is poverty changing? Is there forced migration? Imagine that information like this was truthfully available everywhere in the world. It would be completely transformative of government.
At MIT my work is sponsored by a variety of foundations and over seventy corporations. I do have sponsorship from corporations that are particularly interested in security, cybersecurity, and privacy. I'm also sponsored by the European Union Commission, the Chinese government, the U.S. government, and various entities who are all concerned with why we don't know what's going on in the world, and wondering what's stopping us from making good decisions. I don't know that they buy into my program 100 percent, but they see it as part of the solution to the societal problems that we're facing, like global warming and inequality.
On Polarization and Inequality
Today, we have incredible polarization and segregation by income almost everywhere in the world, and that threatens to tear governments and civil society apart. We have increasing population, which is part of the root of all those things. Increasingly, the media are failing us, and the downfall of media is causing people to lose their bearings. They don't know what to believe. It makes it easy for people to be manipulated. There is a real need to put a grounding under all of our cultures of things that we all agree on, and to be able to know which things are working and which things aren't.
We've now converted to a digital society, and have lost touch with the notions of truth and justice. Justice used to be mostly informal and normative. We've now made it very formal. At the same time, we've put it out of the reach of most people. Our legal systems are failing us in a way that they didn't before precisely because they're now more formal, more digital, less embedded in society.
Ideas about justice are very different around the world. People have very different values. One of the core differentiators is, do you remember when the bad guys came with guns and killed everybody? If you do, your attitude about justice is different from the average Edge reader's. Were you born into the upper classes? Or were you somebody who saw the sewers from the inside?
A common test I have for people that I run into is this: Do you know anybody who owns a pickup truck? It's the number-one selling vehicle in America, and if you don't know people like that, that tells me you are out of touch with more than fifty percent of America. Segregation is what we're talking about here, physical segregation that drives conceptual segregation. Most of America thinks of justice, and access, and fairness as being very different than the typical, say, Manhattanite.
If you look at patterns of mobility—where people go—in a typical city, you find that the people in the top quintile—white-collar working families—and the bottom quintile—people who are sometimes on unemployment or welfare—never see each other. They don't go to the same places; they don't talk about the same things; they see the world very differently. It's amazing. They all live in the same city, nominally, but it's as if it were two completely different worlds. That really bothers me.
On Extreme Wealth
Fifty percent of today's ultra-wealthy have promised to give away more than fifty percent of their wealth, creating a plurality of different voices in the foundation space. Gates is probably the most familiar example. He's decided that if the government won't do it, he'll do it. You want mosquito nets? He'll do it. You want antivirals? He'll do it. We're getting different stakeholders taking action, in the form of foundations that are dedicated to public good. But they have different versions of public good, which is good. A lot of the things that are wonderful about the world today come from actors outside government, like the Ford Foundation or the Sloan Foundation, where the things they bet on are things that nobody else would bet on, and they happened to pan out.
Sure, these billionaires are human and they have human foibles. And yes, it's not necessarily the way it should be. On the other hand, the same thing happened when we had railways. People made incredible fortunes. A lot of people went bust. We, the average people, got railways out of it. Pretty good. Same thing with electric power. Same thing with many of these things. There's a churning process that throws somebody up and later casts them or their heirs down.
Bubbles of extreme wealth happened in the 1890s, too, when people invented steam, and railways, and electricity. These new industries created incredible fortunes, which were all gone within two or three generations.
If we were like Europe, I would worry. What you find in Europe is that the same family has wealth for hundreds of years, so they're entrenched not just in terms of wealth, but in terms of the political system and other ways. But so far, the U.S. has avoided this: extreme wealth hasn't stuck, which is good. It shouldn't stick. If you win the lottery, you make your billion dollars, but your grandkids have to work for a living.
On AI and Society
People are scared about AI. Perhaps they should be. But you need to realize that AI feeds on data. Without data, AI is nothing. You don't actually have to watch the AI; you have to watch what it eats and what it does. The framework that we've set up, with the help of the EU and other people, is one where you can have your algorithms, you can have your AI, but I get to see what went in and what went out so that I can ask, is this a discriminatory decision? Is this the sort of thing that we want as humans? Or is this something that's a little weird?
The most revealing analogy is that regulators, bureaucracies, parts of the government, are very much like AIs: They take in these rules that we call law, and they elaborate them, and they make decisions that affect our lives. The part that's really bad about the current system is that we have very little oversight of these departments, regulators, and bureaucracies. The only control we have is the ability to elect somebody different. Let's make that control over bureaucracies a lot more fine-grained. Let's be able to look at every single decision, analyze them, and have all the different stakeholders come together, not just the big guys. Rather like legislatures were supposed to be at the beginning of the U.S.
In that case, we can ask fairly easily, is this a fair algorithm? Is this AI doing things that we as humans believe are ethical? It's called human in the loop. This “open algorithms” approach lets us take the AIs and put them in a sandbox where you get to see what they eat and what they poop. If you see those two things, you can know if they're doing the right thing or the wrong thing. It turns out that's not too hard to do. It could be an infinitely complex AI; it doesn't really matter as long as you can see what it does and decide if you like it.
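As a picture of what such a sandbox could look like, here is a hypothetical wrapper—every name and threshold below is invented for illustration—that logs what a black-box model eats and what it produces, so stakeholders can audit outcomes by group without ever opening the box:

```python
from collections import defaultdict

def audited(model, log):
    """Wrap a black-box decision model so every input and output is
    recorded: we watch what it eats and what it produces."""
    def wrapper(case):
        decision = model(case)
        log[case["group"]].append(decision)
        return decision
    return wrapper

# A hypothetical black-box scorer, standing in for any complex AI.
def loan_model(case):
    return case["income"] > 40_000

log = defaultdict(list)
model = audited(loan_model, log)

for case in [{"group": "A", "income": 60_000}, {"group": "A", "income": 45_000},
             {"group": "B", "income": 30_000}, {"group": "B", "income": 50_000}]:
    model(case)

# The audit: approval rates per group, computed from the logged inputs
# and outputs alone, with no access to the model's internals.
for group, decisions in sorted(log.items()):
    print(group, sum(decisions) / len(decisions))
```

The point is that the audit needs only the input/output log; the model behind the wrapper can be arbitrarily complex.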
A key point with AI is that if you control the data, then you control the AI. If we do this—and my group is actually setting such systems up on nationwide scales—then I don't need to know in detail how the decisions are made. But I do need to know what you decide, and on what evidence. As long as I can know what the AI is doing, I can ask if I like it or I don't like it. The problem that we have in so many parts of government now, the justice system, et cetera, is that there's no reliable data about what they're doing and in what situation. How can you know whether the courts are fair or not if you don't know the inputs and the outputs? You have to know that. The same is true of any AI system.
I remember being at Oxford and discussing this topic, and one person got up and talked about all the horrible things AI could possibly do. Then, a Justice minister from an East African nation got up and said, "What you say is true, but have you seen the current system? Almost anything would be an improvement." We need to hold current government to account in terms of what they take in and what they put out, and AI should be no different. In that way, we've got them where they live, which is that they eat data, and without data they can't do anything.
Next-Generation AI
The current AI machine-learning things are just dead simple stupid. They work, but they work by brute force, and so they need hundreds of millions of samples. They work because you can approximate anything with lots of little linear pieces. That's a key insight of current AI, that if you use reinforcement learning for credit assignment feedback, you can get those little pieces to approximate whatever arbitrary function you want.
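You can see the brute-force flavor in a few lines. In this sketch the linear pieces are hand-placed rather than learned, but the moral is the same: with enough little pieces you can approximate anything.

```python
import numpy as np

# Approximating a smooth curve with lots of little linear pieces -- the
# same trick a trained network of simple units ends up performing.
x = np.linspace(0, 2 * np.pi, 1000)
target = np.sin(x)

for n_pieces in (4, 16, 64):
    knots = np.linspace(0, 2 * np.pi, n_pieces + 1)
    approx = np.interp(x, knots, np.sin(knots))   # piecewise-linear fit
    print(n_pieces, "pieces -> max error:",
          round(float(np.max(np.abs(approx - target))), 4))
```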
But having the wrong functions means it won't generalize. Consequently, if you give the AI new, different inputs, it may do completely bonkers stuff. Or if the situation changes, you need to retrain it. There are amusing techniques for finding the null space of these AI systems—inputs that the AI thinks are valid examples but that look crazy to a human.
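Those techniques are easiest to see on the simplest possible model. Here is a fast-gradient-style nudge on a toy linear classifier—a stand-in for a trained network, with random weights for illustration—where no single input value changes much, yet the answer flips:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear "image" classifier: score > 0 means class A.
w = rng.normal(size=100)     # stand-in for learned weights
x = rng.normal(size=100)     # an input the model classifies confidently
score = w @ x

# Nudge every pixel by the same tiny amount against the weights -- just
# enough to cross the decision boundary.  No single pixel changes much,
# but the classifier's answer flips.
eps = 1.1 * abs(score) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w) * np.sign(score)

print("original:", np.sign(score), " adversarial:", np.sign(w @ x_adv))
print("largest per-pixel change:", eps)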
What I view as next-generation AI is this: if you're going to deal with physical things, build the laws of physics into the system as your basis functions, rather than these little stupid neurons. For instance, we know that physics uses functions like polynomials, sine waves, and exponentials. Those should be your basis functions. If you look at all of physics, almost all of it is combinations of those things. What they're not is little linear pieces. By using those more appropriate basis functions, you need a lot less data, can deal with a lot more noise, and get a lot better results.
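A minimal sketch of that idea, assuming a made-up damped-oscillation law and a decay rate known from theory: ordinary least squares over physically motivated basis functions typically recovers the curve from a dozen noisy points.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "physics-like" law -- a damped oscillation -- observed at only a
# dozen noisy points.  (The law and noise level are invented here.)
t = np.sort(rng.uniform(0, 10, 12))
y = np.exp(-0.3 * t) * np.sin(2 * t) + rng.normal(0, 0.05, t.size)

# Basis functions drawn from physics: damped sines and cosines at a few
# candidate frequencies, with the decay rate assumed known from theory.
def design(t):
    return np.column_stack([f(w * t) * np.exp(-0.3 * t)
                            for w in (1.0, 2.0, 3.0)
                            for f in (np.sin, np.cos)])

coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)

t_test = np.linspace(0, 10, 200)
truth = np.exp(-0.3 * t_test) * np.sin(2 * t_test)
print("max error on unseen points:",
      round(float(np.max(np.abs(design(t_test) @ coef - truth))), 4))
```

A pile of little linear pieces would need far more data to get this close, and would say nothing about why the curve has the shape it does.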
Similar to the physics example, in my research group we've taken statistical properties of human networks and built that into machine-learning algorithms. As a consequence, it's just incredible what you can do. You can identify trends with very little data, and you can deal with huge levels of noise.
This is the difference between finding a solution and doing the science. What typical deep learning researchers do is take dumb models and then use an incredibly big hammer to hit them. The results don't have any explanatory power, which is why they don't generalize to new examples. If you knew some of the causal structure of the domain, if you had done that science, then using background knowledge to fit the data provides a scientific explanation of the data, and it does generalize.
The interesting insight to me is that human society is a network just like the neural nets that they train for deep learning, but the neurons are a lot smarter. You and I have general functions that we can apply, and we recognize which connections should be reinforced. We're now using the social sampling theory we got from humans to beat all the deep learning stuff at its own game. We're beating the benchmarks, using the human strategy, which has to do with distributed social computation. It's remarkable that you can take something from the way that human communities come up with good strategies, take it over to DeepMind territory, and beat them.
What I want to know is the truth behind it, the science behind it. Current AI is doing descriptive statistics in a way that's not science, and it would be almost impossible to make it science. If I give you a million little pixel values in a big list, you can't tell that it's a picture of a human face. Even when you put them all together, the human still has to perceive that as a face. Current AI is making all these little approximations, and together they may add up to something useful, but the perception is not in the model. The science would only be in our perception of the model.
I'm interested in how you get insights into the whole system so that you can engineer the whole system, so that you can get your hands on reality and build something.