HeadCon '13: WHAT'S NEW IN SOCIAL SCIENCE? (Part IX)
David Pizarro: The Failure of Social and Moral Intuitions
David Pizarro is Associate Professor of Psychology, Cornell University, specializing in moral judgment.
Today I want to talk a little about our social and moral intuitions, and I want to make the case that they're rapidly failing, more so than ever. Let me start with an example. Recently, I collaborated with the economist Rob Frank, the roboticist Cynthia Breazeal, and the social psychologist David DeSteno. The experiment we did looked at how we detect trustworthiness in others.
We had people interact—strangers interact in the lab—and we filmed them, and we got the cues that seemed to indicate that somebody's going to be either more cooperative or less cooperative. But the fun part of this study was that for the second part we got those cues and we programmed a robot—Nexi the robot, from the lab of Cynthia Breazeal at MIT—to emulate, in one condition, those non-verbal gestures. So what I'm talking about today is not about the results of that study, but rather what was interesting about looking at people interacting with the robot.
Nexi is a cute robot that has a very, very limited set of facial features and range of motion; in fact, she has wires coming out of the bottom and moves in a little bit of a weird way. But within seconds participants were interacting with her as if she were a person. (I say "she" because of the voice that we used. Nexi, as you might imagine, doesn't actually have a gender that I know of. We didn't get that far.) This is not a novel finding. It's not surprising that people adapted so quickly. Within 30 seconds, people were actually talking to Nexi as if she were a human being; in fact, they were saying things that were quite private. At the end of the study some people were convinced that technology had advanced so much that Nexi really was a robot that was talking on her own, when in reality there was a graduate student behind the curtain, so to speak.
Let me bring up perhaps the oldest experiment that gets talked about here. In 1944, the psychologists Heider and Simmel used animated figures to display very minimal actions on a screen, and what they found was that people spontaneously described these actions as intentional and animated. They ascribed motives and goals to these small animated figures. One of the great discoveries of psychology in the past 50, 60, 70 years has been that we have a very basic set of social intuitions that we use to interpret the actions of other human beings, and that these are good for us. This is how I can even be here talking to you guys, looking into your eyes and thinking that you might understand what I'm saying. But the cues it takes to trigger these intuitions can be very minimal, right? All you need is a couple of eyes and a face; you put that in an email, and something social all of a sudden is cued in my mind, and I apply my social intuitions.
We have very basic intuitions about the physical world, such as about causality, but we also have social intuitions about intentionality and agency. These build into our moral intuitions, such as who deserves blame, and some of the work that's built this whole view of how we make these moral judgments comes from people in this room, like Joshua Greene, Josh Knobe, and others. So we know that one of the ways in which we use these social intuitions is to generate moral judgments about who did good things, who did bad things, and who deserves blame.
But what's interesting about these intuitions is that they can easily misfire. In fact, Daniel Dennett nicely called this view of things the "intentional stance," and it turns out the intentional stance seems to be our default view. What that means is that we see intentionality and agency where there is none at all. So we're quick to think that even a machine, a vending machine that doesn't dispense what I ordered, is angering me, and in some way I am making a judgment of moral blame, when, in fact, there is absolutely no intentionality there. So we're promiscuous with these judgments of intentionality.
We can even use our promiscuity with these judgments in clever ways. The psychologist Kip Williams and his colleagues, in their experiments on social exclusion, wanted to develop a paradigm for making people feel socially excluded. So what they did was say, "Well, let's just do a simple game. We'll have three people in the room, and they'll toss a ball back and forth, and two of them will stop tossing the ball to the third person." Now, the two people who keep tossing the ball to each other are confederates; the third person is the subject. People feel really, really bad when that happens, and it's a very simple game. So they said, "Well, maybe we don't need a physical ball game. Maybe we can just have three people playing a videogame; we can do it on the computer. It will be easier." Sure enough, that works. People feel really bad if they stop getting the ball. "Well," they said, "maybe we don't need to actually have the other two people in the room. We could just tell the subject that there are two people in the other room." People still feel bad. It turns out you can even tell them that it's just the computer that stops tossing the ball, and they seem to feel just as bad. Now, these people aren't stupid, presumably. They're college students, after all. But it's so easy to find intentionality and agency where there is none, and so hard to squash it, that we generate these sorts of weird errors.
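To make the computer version of that paradigm concrete, here is a minimal sketch in Python of what an exclusion condition amounts to; the player names, the number of warm-up throws, and every other detail are invented for illustration, not taken from Williams's actual Cyberball software.

```python
import random

# Hypothetical sketch of a Cyberball-style exclusion condition: three "players"
# toss a ball; after a few warm-up throws, the two computer players stop
# including the participant. All names and numbers are invented.

PLAYERS = ["participant", "computer_1", "computer_2"]
WARMUP_THROWS = 5    # throws during which the participant is still included
TOTAL_THROWS = 30

def next_receiver(current_holder, throw_count):
    """Pick who receives the ball next."""
    candidates = [p for p in PLAYERS if p != current_holder]
    if throw_count >= WARMUP_THROWS:
        # Exclusion phase: never throw to the participant again.
        candidates = [p for p in candidates if p != "participant"]
    return random.choice(candidates)

holder = "computer_1"
for throw in range(TOTAL_THROWS):
    receiver = next_receiver(holder, throw)
    print(f"throw {throw + 1:2d}: {holder} -> {receiver}")
    holder = receiver
```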
Now, these are cute errors, and we can use them to do psychology studies on social exclusion and learn quite a bit. In fact, it's kind of funny that you would kick a vending machine or yell at your Windows machine when it gives you the blue screen of death. But these errors are increasingly failing to be that cute, because as society gets more complex, these intuitions are some of the only intuitions we have to make sense of a social world that's quite different from the world in which we evolved. We've known this for quite some time.
Take, for instance, collective action. When we talk about the action of a company, we say, "Oh, Microsoft did this." At some level we know that this is the action of a whole bunch of people, maybe even stockholders or the board voting; it's not one person. But we seem to think of it and track it as an individual entity over time, so we can generate a moral judgment and say that Microsoft is evil. We do this with sports teams all the time. We say, "You know, I hate the Knicks because in 1983 this happened." But the 1983 Knicks may have absolutely nothing to do with the current Knicks; they're the same only in name. Yet we track them, as Josh was talking about earlier, as if there is this essence that is continuous: they're agents, and they do things, and they make us mad, and they shouldn't have done that. So those are instances in which it's becoming increasingly clear that our social intuitions may not be a good match for the actual social world in which we live.
Social media is another good example. I know what it's like to communicate with a group of people I can see. You're giving me some feedback; I know the appropriate things to say and the things I ought not to say, I think. But now I have 600 Facebook friends, I have none of the cues I would usually get from people in a crowd, and maybe I'm just thinking of it as talking to one person. So I say something, and all of a sudden I forget that I'm also friends with my grandma and with my former advisor, and they all see it. So our social intuitions don't work here; they're the wrong kind of intuition for telling us what we ought to do in much of today's social world.
Part of what I want to argue is that this is increasingly problematic. It's not just that our social intuitions are going to fail and embarrass us by what we say; they might, in fact, stifle real progress, especially technological progress and innovation, because they're the only lens we have through which to interpret our social world, and they don't fit any more.
Let me give you an example. Algorithms that look at my email generate personalized ads. Now, one of the first reactions people have when they see an ad that has been personalized for them is: What the hell? Who's reading my email? That's creepy. So "creepy" is a word that comes up quite a bit. The truth is no one's reading your email. It's an algorithm, right? Somehow we feel like our privacy has been violated, even if we are assured that nobody, in reality, cares about our email. But the cue that we're getting is what would generally be a social cue; that is, somebody has generated a suggestion for us that normally would come from a friend.
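The point that "it's an algorithm" can be made concrete with a toy example. The sketch below is purely hypothetical keyword matching, with invented keywords and ad categories; real ad systems are vastly more sophisticated, but the principle that no person reads the message is the same.

```python
# Toy illustration of algorithmic ad personalization: pick an ad category from
# keywords found in a message. Keywords and categories are invented.

AD_KEYWORDS = {
    "dentist": "dental insurance",
    "flight": "travel deals",
    "mortgage": "home refinancing",
    "guitar": "music lessons",
}

def pick_ad(message):
    """Return the ad category for the first keyword found, or a generic ad."""
    text = message.lower()
    for keyword, ad in AD_KEYWORDS.items():
        if keyword in text:
            return ad
    return "generic ad"

print(pick_ad("Reminder: your dentist appointment is on Tuesday at 2pm."))
# -> dental insurance
```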
So I have a new Google device, and Google Now can do this very, very well. This is a service that goes through all of the information that I've given to Google, either explicitly or implicitly, and it generates these little cards that tell me, "Oh, by the way, David, you have a dentist appointment, and you'd better leave now, because from where you are it will take you this long to get there, given the traffic." Now, imagine that somebody came up to you and said, "Hey, Josh, you've never seen me, but I think your wife is worried, because I was just over there at the house, and maybe you should call her, because you usually call her at this time, don't you?" That would be creepy. That would be extraordinarily creepy. But I love this service, because it gives me such useful information. In fact, I think I get a great deal of benefit from it.
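As a rough illustration of what such a card computes, here is a hypothetical sketch: combine an appointment time with a travel estimate (which a real service would pull from a traffic source) and decide whether to say "leave now." The function name, the buffer, and the numbers are all made up for the example.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical "leave now" logic: given an appointment and a current travel
# estimate, decide whether departing any later would risk being late.

def should_leave_now(appointment: datetime,
                     travel_estimate: timedelta,
                     buffer: timedelta = timedelta(minutes=5),
                     now: Optional[datetime] = None) -> bool:
    """Return True once the latest safe departure time has arrived."""
    now = now or datetime.now()
    latest_departure = appointment - travel_estimate - buffer
    return now >= latest_departure

appointment = datetime(2013, 9, 20, 14, 0)   # 2:00 pm dentist appointment
drive = timedelta(minutes=35)                # current traffic estimate
print(should_leave_now(appointment, drive, now=datetime(2013, 9, 20, 13, 25)))
# -> True (13:25 is past the 13:20 latest departure)
```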
My social intuitions are firing that there is a creepy person reading all of my emails and looking at all of my appointments, but they're wrong; nobody is, it's an algorithm. We don't have intuitions about algorithms, and I don't think we're getting any anytime soon. The image I sometimes have is of a middle-aged man, a few pounds heavier than he used to be, trying to squeeze into the jeans he wore in high school. He squeezes and squeezes, and they just don't fit any more. He can go to the store and get a new pair of jeans, but there's no intuition store for us, right? As technology advances, there is no way for us to rapidly generate new intuitions. So what this means is that when we hear about self-driving cars, all of a sudden we get really nervous. Even though we're certain that, percentage-wise, this would reduce the number of traffic accidents, it just doesn't feel right; I'm not in control; I don't like it.
So what happens? Technologies get stifled a bit, because they have to match our intuitions. One of my favorite examples of this is BMW. BMW got so good at silencing the cockpit, by developing new technologies to block all external noise, that people started complaining that they couldn't hear the engine. The engine actually provides really good feedback for many drivers, and they enjoy it; what used to be a side effect is something people now want. So BMW's engineers spent hundreds of thousands of dollars, if not millions, to develop an audio algorithm that generates engine noise to be pumped through the stereo, contingent on the conditions in which the person is driving: what gear they're in and how fast they're going. That is now in BMWs, and there is no option to turn it off. So here's a case in which a company had to bend over backwards to accommodate an intuition people had.
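To give a flavor of what "contingent on gear and speed" could mean, here is a deliberately crude, hypothetical sketch; the gear ratios, constants, and output format are invented, and real active-sound systems are far more elaborate.

```python
# Hypothetical mapping from driving state to playback parameters for a
# synthetic engine sound. All ratios and constants are invented.

GEAR_RATIOS = {1: 4.1, 2: 2.6, 3: 1.8, 4: 1.3, 5: 1.0, 6: 0.8}

def engine_sound_params(gear, speed_kmh):
    """Map gear and speed to a pitch (Hz) and a volume (0-1) for the cabin audio."""
    rpm = speed_kmh * GEAR_RATIOS[gear] * 25      # crude engine-speed estimate
    pitch_hz = 30 + rpm / 60                      # higher revs, higher pitch
    volume = min(1.0, 0.2 + speed_kmh / 250)      # louder at higher speed
    return {"pitch_hz": round(pitch_hz, 1), "volume": round(volume, 2)}

print(engine_sound_params(gear=3, speed_kmh=120))
# -> {'pitch_hz': 120.0, 'volume': 0.68}
```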
What I fear is that, as technology progresses and more and more good things become possible—technologies might give us anything from curing disease to preventing disease through genetic means—we're applying intuitions that are old, and we're making moral judgments that these things are wrong and inappropriate, when in fact we can still make those judgments, but perhaps we should be using something else. What that something else is, is maybe a question best left to philosophers and ethicists, but it's something we have to consider.
Those are the implications of modern society and our old intuitions. The implications seem to be for technology and for society, but is there anything we should now conclude about the way we study intuitions? Does this matter at all for the science of psychology? I think one way it does matter concerns the normative theories we use, that is, the standards by which we decide whether a decision is good or bad. This has been a very, very fruitful way of understanding the human mind, pioneered by Danny Kahneman and others: one of the ways we study human intuition is to say, "Well, let's see where people make errors." So we poke and prod people, much as we use visual illusions. We look at when mistakes are made, and then we see the structure of the intuition. This is very useful and very beneficial: we can say, "Under these conditions these intuitions misfire," and we can then implement policy that helps people make the right decision. But all of this requires a proper understanding of what the right decision is.
In judgments under uncertainty, when we're making probabilistic judgments, there are well-developed theories about what probability judgments you ought to make in a given situation. Should you be Bayesian? What information should you bring to bear on this decision? There's some controversy, but by and large people know when you're making an error.
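As a reminder of what such a normative standard looks like in practice, here is the standard textbook base-rate example worked through Bayes' rule; the numbers are the usual illustrative ones, not drawn from any study mentioned here.

```python
from fractions import Fraction

# Classic base-rate problem: a rare condition (1 in 1,000), a test that is 99%
# sensitive with a 5% false-positive rate. Intuition often says a positive
# test means ~95% chance of having the condition; Bayes' rule says otherwise.

prior = Fraction(1, 1000)           # P(condition)
sensitivity = Fraction(99, 100)     # P(positive | condition)
false_positive = Fraction(5, 100)   # P(positive | no condition)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive   # P(condition | positive)

print(float(posterior))  # ~0.019, i.e. roughly a 2% chance
```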
In the field that I study, ethical judgment, we've ported over some of those same techniques, and we use this errors-and-biases approach to study moral judgments. We sometimes say, "Well, under these conditions it appears people are making errors in the moral domain." It's much trickier that way, though, because the normative account of ethical judgment is much less certain than the normative accounts of probabilistic judgment; that is, people still argue about it. But we can still say, "Well, look, we have thousands of years of philosophers who have developed normative theories of ethics, so at least in some cases we can agree that a judgment is an egregious error, right?" And so many of my colleagues and I have looked at human moral judgments, compared them to normative theories, and concluded, look, your ethical judgment misfired.
As society has increased in complexity, though, and as some of these technologies have been developed, even those normative theories are failing us. So it's unclear to me what the right answer is to whether the impersonal nature of drone attacks, or of robots in war, is immoral. I'm just not quite sure whether simply removing agency makes it a more egregious violation. I want to work this out, but what this means is that I don't have a proper normative standard against which to compare human judgment. So I think the implication is that, as we proceed in studying human intuition, and as the background of these intuitions changes because society and the complexity of technology are changing, we have to act more and more in concert with people who are thinking deeply about what the right answers are. Then we can start comparing our intuitions and the judgments they generate. It's essentially, I think, a call to work out the normative side a bit more before we start willy-nilly accusing people of committing egregious errors in judgment. I don't think we quite know yet what an error in judgment is for many of these things.
KAHNEMAN: I think we do know something about errors. Take framing effects: you don't know which framing is correct, but here are two things that by some logical argument should evoke the same response, and they evoke different responses. And that, actually, I think, is the more common way in which you come to think there is a problem: the problem is that we have intuitions and they're not consistent. That is, you have three intuitions. You have intuition A, you have intuition non-A, and then you have the intuition that they should agree. And that is really the standard problem.
PIZARRO: I absolutely agree. In fact, inconsistency, I think, is one way to determine whether a moral judgment is an error. So one thing you can do is show people both conditions of the experiment; if they get embarrassed and admit that they made an error, fine. But not all moral judgment studies are like this at all. Take, for instance, omission versus commission; some researchers call this the omission bias. When you show people both conditions and say, "Look, isn't it silly that you made this judgment that killing is worse than letting die? Don't you agree that this is an error?", they don't have the framing response. They say, "No. Of course not. I didn't make an error." In fact, they jump up and down and say, "I will make the same judgment over and over again." Those are the conditions where I think we're having a little bit of a problem.
MULLAINATHAN: To build on what Danny is saying, and to go back to the first part of what you said, there's a book by Everett Rogers, I don't know if you've ever read it, Diffusion of Innovations; it's a good book. He's got whole chapters in there on what he calls congruence. He basically reviews a huge literature on how innovations are adopted or not, and it's full of interesting stories.
One story is very relevant to what you're saying. It's about Indian farmers adopting tractors, tractors as a substitute for the bullocks that would pull. It's interesting, because people who have studied this noted that after the farmers had adopted tractors, every night they would go out and put a blanket over the tractor. Great for bullocks; actually a little less than good for tractors. I mean, at best, neutral. That is a theme that appears again and again in Diffusion of Innovations: people adopt or fail to adopt technologies, and use them, in ways that are congruent with the intuitions they've developed and the prior technologies they'd had.
There's also, I think, a way out of it, which is related to this notion that you have multiple intuitions. One way you can get adoption or use of a technology is to find an intuition with which this technology is congruent. Oftentimes when you see this mis-adoption, it's not that the technology is incompatible with all the intuitions you have. It's incompatible with one intuition, but it can easily be reframed: think of Facebook as X, or don't think of the Google guy who gives you, you know, that algorithm as a creepy dude, think of it as … and then all of a sudden it becomes totally understandable, and people use it quite well.
So I wonder to what extent moral and social intuitions are also fertile enough, and different enough, and inconsistent enough that the inconsistency is now a good thing, because, as a framer, you have more things to choose from.
PIZARRO: Right. I think that's very insightful. In fact, that's a way out of this: okay, let's get another intuition going. As Josh Knobe and I were talking about earlier, one of the features of some of these reactions is that it's not just that Google knows information about you; it's the social delivery. It seems to match the features of other forms of social delivery, as when a friend informs you, "Hey, by the way, you know you have an appointment, right?" It's those features that seem to get the intuition going. It's not, for instance, if my car is smart, and it measures the miles I drive, and then it says, "Hey, it looks like when you're driving in the city you use this much…" That's great. I don't feel that violated. But that's not a social issue. So maybe one way we can work on some of these problems is by quieting the social cues that trigger the inconsistent intuition.
MULLAINATHAN: Or, just to build on it, an example could be: imagine that the application is explicitly bad early on, so that you can see it learning, in a way. That's a situation where it might be more palatable to you, because you're like, "Oh, now I can see the process by which it's learning, and I'm actually growing close to it." This is just to give an example of how you might want to …
PIZARRO: In fact, if you could involve the individual as an agent… You know the elevator buttons that are placebo buttons? If you could make it so that I just had to remember to touch this button so that Google Now really knows, even though it actually does nothing, I would feel a bit more involved, and I would feel like they didn't just surprise me with this, like someone creeping in through your window.
JACQUET: But it seems like the McClure studies in Science go directly against some of the things you're saying: people were willing to accept unfair offers in the ultimatum game from machines, but they rejected them from humans, even though the human offers were also machine generated. It was just a face that changed the way they made the decision. So it might be that humans are willing to accept computers as competent, or even be compassionate towards them as objects, but they're not willing to grant that they have actual moral standing.
PIZARRO: Maybe. That's a good point.
KNOBE: One of the really interesting points that David was making is that it somehow has to do with the packaging of what you're doing. There was an interesting follow-up to the study Jennifer was mentioning, in which it's exactly the same study except that instead of saying it's just a computer, they said it's a computer with a special artificial intelligence program. And in that condition people would lose money to punish the computer. So when the computer cheated them, they were actually willing to sacrifice their own money to take money away from a computer program!
PIZARRO: So it's a sort of ramping up of the social cues. I think maybe one mistake we make is to say, well, let's make our robots more social, right? When in reality, a terminal might be exactly what we want.
DENNETT: Many, many years ago Omar Khayyam Moore, at Yale—an early pioneer in computer-aided instruction—really went on the warpath against phony anthropomorphization of programs. In those days, you typed your name and then the program said, "Well, Johnny…", and he said, get rid of all of that, because you're squandering one of the great things about computers, namely that you're in the privacy of your own room and there isn't anybody looking over your shoulder. There isn't any other agent in the picture.
Now, it seems to me that there are more positive steps, like the one recommended by O.K. Moore, that might be considered. When people adopt the intentional stance they invariably over-interpret; they are always charitable; they always read in more understanding than is there. I mean, that's just clear. But it might be good if we deliberately built in self-exposing weaknesses and foibles, so that when you start using a new app or something, you are taken through some things that it screws up on—it can't do this, it can't do that, it can't do that—so that you sort of unmask the thing before people start over-interpreting it. It might be a good idea.