DEONTOLOGY OR TRUSTWORTHINESS?
DANIEL KAHNEMAN: Molly, you started your career as a neuroscientist, and you still are. Yet, much of the work that you do now is about moral judgment. What journey got you there?
MOLLY CROCKETT: I've always been interested in how we make decisions. In particular, why is it that the same person will sometimes make a decision that follows one set of principles or rules, and other times make a wildly different decision? These intra-individual variations in decision making have always fascinated me, specifically in the moral domain, but also in other kinds of decision making, more broadly.
I got interested in brain chemistry because this seemed to be a neural implementation or solution for how a person could be so different in their disposition across time, because we know brain chemistry is sensitive to aspects of the environment. I picked that methodology as a tool with which to study why our decisions can shift so much, even within the same person; morality is one clear demonstration of how this happens.
KAHNEMAN: Are you already doing that research, connecting moral judgment to chemistry?
CROCKETT: Yes. One of the first entry points into the moral psychology literature during my PhD was a study where we gave people different kinds of psychoactive drugs. We gave people an antidepressant drug that affected their serotonin, or an ADHD drug that affected their noradrenaline, and then we looked at how these drugs affected the way people made moral judgments. In that literature, you can compare two different schools of moral thought for how people ought to make moral decisions.
On one hand, you have consequentialist theories, which advocate that the morally right action is the one that maximizes the greatest good for the greatest number of people. On the other hand, you have deontological theories, which argue that there's a set of absolute moral rules that you cannot break, even if there are cases in which adhering to those rules results in worse outcomes for people. These are two traditions that are at odds with one another—a very longstanding debate in philosophy.
What we found was that if you enhance people's serotonin function, it makes them more deontological in their judgments. We had some ideas for why this might be the case, to do with serotonin and aversive processing—the way we deal with losses. That was the starting point for using both neurobiological and psychological tools for probing why our moral judgments and behaviors can vary so much, even within the same person.
KAHNEMAN: When you use the word deontological, do you refer to how people behave, or how it's expressed in their reaction to others, or do you ask them how they think about it? Those can be very different.
CROCKETT: Absolutely. One thing that has long fascinated me is why there is so often a distinction between what we think is right or wrong, how we'll judge another person's action, and what we do ourselves. Deontology and consequentialism are normative theories: prescriptions for what we ought to do. There's a long tradition in moral psychology of trying to understand how human judgments about what we think is right or wrong map onto these ethical theories that philosophers have painstakingly worked out.
KAHNEMAN: What is the psychological reality of these philosophical dimensions? I understand the idea of deontology, but can you classify people? Would the classification of people apply both to what they say, what they do, and what they feel, or is there a dissociation? I might have the idea that I'm quite tolerant of certain actions, and at the same time if you checked me, I'd be disgusted by them. Is it how people feel or what they say that counts?
CROCKETT: That is the crux of all of this research. When people are making judgments, much of the time they're doing a reasoned-out calculation or evaluation of consequences; we can think of them as using System 2 thinking. Judgments are therefore more likely to reflect a set of ideals or principles that people feel they ought to, or in an ideal world would like to, conform to. Of course, when people are making actual decisions that have real consequences, and there are strong incentives to behave in an unethical way, we get overwhelmed by these different sources of value and can often behave in a way that's inconsistent with our principles.
KAHNEMAN: I was asking about something that's neither of those. I was asking about indignation as an emotional response. I can think of many behaviors that I condone in the sense that I don't have the grounds to oppose them, and yet I don't like them. Does this fit into your system?
CROCKETT: Yeah. Indignation, or a retaliative desire to punish wrongdoing, is the product of a much less deliberative system. We have some data where we gave people the opportunity to punish by inflicting a financial cost on someone who treated them unfairly or fairly, and we varied whether the person who was going to get punished would find out if they'd been punished or not. We were able to do this by making the size of the overall pie ambiguous.
If people are punishing others in order to enforce a social norm and teach a lesson—I'm punishing you because I think you've done something wrong and I don't want you to do this again—then people should only invest in punishment if the person who's being punished finds out they've been punished. If punishment is rational and forward‑looking in this way, it's not worth it to punish when the person isn't going to find out they've been punished.
This is not what we find at all. In fact, people punish almost the same amount, even when the target of punishment will never find out that they've been punished. This suggests that punishment, revenge, and the desire for retaliation are a knee-jerk reaction, a retrospective response to the harm, rather than a goal‑directed, deliberative desire to promote the greater good.
KAHNEMAN: I agree completely. How did that map onto deontology? You don't need to think to get angry with somebody treating a stranger badly; that just happens to you. But to be deontological is something else—it's a thought. Are those two highly correlated in practice?
CROCKETT: Our work suggests that indignation is a knee-jerk, rule-based reaction that doesn't consider consequences, which implies it's intimately tied to deontological intuitions. There's been some interesting recent work, from my lab and Dave Rand's lab at Yale, suggesting that having an uncalculated deontological response to moral violations signals to other people that you yourself are less likely to violate those moral rules. People find you more trustworthy if you are a moral stickler, if you say it's absolutely wrong to harm one person even if it will save many others.
We've done experiments where we give people the option to play a cooperative game with someone who endorses deontological morality, who says there are some rules that you just can't break even if they have good consequences. We compare that to someone who's consequentialist, who says that there are certain circumstances in which it is okay to harm one person if that will have better consequences. The average person would much rather interact with and trust a person who advocates sticking to moral rules. This is interesting because it suggests that, in addition to the cognitive efficiency you get by having a heuristic for morality, it can also give you social benefits.
KAHNEMAN: The benefit that people get from taking a deontological position is that they look more trustworthy. Let's look at the other side of this. If I take a consequentialist position, it means that you can't trust me because, under some circumstances, I might decide to break the rule in my interaction with you. I was puzzled when I was looking at this. What is the essence of what is going on here? Is it deontology or trustworthiness? It doesn't seem to be the same to say we are wired to like people who take a deontological position, or we are wired to like people who are trustworthy. Which of these two is it?
CROCKETT: What the work suggests is that we infer how trustworthy someone is going to be by observing the kinds of judgments and decisions that they make. If I'm interacting with you, I can't get inside your head. I don't know what your utility function looks like. But I can infer what that utility function is by the things that you say and do.
This is one of the most important things that we do as humans. I've become increasingly interested in how we build mental models of other people's preferences and beliefs and how we make inferences about what those are, based on observables. We infer how trustworthy someone is going to be based on their condemnation of wrongdoing and their advocating a hard-and-fast morality over one that's more flexible.
KAHNEMAN: What's doing the work psychologically here is trustworthiness, not deontology.
CROCKETT: Correct.
KAHNEMAN: I was struck by something that I'm struck by in general with respect to experimental philosophy, which is that it takes concepts from philosophy and uses them as psychological concepts. There seems to be some tension there.
CROCKETT: There is. It doesn't quite match up, does it?
KAHNEMAN: I thought it didn't. I was going to press you on that because in your paper on deontological and consequentialist responses, you have an evolutionary theory of how that comes about. That evolutionary theory struck me as odd because I found it hard to believe that people evolved to take moral positions of one kind or another. I find it much easier to imagine that people evolved to look trustworthy, whether they are or not. But the way you describe it, it seems as if you have an evolutionary theory of deontological attitudes. Which is it?
CROCKETT: What we have is a potential evolutionary story for why people have intuitions that have the flavor of deontological ethics, but they don't map on perfectly. We get into that in a bit of detail in the paper. One thing that has plagued moral psychology over the years is that there are these two prevailing traditions in philosophy—consequentialism and deontological ethics—but these don't perfectly map onto human moral judgments. The average person's moral judgment looks more like an amalgam of different kinds of ethical theories. You take the best bits of consequentialism and the best bits of deontological theories.
My student Jim Everett, who's absolutely brilliant and studied philosophy and psychology as an undergraduate at Oxford, realized that there is another set of ethical theories called contractualist theories, which have gotten a lot less attention in the literature. Contractualist theories highlight the fact that, as humans, we care about fairness and our responsibilities and duties to one another. What we think is right is what we would all agree, from a Rawlsian original position, behind a veil of ignorance, would be the right thing to do to uphold our social relationships.
The pattern in the overall results of our studies is that what people seem to find most trustworthy are judgments that conform to a contractualist morality, where you respect people's rights and duties, you don't use people as a means to an end, and you do what people would want you to do. On the menu of ethical theories, contractualism seems to be the one that matches up best with our evolved social cognition. I'm sure that's not going to be perfect either. I agree that one might question how useful it is to try to shoehorn human psychology into texts written hundreds of years ago.
That's certainly not the goal of my research program, but it can be a useful source of ideas for interesting psychological avenues to pursue. Many of these philosophers were quite astute about human emotion, particularly social emotions.
KAHNEMAN: If there were selective pressures, they were on emotion and action and not on philosophical positions.
CROCKETT: Absolutely.
KAHNEMAN: There is something that fascinates me about the project in general, which is to take the concepts from philosophy and use them, in that case possibly to force them, onto a psychological description. In consequentialism there are contexts in which saying, "I can understand why people under some circumstances would do this or that," would not look untrustworthy; it would look like empathy. If you have sympathy for the sinner and you understand the sinner, you're going to sound consequentialist. But you could sound trustworthy if you're in clerical garb or something. Tell me how you deal with that.
CROCKETT: One current project is to try to understand how different social relationships make a cost-benefit calculation more or less desirable in a partner. One can imagine that you don't want a spouse or a best friend who's constantly calculating what they can get away with. We value loyalty in our friends and family, and loyalty is something that's often at odds with consequentialism; however, in a leader—a president, a general, a surgeon—we may very well strongly prefer a consequentialist perspective.
That distinction comes from the fact that close social relationships are preferential: we want our partner or our best friend to put our needs and welfare above other people's. When it comes to a leader who's making decisions for a large population, we want to be treated equally to everyone else. If you're just an average citizen, you probably wouldn't expect the president to treat you differently than anyone else. Something we're exploring is whether you prefer a consequentialist or a more rule-based morality in close family members versus more impartial relationships.
KAHNEMAN: Have you explored the correlation between the statements that people use in that context and the emotional substrate? Have you done studies in which you had both sets of variables? The trolley problems are standard ways in which you get emotional and moral decisions, and yet I'm not sure that the trolley problem is the only approach to measuring moral emotions.
CROCKETT: I totally agree. That's one reason why my lab is moving away from hypothetical trolley problems and more towards having people make trade‑offs between benefits for themselves and costs, like physical pain, for other people. With respect to your question about emotion, there is work by Dave Pizarro and others suggesting that taking a consequentialist perspective in these sacrificial dilemmas is associated with a lack of social emotions. Psychopaths, for example, tend to be more consequentialist on these sacrificial dilemmas. That's quite interesting because it suggests that there's a strong link between a deontological intuition and socially valued emotional responses, like being averse to harming others.
KAHNEMAN: This may not be entirely clear to the people watching this, so let's unpack the trolley problems, where consequentialism doesn't quite do it. It's a very special kind of problem. Could you describe them?
CROCKETT: Philosophers, and increasingly psychologists, have done a lot with the so‑called trolley dilemma, which involves a trolley headed down a track towards five workers who are going to die if you do nothing. In one variant of the trolley problem, you can flip a switch so the train will go onto a different track where there's one worker instead of five. Is it morally appropriate to flip the switch to kill one person instead of five? Most people say yes, you should do this; it's better to save five lives than one.
If you change the dilemma slightly so that you're now standing on a footbridge over the tracks and there's another person standing on the footbridge, you realize that you can push this person off the footbridge onto the track so their body will stop the trolley before it hits the five workers—again, you're killing one to save five. But now, most people say this is not acceptable. This distinction is interesting to philosophers because consequentialist theories would say in both cases you should kill one to save the five, but deontological theories say you shouldn't.
KAHNEMAN: The preferences of most people are consequentialist in the switch variant.
CROCKETT: Yes.
KAHNEMAN: It turns out that we are deontological when we have a very strong emotion associated with the action, like pushing the man off the bridge. Can we describe people as either consequentialist or deontological? In that series of problems, what seems to be happening is that you have a powerful emotional response to the idea of pushing somebody off the bridge. How does that translate into a philosophical position?
CROCKETT: This is something that folks like Josh Greene have written extensively on. It gets back to the heart of what we were talking about earlier: Why is it that the same individual will have a preference reversal—a consequentialist perspective when it comes to the switch case, and a deontological perspective when it comes to the footbridge case? I agree that emotions are key. We've gotten even more sophisticated in the way we describe the effect of emotions on these decisions in recent years.
We've imported some ideas from machine learning or reinforcement learning to suggest that a consequentialist judgment looks a lot like what's called a model-based algorithm in machine learning, where you construct a decision tree and you evaluate what the best course of action is based on this mental representation of the different branches of the tree. This can be contrasted with a model-free algorithm, or one in which you store the values in the actions themselves. In the case of the footbridge, pushing a person is an aversive action. It's been punished in the past, maybe when you were a little kid. You've probably watched television programs where pushing resulted in a lot of distress, or maybe even fights, or other bad consequences.
The idea is that you could have both of these kinds of systems in the brain. Indeed, it has been shown that model-based and model-free algorithms do map onto distinct brain systems. Of course, behavior is going to be the sum output of the votes, if you will, of those different systems. In the case of the footbridge, because you have this action—pushing a person—that's aversive, based on our reinforcement history, this results in more votes against pushing that come from the model‑free system.
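To make the model-based versus model-free contrast concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than something from any study: the outcome numbers, the cached action values, and the weight given to each system are made up, and the point is only to show how a cached aversion to the act of pushing can outvote a tree-based evaluation that favors saving five.

```python
# Minimal sketch: two ways of valuing "push" vs. "dont_push" in a toy
# footbridge-style choice. All numbers are illustrative assumptions.

# Model-based ("goal-directed"): build the decision tree and evaluate
# outcomes. Utility here is simply lives saved minus lives lost.
outcomes = {
    "push":      {"lives_saved": 5, "lives_lost": 1},
    "dont_push": {"lives_saved": 0, "lives_lost": 5},
}

def model_based_value(action):
    o = outcomes[action]
    return o["lives_saved"] - o["lives_lost"]

# Model-free ("habitual"): no outcome tree, just values cached in the
# actions themselves by past reinforcement (pushing people has been
# punished before, so it carries a strongly negative cached value).
cached_value = {"push": -8.0, "dont_push": 0.0}

def model_free_value(action):
    return cached_value[action]

# Behavior as the summed "votes" of the two systems; the weight w on the
# model-based system is another assumed parameter.
def combined_value(action, w=0.4):
    return w * model_based_value(action) + (1 - w) * model_free_value(action)

for action in outcomes:
    print(action,
          "model-based:", model_based_value(action),
          "model-free:", model_free_value(action),
          "combined:", round(combined_value(action), 2))

preferred = max(outcomes, key=combined_value)
print("preferred action:", preferred)
# The model-based system alone prefers "push" (the consequentialist answer);
# once the habit system's aversion to pushing gets enough votes, the
# combined preference flips to "dont_push", the footbridge pattern.
```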
KAHNEMAN: Model-based and model‑free are terms that come from somewhere else. Most psychological terms that I've been working with come from introspection in one way or another, or they are from natural language. There is a lot of emotion and a lot of phenomenology in them.
The terms that you're using are borrowed from different places. Model-free and model-based don't correspond to any immediate intuition that we have. You would gloss one as a focus on consequences and the other as a focus on an immediate emotional reaction, yet the word emotion, in a way, doesn't play a role here, and that is interesting. Do you think this is something that's happening generally in psychology these days, that the difference between us is a generational difference? Or is it your field versus my history? Is it something special to your field, or general to your generation?
CROCKETT: It may be more of a generational thing. One of the most exciting developments in psychology in recent years is an increased focus on computational methods and integrating knowledge from reinforcement learning, and perceptual decision making, and basic reward-based decision making into these increasingly complex social and moral problems. That approach buys us an increased specificity in how we describe the latent cognitive processes that are driving decisions.
I see the work you did on prospect theory as one of the foundational examples of this. I wrote a paper recently illustrating how much more predictive power you get when you have a mathematical or quantitative model, compared with a descriptive theoretical prediction. We know that bad is stronger than good; that's one of the most well-established psychological phenomena. What loss aversion and prospect theory buy us is the ability to make specific predictions, not just about whether someone will take a gamble or not, but about the probability that they will.
The approach that's becoming more and more popular in psychology is to be able to write down equations or formulae that can make generative predictions about how people will make decisions, given a specific set of inputs.
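As a concrete illustration of the kind of generative prediction a quantitative model allows, here is a minimal sketch of the prospect theory value function, using the parameter estimates from Tversky and Kahneman (1992). The logistic choice rule and its temperature are added assumptions for turning values into a predicted probability of accepting a gamble, and the sketch omits prospect theory's probability weighting for simplicity.

```python
import math

# Prospect theory value function (Tversky & Kahneman, 1992 estimates):
# outcomes are valued as gains or losses relative to a reference point,
# and losses loom larger than gains (lambda > 1).
ALPHA = 0.88    # curvature for gains
BETA = 0.88     # curvature for losses
LAMBDA = 2.25   # loss-aversion coefficient

def value(x):
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

def p_accept(gain, loss, temperature=20.0):
    """Predicted probability of accepting a 50/50 gamble over a sure $0.
    The logistic rule and the temperature scaling are assumptions layered
    on top of the value function, not part of prospect theory itself."""
    v_gamble = 0.5 * value(gain) + 0.5 * value(-loss)
    v_reject = value(0.0)
    return 1.0 / (1.0 + math.exp(-(v_gamble - v_reject) / temperature))

# "Bad is stronger than good" becomes a number, not just a direction:
print(round(p_accept(gain=100, loss=100), 2))  # well below 0.5: mostly rejected
print(round(p_accept(gain=250, loss=100), 2))  # near indifference; with these
                                               # parameters the gain must be
                                               # roughly 2.5 times the loss
```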
KAHNEMAN: We shouldn't stay on that topic, but I can't help myself. In prospect theory, the key terms are gains and losses. Gains and losses correspond to something that people experience. Prospect theory, in that sense, has formulas in it, but it comes in large part from introspection. There are gains, there are losses, and losses are larger than gains. What I see in your work, both in the model‑based and model-free language and the deontological, is that most of the phenomena that you look at involve both. There is no direct mapping from the concept to a particular behavior that is as general as gains and losses. You are telling me that this is what psychology is looking like and presumably will look like in the next decade, where the concepts come from computation and from bioscience, and we'll have to adjust our psychology to these concepts.
CROCKETT: I think so, but you've hit on an important point. Now that you've raised it, I'm wishing that instead of using the terms model-based and model-free, I had used goal-directed and habitual. These are other labels that mean essentially the same thing; one pair comes from the animal literature and the other from machine learning. With goal-directed and habitual, we have psychological, introspective labels that we can attach to those algorithms. It's a good point. For us to get a handle on and appreciate what these algorithms mean for our psychology, it helps to have a language that gels with our introspection.
KAHNEMAN: We can agree on that. I personally would much prefer habitual and goal-directed. Then I have the sense that I know what you're talking about. There is a study that you published in PNAS that I had the good fortune of being an editor on, so I saw many reviews of it, in which you showed a very counterintuitive finding. Can you describe what you found?
CROCKETT: Sure. We brought people into the lab, and we had a rather simple problem for them to contemplate. We told them we were going to hook them up to an electric stimulator device that would deliver some electric shocks, which are physically harmless but unpleasant enough that people will pay money to avoid them. On one hand, we asked people, "How much would you pay to avoid being shocked yourself?" On the other, we asked, "How much money would you pay to avoid delivering pain to another person who is a stranger—you've never met them, you're never going to meet them, they're sitting in a different room?" We established a procedure to convince each person that there was another participant there, but they didn't know anything about them—whether they're a man or a woman, how old they are, et cetera.
Coming from the tradition of many studies in psychology and behavioral economics where people are asked to share money with another person, people are somewhat altruistic in that they will give some money to another person, but they keep most of the money for themselves. The prediction going into this was that, even though people care about other people's pain a little bit, they should care less about that pain than their own pain and therefore pay less to prevent pain to another than themselves. What we found was the opposite. Most people were willing to give up more money to prevent pain to a stranger than pain to themselves.
This was surprising in one way, but unsurprising if you think about what it means to profit from shocking somebody else: that's an unambiguously immoral action, whereas getting money for your own pain is morally neutral. You can imagine a psychological cost to inflicting pain on another person for money, a cost that's present in that case but not when you're shocking yourself, and that cost could outweigh any financial gain.
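One way to see how such a moral cost could produce the pattern in the data is a toy trade-off model. To be clear, this is an illustrative sketch under assumed parameter values, not the model estimated in the paper: the pain cost, the weight on a stranger's pain, and the per-shock moral cost are all made-up numbers.

```python
# Toy sketch of the money-for-shocks trade-off. A decider weighs money
# against pain delivered either to themselves or to a stranger, plus a
# moral cost that applies only when they profit from someone else's pain.
# All parameter values are illustrative assumptions, not estimates.

def utility(money, shocks, target,
            pain_cost=1.0, other_weight=0.6, moral_cost=0.8):
    # other_weight < 1: the stranger's pain is weighted less than one's own,
    # as the standard "people are only partly altruistic" prediction says.
    weight = 1.0 if target == "self" else other_weight
    pain_term = weight * pain_cost * shocks
    # Per-shock moral cost of being responsible for another person's pain;
    # zero when the shocks land on yourself.
    moral_term = moral_cost * shocks if target == "other" else 0.0
    return money - pain_term - moral_term

# The most money a decider would give up to avoid 10 shocks to each target
# is the total disutility of those shocks.
for target in ("self", "other"):
    willingness_to_pay = -utility(0.0, 10, target)
    print(target, willingness_to_pay)
# Prints 10.0 for "self" and 14.0 for "other": even though the stranger's
# pain per se is weighted less, adding the moral cost makes people willing
# to pay more to spare the stranger, reproducing the hyperaltruistic pattern.
```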
KAHNEMAN: The key in what you're saying here is for money. I don't think that was in your title. Do you remember your title?
CROCKETT: Yes. And as it so often goes in science, about a year and a half has passed since that paper was published, and I deeply regret the title. The title is "Harm to others outweighs harm to self in moral decision making."
KAHNEMAN: It raises an immediate question. I don't think I raised it as an editor because my task was just to look at what the reviews said. What would your prediction be if you were to allocate ten electric shocks between you and somebody else? Money isn't involved. This is the equivalent of the ultimatum game. Then you could play the ultimatum game or you could play the dictator game. Your situation was basically a dictator game. What is your intuition? Would you get the same thing?
CROCKETT: I have this data, actually. We've done this, though it's not yet published. When it comes to exchanging pain in a dictator game, on average people are equal—50-50 split. One reason why we see this is that there's a strong norm for equal allocation.
KAHNEMAN: That's an ultimatum game or a dictator game?
CROCKETT: It's a dictator game.
KAHNEMAN: You see more fairness in allocating pain than in allocating money, because in allocating money people tend to be selfish.
CROCKETT: Yes, we've seen that. There's another study that I collaborated on with Ray Dolan and Giles Story in which we had people trade off pain for pain—x shocks for me versus y shocks for you—and the same for money. People are more altruistic or fair when it comes to sharing pain than money. When it comes to sharing pain, people are not significantly different from 50-50 split. If it does err on one side or the other, it errs on the side of people taking more pain for themselves. This relates to a sense of responsibility as a dictator, where you have control over and are morally responsible for the outcome.
The point you make is important in that almost all of the legwork in our original studies came from the fact that people don't like getting money from harming another person. The moral transgression corrupts the value of the money. We have some brain imaging evidence to support this and some additional behavioral experiments.
That reflects a very old intuition. The term "filthy lucre," meaning money obtained by dishonest means, goes back to the New Testament. We morally condemn others—individuals, corporations, organizations—who take money from morally tainted sources. This is interesting because it suggests that our goal-directed, if you will, representations of consequences and the moral status of actions can reshape the values that we use to make the decision.
KAHNEMAN: How would it work if I had the choice between allocating shocks to my wife or some other woman? Don't you think that people would be much more selfish in that case?
CROCKETT: Yeah. That's an interesting one. I could see two competing predictions. On one hand, the intuition is that we will protect our close friends and family more than a stranger. On the other hand, there is a sense of forgiveness in our close relationships that may not extend as strongly to strangers. That would be an interesting one to test.
KAHNEMAN: My guess is that there would be a very big difference. You're absolutely right, just by introspection, that it must be the case that when I am allocating shocks between you and me, there is a moral cost to allocating shocks to you. That seems to go away when I choose between a close relative and somebody else; there is no such cost. There, I would expect a lot more selfishness simply because the moral constraint isn't there. Last question: what would you expect incentives to do if you adjusted the intensity of the shock? You're dealing with a mild pain. How about severe pain?
CROCKETT: A couple of things on that. It may be that the moral costs or benefits of harming versus helping don't track the consequences very closely. There may be a bright line between harming and not harming that carries a moral cost, but once you cross that line, it doesn't matter as much how much pain you're inflicting. That leads to the prediction that you would get more selfish behavior with larger pains: if there's a fixed moral benefit from doing the right thing, at some point it will be outweighed. That's very plausible.
The other issue to consider is actions versus omissions. In our original study, we looked at the case where you can pay money to decrease the pain, but also the case where you can gain money by increasing the pain. We find this altruistic behavior in both cases, even though, as you would expect from loss aversion, people require more compensation to increase pain than they are willing to pay to decrease it by the same amount. Very large harms may amplify the distinction between actions and omissions, particularly in these moral decisions.
If you do a thought experiment where I offer you $10,000 to break your leg or to break someone else's leg, you would probably require more compensation to break someone else's leg than your own, but you would be willing to pay less money to save someone else's leg than to save your own. This distinction between taking a costly action to reduce harm and gaining a profit by causing harm might be particularly strong for these very large harms.
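A quick sketch of the bright-line idea from a moment ago: if the moral cost of harming another person is roughly fixed once the line is crossed, while the financial incentive scales with the size of the harm, selfish choices should start appearing once the incentive is large enough. The payoff rates and cost values below are assumptions chosen only to show the crossover.

```python
# Compare two assumed moral-cost structures: a fixed cost for crossing the
# "harm another person" line at all, versus a cost that scales with the
# size of the harm. The payoff is assumed to grow with the harm's size.

def decide(harm_size, payoff_per_unit=2.0, fixed_cost=50.0,
           scaled_cost_per_unit=3.0, bright_line=True):
    payoff = payoff_per_unit * harm_size
    moral_cost = fixed_cost if bright_line else scaled_cost_per_unit * harm_size
    return "harm" if payoff > moral_cost else "refrain"

for harm in (5, 20, 40, 100):
    print(harm,
          "bright line:", decide(harm, bright_line=True),
          "| scaled cost:", decide(harm, bright_line=False))
# With a fixed (bright-line) cost, large enough harms flip the choice to
# "harm"; with a cost that scales with the harm, it never flips under these
# numbers. That is the sense in which a bright-line moral cost predicts more
# selfishness as the stakes grow.
```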
KAHNEMAN: Thank you, Molly. This has been quite an education. You're talking about a problem that I've been interested in for many years, in a language that in a way is quite foreign to me. That seems to be the modern language. Thank you.
CROCKETT: Thank you.