THE COST OF COOPERATING
I'm most interested in understanding cooperation, that is, why people are willing to pay costs of their own in order to do things that benefit other people.
The way that I think about this is at two different levels. One is an institutional level. How can you arrange interactions in a way that makes people inclined to cooperate? Most of that boils down to “how do you make it in people's long-run self-interest to be cooperative?” The other part is trying to understand at a more mechanistic or psychological/cognitive level what's going on inside people's heads while they're making cooperation decisions; in particular, in situations where there's not any self-interested motive to cooperate, lots of people still cooperate. I want to understand how exactly that happens.
If you think about the puzzle of cooperation being “why should I incur a personal cost of time or money or effort in order to do something that's going to benefit other people and not me?” the general answer is that if you can create future consequences for present behavior, that can create an incentive to cooperate. Cooperation requires me to incur some costs now, but if I'm cooperating with someone who I'll interact with again, it's worth it for me to pay the cost of cooperating now in order to get the benefit of them cooperating with me in the future, as long as there's a large enough likelihood that we'll interact again.
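To make that logic concrete, here is the standard back-of-the-envelope condition from the direct-reciprocity literature, written in notation of my own choosing (c is the cost of cooperating, b the benefit to the partner, w the chance the interaction continues); it is a textbook sketch, not a calculation from any particular study discussed here.

```latex
% Textbook direct-reciprocity condition (notation is an assumption, not from the text):
% c = cost of cooperating, b = benefit received, w = probability of another round.
\[
  V_{\text{cooperate}} = \frac{b - c}{1 - w}, \qquad V_{\text{defect}} = b,
\]
% so cooperating with a reciprocating partner pays off whenever
\[
  \frac{b - c}{1 - w} > b \;\Longleftrightarrow\; w > \frac{c}{b},
\]
% i.e., when the likelihood of interacting again is large enough relative to the
% cost-benefit ratio of cooperation.
```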
Even if it's with someone that I'm not going to interact with again, if other people are observing that interaction, then it affects my reputation. It can be worth paying the cost of cooperating in order to earn a good reputation, and to attract new interaction partners.
There's a lot of evidence to show that this works. There are game theory models and computer simulations showing that if you build in these kinds of future consequences, evolution can lead to cooperative agents dominating populations, and learning and strategic reasoning can lead people to cooperate. There are also lots of behavioral experiments supporting this. These are experiments where you bring people into the lab, give them money, and have them play economic cooperation games in which they choose whether to keep the money for themselves or contribute it to a group project that benefits other people. If you create future consequences in any of these various ways, it makes people more inclined to cooperate. Typically, it leads to cooperation paying off, and being the best-performing strategy.
In these situations, it's not altruistic to be cooperative because the interactions are designed in a way that makes cooperating pay off. For example, we have a paper showing that in the context of repeated interactions, there's no relationship between how altruistic people are and how much they cooperate. Basically, everybody cooperates, even the selfish people. In certain situations, selfish people can even wind up cooperating more because they're better at identifying that that's what is going to pay off.
This general class of solutions to the cooperation problem boils down to creating future consequences, and therefore creating a self-interested motivation in the long run to be cooperative. Strategic cooperation is extremely important; it explains a lot of real-world cooperation. From an institution design perspective, it's important for people to be thinking about how you set up the rules of interaction—interaction structures and incentive structures—in a way that makes working for the greater good a good strategy.
At the same time that this strategic cooperation is important, it's also clearly the case that people often cooperate even when there's not a self-interested motive to do so. That willingness to help strangers (or to not exploit them) is a core piece of well-functioning societies. It makes society much more efficient when you don't constantly have to be watching your back, afraid that people are going to take advantage of you. If you can generally trust that other people are going to do the right thing and you're going to do the right thing, it makes life much more socially efficient.
Strategic incentives can motivate people to cooperate, but people also keep cooperating even when there are not incentives to do so, at least to some extent. What motivates people to do that? The way behavioral economists and psychologists talk about that is at a proximate psychological level—saying things like, "Well, it feels good to cooperate with other people. You care about others and that's why you're willing to pay costs to help them. You have social preferences."
I believe that. That is clearly true, just from personal introspection, as well as interaction with other people. But I'm interested in a slightly different question, which is where those social preferences come from. Why is it that we care about other people? Why do we have those feelings? Also, at a cognitive level, how is that implemented? Another way of asking this is: Are we predisposed to be selfish? Do we only get ourselves to be cooperative and work for the greater good by exerting self-control and rational deliberation, overriding those selfish impulses? Or are we predisposed towards cooperating, but in these situations where cooperation doesn't actually pay, if we stop and think about it, rationality and deliberation lead us to be selfish by overriding the impulse to be a good person and help other people?
Most people, both in the scientific world and among laypeople, are of the former opinion, which is that we are by default selfish—we have to use rational deliberation to make ourselves do the right thing. I try to think about this question from first principles and ask, what should it be? From the perspective of either evolution or strategic reasoning, which of these two stories makes more sense, and which should we expect to observe?
If you think about it that way, the key question is “where do our intuitive defaults come from?” There's all this work in behavioral economics and psychology on heuristics and biases which suggests that these intuitions are usually rules of thumb for the behavior that typically works well. It makes sense: If you're going to have something as your default, what should you choose as your default? You should choose the thing that works well in general. In any particular situation, you might stop and ask, "Does my rule of thumb fit this specific situation?" If not, then you can override it.
If you think about things from that perspective, what should be the default is determined by what behavior typically works well. In the context of cooperation, because of all of these forces that I was talking about earlier—like repeat interactions and reputation consequences—it's typically in your long-run self-interest to be cooperative, at least if you're living in a society or working in an organization where there are good institutions and norms that prescribe cooperation. Therefore, you should wind up internalizing cooperation as your default response because it typically works well. Then, when you find yourself in interactions where you can get away with being selfish, your first impulse should be to cooperate like normal. But if you stop and think about it, you might realize that it's different here and overrule that impulse to help, instead just looking out for yourself.
In some sense, the birth of all of this work on cooperation is the Cold War. The prisoner's dilemma, which is the workhorse of all of this research, was invented at RAND Corporation as a way of thinking about arms races and other Cold War issues. The key idea of the prisoner's dilemma is that there's this tension between what is unilaterally optimal versus what's jointly optimal. Then there was the question of how you get cooperation to succeed in the prisoner's dilemma. Among evolutionary biologists, Robert Trivers suggested in the early '70s the idea of reciprocal altruism: if you have repeated interactions, that can make cooperation advantageous in a prisoner's dilemma.
Then Bob Axelrod in the '80s had his famous prisoner's dilemma tournament, where different people submitted strategies that played against each other to see which would win. That moved things forward a lot because it introduced the strategy of tit-for-tat, which starts out by cooperating and then imitates the other person's previous action. It's a very simple strategy that is basically just implementing the logic of reciprocity: I'll do to you whatever you did to me before. If you know the other person is playing tit-for-tat, then you maximize your payoff by cooperating so that tit-for-tat will cooperate with you in the next period.
From there, there was the question of how you deal with mistakes. Tit-for-tat is great in that it rewards cooperation with cooperation and punishes defection by defecting. But if errors are possible, say I try to cooperate with you and by mistake I defect, then tit-for-tat retaliates by defecting, and you get locked into a cycle of retribution.
There was a lot of work in the '90s on how to correct mistakes. Martin Nowak was particularly active in that area. He suggested strategies that randomly forgive a defection with some probability, along with various other mechanisms for fixing errors. Then things started branching out: Instead of just pairwise interactions between two people, how do you support cooperation on a larger scale? That's where a lot of this work on reputation, signalling, partner choice, and things like that came in.
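As a rough illustration of the retaliation problem and the forgiveness fix just described, here is a minimal simulation sketch of noisy repeated play. The payoff matrix, error rate, and forgiveness probability are illustrative assumptions, not parameters from any of the work discussed.

```python
import random

# Illustrative prisoner's dilemma payoffs for the row player (assumed values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the partner's previous move."""
    return "C" if not their_history else their_history[-1]

def generous_tit_for_tat(my_history, their_history, forgiveness=0.3):
    """Like tit-for-tat, but forgive a defection with some probability."""
    if not their_history or their_history[-1] == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def play(strategy_a, strategy_b, rounds=200, error=0.05):
    """Repeated play with noise: each intended move flips with probability `error`."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        if random.random() < error:
            a = "D" if a == "C" else "C"
        if random.random() < error:
            b = "D" if b == "C" else "C"
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# With noise, two tit-for-tat players tend to fall into cycles of retaliation;
# generous tit-for-tat players recover and stay closer to mutual cooperation.
print("TFT  vs TFT :", play(tit_for_tat, tit_for_tat))
print("GTFT vs GTFT:", play(generous_tit_for_tat, generous_tit_for_tat))
```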
That more or less brings us up to current times. There's been this shift from interaction structures that make cooperation advantageous, like repeated interactions or reputation, to the thing that I've been spending a lot of time thinking about recently, which is this more cognitive question of how cooperation gets implemented. All of these models that people have been making, going back to Trivers, are standard game theory–type models, where the agent just has a strategy that says, do this; there's no psychology in it, no cognition.
What we're trying to do in this recent work is bring in some of the ideas from behavioral economics and cognitive psychology—the kind of stuff popularized by Danny Kahneman, about the trade-off between intuitive processes that are easy and fast but relatively inflexible, versus deliberation, which is very flexible and sensitive to details but requires time and cognitive effort to employ—and build them into these game theory models for the evolution of cooperation.
The way to think about it is that most of the models say “here is a game, what's a good strategy for it?” In real life, though, you need to figure out how to play a whole range of different games. That is, to the extent that these games characterize different social interactions, there are lots of different interactions that you need to have. We make models where sometimes you are having interactions that are like prisoner's dilemmas, so it's in your self-interest to just defect. Other times, you're having interactions that involve future consequences—there are repeated interactions or reputations at stake—so it can be self-interested to cooperate.
When you're facing different situations, you can have an intuitive default response that's not sensitive to the type of situation, it just says cooperate or defect. Or you can pay a cost to stop and think about which situation you're facing and then tailor your strategy accordingly. In a setup like that, you can ask what the optimal strategies are—where optimal means a Nash equilibrium, in game theory terms, or a strategy that's favored by evolution or learning.
If you're in a world where future consequences are sufficiently common, so that usually it's a good idea to be cooperating, then the optimal strategy has cooperate as its intuitive default, because typically that's going to work out well, but is sometimes willing to pay to deliberate when deliberation isn't too costly: in situations where you have enough time or you're not that tired, you'll stop and deliberate. If you realize that it's a one-shot game, that strategy overrides its cooperative default and switches to defection. We find that this strategy, which is intuitively cooperative but uses deliberation to defect in one-shot situations where it can get away with defecting, is optimal and favored by natural selection.
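Here is a stripped-down sketch of that logic, not the actual published model: an agent has an intuitive default and can pay a randomly drawn cost to deliberate and tailor its play to the game at hand. All payoff values and parameters below are assumptions for illustration.

```python
# A stripped-down sketch of the dual-process logic described above, not the actual
# published model. All payoffs and parameters are illustrative assumptions.
R, P = 4.0, 1.0    # repeated/reputation game: payoff for cooperating vs. defecting
T, S = 3.0, 0.0    # one-shot game: payoff for defecting vs. cooperating
D_MAX = 1.0        # deliberation cost drawn uniformly from [0, D_MAX]
P_REPEATED = 0.8   # how often interactions carry future consequences

def expected_payoff(intuition, threshold):
    """Follow `intuition` ('C' or 'D') by default; if the realized deliberation
    cost is below `threshold`, pay it and play whatever suits the actual game."""
    q = threshold / D_MAX               # probability of deliberating
    avg_cost = threshold / 2.0          # expected cost paid when deliberating
    intuitive = (P_REPEATED * (R if intuition == "C" else P)
                 + (1 - P_REPEATED) * (S if intuition == "C" else T))
    deliberative = P_REPEATED * R + (1 - P_REPEATED) * T  # tailored to each game
    return q * (deliberative - avg_cost) + (1 - q) * intuitive

best = max(
    ((i, t / 100) for i in "CD" for t in range(101)),
    key=lambda s: expected_payoff(*s),
)
print("best strategy:", best, "expected payoff:", round(expected_payoff(*best), 3))
# With future consequences common (P_REPEATED high), the winner is an intuitive
# cooperator that deliberates only when deliberation is cheap, and then defects
# in the one-shot games it detects.
```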
That's a clear theoretical prediction. But I straddle the worlds of building formal models that make predictions and running actual experiments to test those predictions. Over the last five or six years, a big body of empirical evidence has emerged around this question.
We had a paper in 2012 that brought a lot of empirical attention to this question. We had people play one of these cooperation games: There are four people, they each get some money, and then they choose how much to keep for themselves and how much to contribute to a common project that benefits the other people in the group. Then we experimentally induced them to either rely more on intuition or rely more on deliberation. In some of the experiments, we did this by forcing some of the people to decide quickly, and the other people to stop and think about the decision beforehand. In other experiments, we did it by having people remember a time in their life when they followed their intuition and it worked out well, or a time when they carefully reasoned and it worked out well, which induced them to put more trust in their intuitive response or in their deliberative response.
In both cases, we found that, as predicted by the theory, the people that were made to be more intuitive cooperated more, and the people that were made to be more deliberative wound up being more selfish and keeping more of the money themselves.
Since that initial work, there's been a lot of interest in this question. I just had a paper published that did a meta-analysis of many experiments run by different labs, asking the same question in slightly different ways. What I found was consistent with the theory and the initial results: in situations where there are no future consequences, so it's clearly in your self-interest to be selfish, intuition leads to more cooperation than deliberation.
This was fifty-one experiments from twenty different labs with more than 15,000 participants. It was a reasonably big effect size, too: on average, people contributed 17 percent more in the intuitive condition, relative to the more deliberative condition. That suggests that when self-interest clearly favors not cooperating, deliberation makes you less cooperative.
There was also a set of other experiments where there were future consequences, so it could be in your self-interest to cooperate in order to get other people to cooperate with you. There, I found that deliberation doesn't undermine cooperation, because when you stop and think, you realize you should cooperate because it's in your self-interest.
These experiments are done using economic games—in the experimental economics tradition—where, by and large, the games are framed in an abstract way. They try to avoid words like cooperation and defection. Instead, they explain the rules: Each person gets some endowment of money, you choose how much to keep for yourself and how much to put into this common project, and all of the contributions are doubled and split equally among the four people. Then we ask what you want to contribute. It's framed very neutrally to try to avoid priming effects induced by one particular context or another.
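The payoff rule just described (contributions pooled, doubled, and split equally among the four players) works out as follows; the endowment of 10 units and the function name are my own illustrative choices, not details from the studies.

```python
def public_goods_payoffs(contributions, endowment=10, multiplier=2):
    """Four-player public goods game as described: each player keeps whatever
    they don't contribute; the pooled contributions are doubled and split
    equally among the group. (Endowment and multiplier values are illustrative.)"""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# One free-rider among three full contributors earns the most individually,
# but everyone would earn more if all four contributed. That is the tension.
print(public_goods_payoffs([10, 10, 10, 0]))   # [15.0, 15.0, 15.0, 25.0]
print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
```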
We have a couple of papers that looked at what happens if you try to create a competitive frame. In these experiments, we still describe the game in the same way, but we tell people that they're competing against the others in their group, and that the person who earns the most is the winner. There wasn't any monetary prize for winning, just symbolically framing it as a competition. We found that that didn't change things much—intuition still favored cooperation relative to deliberation.
The effects that we find in these studies are effects on average, averaged over these 15,000 people. There is a lot of variation across individuals. There's not that much evidence of people whose default is to be selfish, and then when they stop and think about it, they cooperate. But there certainly is evidence of people whose first impulse is to cooperate, and even when they realize they're in a one-shot anonymous setting, they continue to cooperate, either because they have explicitly held moral values that say cooperating is good or being selfish is bad, or because they anticipate having intuitive emotional responses afterwards that are negative. For example, a lot of people will say, "I thought about it, and I decided that I should still cooperate. If I didn't, I knew I would feel guilty afterwards." I see that sort of thing as the intuitive or emotional system hijacking deliberation and saying, even though you could make more money, I'm going to force you to be cooperative, because I'm going to make you feel bad afterwards about it.
~ ~ ~
Martin Nowak started working on cooperation in the early '90s. There were three major categories, I would say, where Martin did a lot of work. One was the role of networks and spatial structure in cooperation. He showed that when you take into account the fact that people don’t interact at random, even one-shot cooperation can succeed.
A second was on understanding the importance of forgiveness for dealing with mistakes, so that you don't derail productive repeated cooperative relationships when the other person makes a mistake. You should do a fair amount of forgiving. Strategies like generous tit-for-tat and win-stay, lose-shift correct errors and deal with that.
He also did a lot of work on reputation and what he calls indirect reciprocity, understanding how you can get cooperation not just between pairs of people but in groups. If I cooperate with you, that will make another person more likely to cooperate with me. This same logic of reciprocity works in this more distributed setting.
The Internet and the age of social media opens up these vast new possibilities for reputation systems on a much bigger scale than was possible before. I think of Yelp as a great example of this. It's a distributed reputation system. One of the ways to think of Yelp is the tourist trap killer. The idea of a tourist trap is that there are crappy restaurants that can still get a lot of business because it's only ever one-shot games. A tourist goes to a restaurant, they think it's terrible, but they never go there again and they don't know the other people that are going there, so there are no reputational consequences.
Yelp allows us to build a distributed memory, where you can leave your bad impression for everyone else to see. It's a major force for good, for making businesses perform better, by creating this distributed reputation system.
As someone doing game theory and behavioral science, it's embarrassing that my dad is an applied math professor and my mom was a therapist. This was the only possible way to exactly combine their two interests—mathematical and quantitative analysis of human behavior. I grew up in Ithaca. My dad was a professor at Cornell. I went to Cornell undergrad. I started as a computer science major. I did three semesters of computer science and realized that, at least at Cornell, computer science was about doing math and not about writing computer programs, which is what I liked.
I switched to biology, and I did a senior thesis in computational biology, which was a math model of electron transfer in photosynthesis—plant biology. I only ever took one psychology class in undergrad, which was psychology of music, because I played in rock bands and thought it was cool. I did a second senior thesis on computer analysis of musical melodies. Then I worked at a biotech startup for a couple of years, making math models of the electrical properties of heart cells. Then I started my PhD at Harvard in systems biology, which was where I connected with Martin Nowak and got into cooperation.
At that juncture, I was faced with this question of what to study in grad school. I was coming out of the biomedical world, and I was in a program where basically everyone was doing biomedical stuff and health-oriented research—lots of cancer research. At the time that I entered grad school, that same year, my mom was diagnosed with cancer, and I was very much feeling like I was in a place where I could do work that was relevant to these problems. But I just fell so in love with the prisoner's dilemma and cooperation that I was compelled to study that.
For me, it was very much a calling. It was so interesting I couldn't stop thinking about it, so I thought, this is what I'm going to spend my life working on. I fell in love with the topic in a way that made it super easy to spend all my time thinking and working on it because I just loved it. But I carried with me this understanding that I went down the road that I enjoyed more than the road that seemed most useful. A lot of what I've taken from that is the desire to try to also do something useful with the work that I've been doing on cooperation.
So, in addition to doing these abstract mathematical models and only slightly less abstract experimental economics studies in the lab, I've also been trying to work with real-world organizations—companies and government organizations—to apply the lessons that we've learned from the theory and the lab in the real world. To that end, I've started the Applied Cooperation Team with two economists, Erez Yoeli at Harvard and Syon Bhanot at Swarthmore, and my graduate student at Yale Gordon Kraft-Todd. We work with all of these different organizations to try to promote real-world cooperation. It's like prosocial consulting. We don't take money, we just want data and we try to publish papers with what comes out of it—trying to increase charitable donations, increase conservation, decrease environmental impact.
An interesting feature of all of this work around how to promote cooperation—which basically all boils down to creating future consequences for current actions—is that all of these mechanisms can be used to enforce cooperative behavior that benefits others, but can equally well be used to enforce any kind of behavior. If we establish a social norm for something and say I will only help you or cooperate with you if you follow the social norm, that can be a tool that promotes the greater good, if the norms prescribe prosocial behavior. But it can also reinforce all kinds of negative behavior if the norms prescribe the negative behavior. Take reputation—there's nothing inherently prosociality-promoting about reputation. It's just a tool to get people to do whatever you want them to do. All of these tools can cut both ways.
Richard Thaler and Cass Sunstein's nudge idea is connected to all these things that I'm interested in, in a couple of different ways. One part is the applied angle, showing how these insights from behavioral science can be used to influence people. But then you have to ask the question, what are you trying to get people to do? What you are using the nudges for may not be what I would want them to be used for. For any of this applied stuff, this is an issue: We're creating tools and then different people are going to want to use the tools to do different things. I don't really know what to do about that, other than to say I hope that people use the tools the way that I would want them to use them. But that is the nature of creating tools.
There's also an important scientific distinction between the actual nudge stuff, like what Thaler meant by nudge, and most of what I am doing: The nudges are supposed to not fundamentally change the incentive structure. Nudges are gentle. The whole idea of nudging is that you don't change someone's set of possible actions or change the payoffs associated with things, you just present things in a way that makes certain options seem more attractive. A lot of the stuff that we're doing with making behavior observable to other people, creating reputation consequences, things like that, fundamentally changes the strategic nature of the decision. In some sense, it's more heavy-handed than nudges.
There's been a lot of interest on this work that I've been doing, applying this intuition versus deliberation framework to cooperation, because in some sense, the question is ancient: Aristotle had these discussions about being cooperative or selfish by default. What I bring to it is trying to get real empirical evidence to answer this question, rather than either just philosophizing about it or making abstract theoretical models. What are ways we can look at this in the lab?
Leveraging a lot of this system one, system two, dual-process work that Kahneman has been one of the leaders on, there's this whole set of tools now for exploring these questions in the lab. We did this and came up with these results that people found very counterintuitive, because most people have this expectation that we are by default selfish. There's evidence that indicates there are a lot of situations where we are by default cooperative. This has captured a lot of interest and created a lot of controversy because it goes contrary to many people's preconceptions.
When I was at Harvard, in addition to working with Martin Nowak, someone that I spent a lot of time with was Josh Greene, in the psych department. Josh is the one that introduced me to all of these ideas about intuition and deliberation and cognitive processes. Josh has done a lot of work that is related to the things that I care about, and was inspirational. He hasn't thought that much about cooperation, more about these moral dilemmas. My exposure to that is what led me to think about applying this set of tools to understand cooperation, which is the thing that I care the most about.
Someone else who was in Josh's lab at the same time I was, who's doing extremely cool work in this domain, is Fiery Cushman, now happily also at Harvard. Fiery is doing fascinating work on understanding lots of different kinds of prosocial behaviors from a cognitive perspective. He looks at model-based versus model-free reasoning: model-based, where you have a clear model of the world and think through all the details, versus model-free, a general reinforcement-learning framework with no model and no explicit understanding, just "Oh, that feels good, I like to do that."
There are also some people in psychology and sociology who have been talking about ideas like this in an abstract sense for a long time. Toshio Yamagishi is a very influential Japanese sociologist who had these ideas around exchange heuristics: people treat interactions as if they involve back-and-forth and a possibility for exchange, even when they don't.
Also, a lot of the evolutionary psychologists, like John Tooby and Leda Cosmides, have for a long time been making arguments about people's psychology carrying around imprints of earlier behaviors. They think about it more as biological, hardcoded modules in the brain. I think of these things in terms of learning and reinforcement, developing different rules of thumb for different social situations.
Another person who's very relevant for these things is Gerd Gigerenzer, with these ideas of heuristics and rules of thumb being useful, and encoding typically useful behaviors or being advantageous. One way of looking at what I'm doing is taking a lot of those ideas and looking at it in a social context. How do those things work in the context of interpersonal relationships, rather than just individual choice?
In terms of thinking about what your intuitions are, some people think of them as biologically hardcoded instincts that are the result of evolution. Other people think of them as learned heuristics or rules of thumb. In different domains, both of those things are real. In terms of cooperation, it seems to me much more likely that it's learned rules of thumb, because if you're going to evolve an optimal agent, whether cooperating or defecting is best depends on what the other people are doing. There's nothing that makes cooperation payoff-maximizing across the board. Things like reputation can make it so that if everybody else is cooperative, you should also be cooperative. But if no one else is cooperating, you shouldn't cooperate even if there are reputation effects. So it seems like you don't want to hardcode agents to be cooperative, you want to evolve agents that can learn what kind of environment they're in and adjust their strategy accordingly.
~ ~ ~
I'm an associate professor at Yale, in the Psychology department, with appointments in Economics and Management also. Straddling those areas is very much who I am. I think of myself as a behavioral scientist. Fundamentally, I'm interested in understanding behavior. The fact that my training was in biology, which isn't directly related to any of the things that I'm doing now, has been very useful in terms of not being indoctrinated with a particular social science’s way of thinking. Instead, I came to all these questions fairly late in my training, so I was able to ask, what about the way psychologists do things is cool and makes sense? What about the way economists do things is cool and makes sense? Let me try and put those together.
In terms of looking forward, I'm getting more and more interested and invested in these real-world applications. In the next phase of my career, I particularly want to invest in how we do something useful with the things that we're learning, and not just in the lab. We've got lots of lab experiments that present evidence for these theories we're developing, but it's a long way from these stripped-down, abstract lab situations to the mess of the real world.
Who is going to be interested in this, and interested in using these ideas in the real world? Governments. Regardless of their particular political leanings, in general, governments are interested in trying to improve the welfare of their citizens. There are disagreements about how exactly to do that, but that's generally the goal of a lot of countries. These ideas that we have been developing are useful for lots of different levels of government. That's one set of people that I hope will be interested in this.
The other is large companies and organizations that are interested in how they get their workforce to be most efficient. If you think about the way companies work, it's the exact same kind of cooperation problem. In the best of all worlds, you would have each of your employees operating in the way that would be best for the organization as a whole. These same tools for aligning individual incentives and collective incentives are important for organizations maximizing their productivity.
Another direction that I've recently become interested in, and this is driven by my brilliant graduate student Jillian Jordan, is signalling, and the role that signalling plays in a lot of social behaviors.
In addition to wanting to help other people even when there's no personal benefit to doing so, there's a lot of evidence, both from the lab and from Twitter and the like, that people are also motivated to sanction bad behavior: to punish, condemn, and shame people they see as wrongdoers, even when they're not personally affected by the wrongdoing. There's been a lot of interest in understanding why people engage in this type of punishing behavior.
We have been advocating the idea that, at least in part, that kind of behavior can be a form of signalling. In particular, if you punish someone for doing something, or you condemn someone for doing something, that's a way to signal that you yourself wouldn't engage in that behavior. We have laboratory experiments showing that if people punish selfishness, other people assume that they're more trustworthy. But that's only true if observers don't have a better way of assessing trustworthiness, which suggests that the punishment really is serving as a signal rather than some other function.
This can explain a lot of behavior, particularly public shaming and condemnation, where you can get runaway situations in which punishment and condemnation are vastly disproportionate to how bad the action is. If the punishment were aimed at some real social good, it would clearly be too much punishment, but it makes sense if you think that a lot of the punishment and condemnation is just signalling. Then you don't care about the consequences, you care about showing something about yourself.
The kinds of experiments we use to explore this are two-stage experiments. In the first stage, you see someone act selfishly towards a third person. Then you have some money, and you have the choice to either punish that person for being selfish or not. Then you go on to a second stage, where someone else has the choice to trust you or not. By trust, I mean that they have some money, and they choose how much to send to you. Whatever they send to you gets tripled, and then you decide how much to give back. If they think that you're going to be fair and send money back to them, it's worth it for them to transfer the money to you. You can both benefit. But if they think you're selfish, then they won't transfer to you because they would just expect you to keep whatever you get.
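To make the second-stage incentives concrete, here is the arithmetic of the trust game as described, using the tripling rule from above; the endowment and the specific amounts are illustrative assumptions, not figures from the experiments.

```python
def trust_game(sent, returned_fraction, endowment=10):
    """Trust game as described: the truster sends some of their endowment, the
    amount is tripled in transit, and the trustee chooses how much to send back.
    (The endowment of 10 is an illustrative assumption.)"""
    received = sent * 3
    back = received * returned_fraction
    truster = endowment - sent + back
    trustee = received - back
    return truster, trustee

# If the trustee returns half, both sides come out ahead of not trusting at all:
print(trust_game(sent=10, returned_fraction=0.5))  # (15.0, 15.0)
# If the trustee keeps everything, trusting was a pure loss for the truster:
print(trust_game(sent=10, returned_fraction=0.0))  # (0.0, 30)
```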
Our lab, and also several other labs, have found that people trust punishers more than non-punishers. So if I punish someone for being selfish in the first stage, in the second stage this new person is more likely to send money to me, expecting that I will be trustworthy and return the money. We showed that not only is there this expectation, but that it motivates behavior: People are more likely to engage in punishment in the first stage when punishing is a useful means of signalling in the second stage, compared to when it's not. It suggests that a real motive for this type of behavior is trying to signal your trustworthiness.
There's another interesting dimension of signalling that relates to all of my work on intuition and cooperation. The argument that I've made thus far has been that the reason people are intuitively cooperative is that intuition gives you a rule of thumb that's fast and easy to apply, and cooperation is a good baseline. We have a recent paper, also done by Jillian Jordan, that provides experimental evidence that another motivation for behaving in an intuitive way is reputational benefits and signalling. If I see you cooperate without thinking about it, without taking into account the details, then I know that you're not going to be easily swayed by incentives.
If I see you just cooperate, without thinking of whether it's in your self-interest, and then I interact with you later, I can count on you to cooperate, even if it turns out to be costly to do so. Whereas if I see you stop, carefully consider, and then say, "Oh yes, I'll help you," then I know next time you might not help me. The desire to signal that you are a trustworthy partner, and therefore someone good to interact with, can motivate you to cooperate in an uncalculated way, to broadcast your trustworthiness.
This is broadly applicable: when people see others being calculating, they trust them less for exactly this reason. A place where we're seeing a lot of discussion of this right now is in politics. A lot of it is applying this idea from interpersonal relationships. If you ask a friend to do a favor for you, and they say, "How long is it going to take? How much of a pain is it going to be? Okay, fine. I'll do you the favor," that does not get you nearly as many friend points as a friend who just says, "Oh yeah, sure, I'll help you," without asking about it.