Sunday, March 11, 2012
Prisoner's Dilemma: The Challenge of Cooperation
In my last post on ethics, I talked about the idea that selfish genes can encourage altruistic behavior (self-sacrifice for the good of another) among related individuals, because related individuals carry the same genes. This is why many organisms give preferential treatment to their relatives. This preferential treatment can be quite selfless (at the organism level, if not the gene level). Many animals work themselves nearly to death caring for their young. Some even sacrifice themselves to protect others. If you've ever been stung by a bee, and it left its stinger behind, you've been the victim of a suicide attack. The bees in a hive are all quite closely related, which is why bees are willing to die defending their hive.
Humans also tend to be very altruistic toward their kin. How we treat other people is obviously a big question in ethics, so the evolution of altruism is clearly important for understanding people's moral sense. However, the idea of kin selection only explains altruistic behavior among related organisms. It doesn't offer any reason for being nice to unrelated animals. As we might expect, many unrelated animals--even of the same species--are pretty nasty to each other. Large alligators see small, unrelated alligators as just another potential meal. If a band of chimpanzees comes upon a lone male chimpanzee from another group, they will probably try to kill it.
However, cooperation is also pretty common among unrelated animals. Crows, even unrelated ones, will cooperate to mob an owl or hawk, "encouraging" it to hunt somewhere else. Sometimes bluejays and other birds will join in, and they certainly aren't related to the crows (except in a more distant, evolutionary sense). Mobbing an owl clearly carries some risk, since an owl is quite capable of killing an individual crow or bluejay. Why do mobbing birds cooperate like this?
The likely answer is that each bird gains more by cooperating to drive away the owl than it loses in risk to itself. Mobbing owls is a win-win situation for them. Each one gains by joining in, because the more join in, the more likely they are to chase away the owl. This kind of situation can be modeled mathematically using the branch of mathematics called game theory. Game theorists refer to a win-win situation as a non-zero-sum game. We tend to think of games as competitions where one side wins and the other loses. These are called zero-sum games because we can think of a win as +1 and a loss as -1: the sum is zero. A lot of interactions that can be modeled with game theory are non-zero-sum, win-win affairs. Cooperation in these cases is advantageous to both parties, so natural selection favors it. Natural selection isn't always about being nasty to others, because oftentimes being nasty isn't adaptive.
Occasionally, though, unrelated organisms have interactions where one voluntarily sacrifices to help another. A commonly-cited example of such benevolence occurs in, oddly enough, the vampire bat. Vampire bats fly out every night in search of food. When they find a large animal, they creep up on it, bite a sliver of skin away, and lap up its blood. Oftentimes the victim doesn't notice, because the bat has an anesthetic in its saliva. But many of them do notice, and shake the bat off. The bat may return to its cave hungry. Because its metabolism is so high, it can starve to death after just a couple of hungry nights. So, it will beg another bat in the cave to regurgitate some of its blood, to get it through the night. Generally, they can find a bat willing to do so.
This is different from a clearly win-win interaction, because the donor bat gets nothing in return. If it is unrelated, why does it do it? One possible answer is that it can expect the other bat to donate on another night, when it comes home hungry. I'll barf up blood for you, if you'll barf up blood for me. Evolutionary biologists call this reciprocal altruism, although this is really a misnomer, since the donor bat is only sacrificing in the short term. I'll use the term delayed reciprocity, to avoid the suggestion of true altruism. It's a win-win situation (ideally), but the win for one party is delayed.
And that's why delayed reciprocity is rare in nature. It's vulnerable to cheaters who don't return favors. It's true that a bat who receives blood from another, and then gives blood on another occasion, is better off than it would be if it didn't cooperate. But it would be even better off if it made a habit of accepting blood, but never giving any back. In a large population of reciprocating animals, it's only a matter of time until a mutant appears that will exploit its more trusting brethren. It will be better nourished than the others, and better able to survive and reproduce. The "cheater" genes will spread through the population, until it becomes a population of cheaters and reciprocity vanishes. Sure, the group as a whole will do worse, but as I explained in another post, evolution (probably) doesn't happen at the group level.
The only way delayed reciprocity can be stable in a population is for its members to learn to avoid cheaters. This generally means they are smart enough to recognize individuals, and to remember whether that individual has cooperated with them in the past. This means delayed reciprocity is only feasible among relatively smart animals, living in stable social groups small enough that most members are recognizable. It also means that reciprocators can't be too forgiving. They have to be willing to cooperate with animals that cooperate with them, but not with animals that don't. They may even punish cheaters. Let's say a monkey asks another monkey, which it has groomed in the past, to return the favor. If the other monkey refuses, the first monkey is likely to scream and even attack the cheater. Monkeys, and many other intelligent animals, have a sense of fair play, and get very indignant when others don't play nice. In a population of monkeys that mostly groom each other when asked, but refuse to groom those that have refused to groom them, delayed reciprocity can be stable.
Economists and evolutionary biologists have used game theory to model this kind of reciprocity. This stuff can get pretty abstract, but it has some extremely important implications. In game theoretic terms, a delayed reciprocity situation resembles what's known as a prisoner's dilemma. This is a situation where it pays to cooperate, but there is a temptation not to. It gets its name because it is commonly formulated as a situation faced by two prisoners accused of committing a crime together. They are put in separate cells, and each one is given the following options. "If you confess, and implicate your partner, you'll go free and he will get five years in prison. If he confesses, and implicates you, he'll go free and you'll get five years. If you both confess, you'll both get two years. If neither of you confess, we'll hold you for a few months, but we'll have to release you for lack of evidence."
Each prisoner--let's call them Shorty and Biggy--sits in his cell and ponders his situation. Shorty thinks, "If we cooperate, and we both refuse to talk, we'll both get out after a few months. But if Biggy talks, and I don't, I'll get five years, and he'll walk. If I talk, and he doesn't, I'll walk and he'll get five years. I don't want to stay in here six months, much less five years. I'm gonna talk". Of course, Biggy will go through the same thought processes, and he will probably decide to squeal, too. So they'll both talk, and both get two years. Both of them would have been better off if they had cooperated, but they were both too tempted to defect, so they both suffer for their lack of cooperation.
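Shorty's reasoning can even be checked with a few lines of code. This is just a sketch of the payoffs described above, with "a few months" arbitrarily taken as half a year:

```python
# Prison time (in years) as a function of (my choice, partner's choice).
# Lower is better. "A few months" is treated as 0.5 years here.
TIME = {
    ("talk",   "talk"):   2.0,  # both confess: two years each
    ("talk",   "silent"): 0.0,  # I confess, he doesn't: I walk
    ("silent", "talk"):   5.0,  # he confesses, I don't: five years
    ("silent", "silent"): 0.5,  # neither confesses: held a few months
}

# Whatever the partner does, talking always means less prison time...
for partner in ("talk", "silent"):
    assert TIME[("talk", partner)] < TIME[("silent", partner)]

# ...so each prisoner talks, and both serve two years, even though mutual
# silence (0.5 years each) would have left both of them better off.
print("Both talk:", TIME[("talk", "talk")], "years each")
```

Talking is what game theorists call a dominant strategy: it beats silence no matter what the other prisoner does, which is exactly why both prisoners end up worse off than if neither could talk at all.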
Of course, in real life Shorty and Biggy would also be considering the fact that they will probably meet again. Shorty thinks, "I'd love to rat out Biggy and walk, but in five years, he'll get out and come after me". And Biggy probably will come after him, because he's a product of evolution, wired to punish cheaters. Most of us are, at least to some extent. So, they are more likely to cooperate if they think they will meet again.
What happens in real life is that prisoner's dilemma type situations are played over and over again. This is known as an iterated prisoner's dilemma. In the early 1980s, a political scientist named Robert Axelrod decided to run computer simulations of an iterated prisoner's dilemma involving multiple players. He turned it into a contest, inviting people to send in programs that used a particular strategy. When two programs met, each would choose to cooperate or defect, and was awarded a certain number of points depending on the outcome. The biggest payoff for each one happened when it defected and the other program didn't. But it was better for both to cooperate than for both to defect.
People sent in programs with all kinds of different strategies. Some were simple, some complex. There were "nice" programs that always cooperated, and "nasty" ones that never did. The simulation was run, and each program played against the others several times. At the end, the scores were tallied. The most successful one was called "Tit for Tat". This program's strategy was to cooperate initially, and then, whenever it met another program for a second time, do whatever that program had done the last time. If it had cooperated, Tit for Tat would cooperate. If it had defected, Tit for Tat would defect. So, Tit for Tat was "nice" by default, but not a pushover. It wouldn't cooperate with a program that had cheated it in the past. But it was forgiving. It would start cooperating again with a cheater as soon as that cheater decided to cooperate.
Over many tournaments, Tit for Tat kept on beating other programs. The only time it didn't win was when most of the programs submitted were "nasty" ones, that rarely cooperated. In this environment, Tit for Tat proved to be too "nice", and could never get a foothold. The average score for all the programs was lower when the population was dominated by "nasty" programs than when it was dominated by "nice" ones. At the individual level, it turns out that it pays to be nice, but only if you're not willing to be a sucker, and only if you're not in a completely nasty environment. At the group level, the group average is always better if the population is composed of cooperators. While an individual may do best in a nasty environment by being nasty, he would be better off as a cooperator in a nice environment.
When evolutionary biologists use game theory to simulate organisms in a population, the ones who get the most points become more common in the population. Whole populations can oscillate back and forth, as one strategy, and then another, gets more common. In most populations, organisms with a Tit for Tat strategy come to dominate. Cooperation prevails, and cheaters can't get a foothold. However, if the population is too uncooperative in the first place, then cheaters do better, and soon the population is nothing but cheaters. Now everybody does poorly, but cooperation can't get a foothold, so that's the way it stays. Another possibility is that in a population of Tit for Tat strategists, even "nicer" strategies can appear in the population. Let's call these Saints. Saints always cooperate, and never retaliate against cheaters. They do fine in a Tit for Tat environment. But when the first mutant cheater appears, they are robbed blind. The cheaters do well in this environment, because of the population of Saints that don't retaliate. In this way, the cheaters can get a foothold. The population swings back toward cheating, and everybody suffers.
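These population swings can be illustrated with a toy replicator model. Assume just two strategies, Tit for Tat and always-defect, meeting in short 5-round matches with the conventional 3/0/5/1 per-round payoffs (the 5-round match length is my illustrative choice, not from any particular study). Whether cooperation takes over depends entirely on how common cooperators are at the start:

```python
# A toy replicator model of the population swings described above.
# Match totals over a 5-round game with the conventional payoffs:
#   TFT vs TFT: 15 each;  TFT vs AD: 4 (TFT) vs 9 (AD);  AD vs AD: 5 each.
SCORE = {("TFT", "TFT"): 15, ("TFT", "AD"): 4,
         ("AD",  "TFT"): 9,  ("AD",  "AD"): 5}

def evolve(x_tft, generations=100):
    """Each generation, a strategy grows in proportion to its average
    score against a randomly encountered member of the population."""
    for _ in range(generations):
        f_tft = SCORE[("TFT", "TFT")] * x_tft + SCORE[("TFT", "AD")] * (1 - x_tft)
        f_ad  = SCORE[("AD", "TFT")] * x_tft + SCORE[("AD", "AD")] * (1 - x_tft)
        mean = f_tft * x_tft + f_ad * (1 - x_tft)
        x_tft = x_tft * f_tft / mean              # replicator update
    return x_tft

print(evolve(0.30))   # enough reciprocators to start: cooperation fixates (~1.0)
print(evolve(0.10))   # too few: cheaters take over, cooperation vanishes (~0.0)
```

With these numbers the tipping point sits at a Tit for Tat share of exactly 1/7: above it, reciprocators meet each other often enough that cooperation sweeps the population; below it, cheating does, and the population gets stuck there, just as described above.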
So, what does all this have to do with human ethics and behavior? A lot, in my opinion. For one thing, it seems very likely to me that humans have a natural moral sense constructed by evolution. Just as our urge to care for certain others may have its roots in kin selection, perhaps our tendency to cooperate comes from the fact that we are social animals, who can often do better by cooperating than by fighting or going it alone. Like many animals, we cooperate with non-relatives to take advantage of win-win situations where everybody gets an immediate payoff. There's nothing particularly altruistic about this. Both parties cooperate because it is to their advantage. Adam Smith, the founder of modern economics, made this point beautifully: "It is not from the benevolence of the butcher, the brewer or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages." Trade is cooperation based on self-interest, not altruism.
We humans also seem to be unusually good at delayed reciprocity, where we cooperate with others in the expectation of getting rewards later. Among other animals, clear evidence of delayed reciprocity (AKA reciprocal altruism) is pretty scarce. Even among vampire bats, it's been argued that the bats that feed each other are likely to be related, which would mean they're practicing kin altruism, not reciprocal altruism.
People are able to reap the benefits of delayed reciprocity, because we are smart enough to recognize a large number of people, and remember how they have treated us in the past. This may be why we form friendships with unrelated people, and why we get so indignant when we think someone isn't treating us fairly. In fact, some biologists think one reason we became so smart is that big brains helped us to reap the benefits of cooperation, while remembering those that cheated us. This may even help explain language. Humans spend a lot of time talking about each other. We're tireless gossipers. While gossip is often frowned upon, gossip may have allowed us to expand the circle of people that we can safely cooperate with. Maybe we learned to talk so we could gossip more effectively. An animal deciding whether to cooperate with another has to just try it, and see what happens. People, on the other hand, can ask around. "I'm thinking about taking Zog mammoth hunting. Does he pull his weight on a hunt?" With our big brains, and our ability to talk about the reputation of others, we were able to combine into much larger, more cooperative groups than most other animals.
Of course, I don't mean to suggest that we should only make friends and cooperate with unrelated people when we can benefit from it. I do think our cooperative instincts evolved because they were beneficial, but there's no reason we can't go beyond the dictates of biology. Besides, most of us don't go around calculating the benefits we'll receive from making friends and being nice. We just feel drawn to people we like and think we can trust.
Besides, cooperation and delayed reciprocity can't explain everything about human behavior, because people can be altruistic toward complete strangers they will never meet again. A common example is that most people tip servers at restaurants, even if they are traveling, and will probably never meet that person again. Some people are extremely altruistic toward people they've never met, for example, by sending aid money to people in other countries. Of course, a lot of people aren't very nice to anyone. Still, it's clear that humans have found a way to expand the scale of cooperation far beyond what most animals are capable of. We are able to do so partly because of our biology, which gave us our big brains and our ability to talk. But it is also cultural. Over time, people have learned to live in larger and larger groups, and to engage in more and more complex types of cooperation. We have developed cultural institutions like religions, governments, and laws that encourage us to cooperate, and to treat at least some non-relatives decently.
Just how this cultural expansion of cooperation might have happened is a huge topic, and I'm certainly not going to tackle it now. For now, I want to conclude by considering the lessons we can learn from biological ideas about win-win cooperation and delayed reciprocity. As I look back over this post, I realize that my conclusions are more pragmatic than ethical. I do think that cooperation between non-kin can tell us about human ethics, because it can tell us where our sense of friendship and fairness came from. However, it really doesn't tell us that much about altruism. At least in nature, true altruism (in the sense of self-sacrifice without expectation of future gain) is pretty rare. But cooperation is common, and can be beneficial, even if it isn't altruistic. So, even though I admire altruism, my conclusions here will have more to do with applying lessons from game theory to encouraging mutually-beneficial cooperation, a much easier task than encouraging selfless altruism.
One thing we learn from the game theory models of reciprocity is that cheating is a big problem. Any time there is a situation where people can take advantage of others, a few of them will. Conservatives reading this may be nodding and saying, "See, we have to make sure people on welfare don't cheat the system." That's true, we do, but we also have to make sure corporations don't cheat. There are incentives to take advantage of others at all levels. Liberals tend to have an overly rosy view of individual human nature, but conservatives tend to have an overly rosy view of "corporate nature". When I hear people saying that oil companies will "police themselves", I think, "And they say liberals are the starry-eyed ones."
In any competitive system, some of the players will always be tempted to play dirty. That's why, at all levels, we have to make sure that there are rules to keep competition from getting too nasty. That's why we have laws. One of the most interesting results of computer simulations of the prisoner's dilemma is that there is no universally successful strategy. What is a good strategy for dealing with others depends on the environment, that is, on the strategies others are using. In a big city, where most people are strangers to each other, people tend to be less trusting than in a small town, where most people know each other. This makes perfect sense. You have to be more careful when you're surrounded by strangers, even though many of them are perfectly trustworthy. In the same way, if you grow up on the streets in a rough neighborhood, you may need to cultivate a reputation for toughness that you don't need if you grow up in the middle-class suburbs. The tough guy approach may be a problem if you try to move into the middle class, but it may have been a necessity back on your block.
It seems to me that if people perceive that they are in a fair, safe environment, they will be more likely to be cooperative. If people think they are in a dangerous, unfair environment, they will decide they have to play dirty to survive. People are smart, and we adapt our behavior to suit our environment. If we decide to have a dog-eat-dog world, we shouldn't complain if we get bitten. It seems to me that a major challenge for society is to establish ground rules that discourage cheating, and encourage fair play.
I'm not advocating for pure top-down government control of everything here. Some cooperative ventures can evolve on their own. This includes free-market trade networks. Lots of people think that countries have gone to war with each other less in the last fifty years because they trade with each other more. As the psychologist Steven Pinker puts it: "The spread of trade and commerce has brought violence down, when it becomes cheaper to buy something than to steal it and more and more of the world becomes more valuable alive than dead." Of course, competition is an essential part of market economies. Companies that don't have to compete start making shoddy, overpriced goods. The trick, with corporations as well as people, is to have ground rules to keep the competition from getting too nasty. In basketball, it's OK to block your opponent's shot. It's not OK to gouge him in the eye. Competition can bring out amazing things in people and organizations, but it can also bring out a lot of nastiness, because it leads to incentives to play dirty. We have to have competition within the framework of cooperation, where we agree to have rules to ensure that we play fairly, if not perfectly nicely. We live in a competitive world, but we can choose what kind of competitive environment we want to have. If we encourage everyone to play fair, everyone benefits. If we let cheaters get too common, to the point it no longer pays to cooperate, everybody suffers. The prisoner's dilemma turns into a losing game, even though it didn't have to be.
I'm not the sort to see much purpose in nature. However, I can't help seeing the prisoner's dilemma as a challenge nature has handed us, even if only by chance. It's as though nature is saying "You're a pretty smart species. Are you smart enough to figure out the prisoner's dilemma, and how to win it?" Well, are we?
Cooperation Between Non-Kin in Animal Societies / Tim Clutton-Brock
The Selfish Gene / Richard Dawkins
Nonzero: The Logic of Human Destiny / Robert Wright
Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb / William Poundstone