Wednesday, 7 June 2017

Trump, Paris, and the repeated prisoners' dilemma

I just finished reading this 2008 paper by Garrett Jones (George Mason University), entitled "Are smarter groups more cooperative? Evidence from prisoner’s dilemma experiments, 1959–2003", and published in the Journal of Economic Behavior & Organization (ungated earlier version here). In the paper, Jones collates data from 36 studies of the repeated prisoners' dilemma that were undertaken among U.S. college students between 1959 and 2003.

The classic prisoners' dilemma game goes like this (with lots of variants; this is the version I use in ECON100):
Bonnie and Clyde are two criminals who have been captured by police. The police have enough evidence to convict both Bonnie and Clyde of the minor offence of carrying an unregistered gun. This would result in a sentence of one year in jail for each of them.
However, the police suspect Bonnie and Clyde of committing a bank robbery (but they have no evidence). The police question Bonnie and Clyde separately and offer them a deal: if they confess to the bank robbery they will get immunity (and be set free) but the other criminal would get a sentence of 20 years. However, if both criminals confess they would both receive a sentence of 8 years (since their testimonies would not be needed).
The outcome of the game is that both criminals have a dominant strategy to confess. Confessing results in a payoff that is always better than the alternative, no matter what the other criminal decides to do. The Nash equilibrium is that both criminals will confess. However, that assumes that the game is not repeated.
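The dominant-strategy logic can be checked mechanically. This is a minimal sketch (my own encoding, not from the paper) that represents the jail sentences above as negative payoffs and confirms that confessing is each player's best response to either action by the other:

```python
# Payoffs from the ECON100 version of the game, as negative utilities
# (years in jail). Each entry: (Bonnie's payoff, Clyde's payoff).
payoffs = {
    ("silent", "silent"):   (-1, -1),    # both convicted of the minor gun charge
    ("silent", "confess"):  (-20, 0),    # Bonnie silent, Clyde goes free
    ("confess", "silent"):  (0, -20),    # Bonnie goes free, Clyde silent
    ("confess", "confess"): (-8, -8),    # both confess
}

def best_response(player, other_action):
    """Return the action that maximises `player`'s payoff, given the other's action."""
    actions = ["silent", "confess"]
    if player == "bonnie":
        return max(actions, key=lambda a: payoffs[(a, other_action)][0])
    return max(actions, key=lambda a: payoffs[(other_action, a)][1])

# Confessing is a best response to BOTH of the other criminal's possible
# actions, so it is a dominant strategy for each of them.
for other in ["silent", "confess"]:
    assert best_response("bonnie", other) == "confess"
    assert best_response("clyde", other) == "confess"
```

Note that (confess, confess) gives each criminal 8 years, even though (silent, silent) would give each only 1 year, which is what makes the equilibrium unsatisfactory.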

In a repeated prisoners' dilemma game, where the game is played not once but many times with the same players, we may be able to move away from the unsatisfactory Nash equilibrium, towards the preferable outcome, through cooperation. Both criminals might come to an agreement that they will both remain silent. However, this cooperative outcome requires a level of trust between the two criminals.
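One standard way to sustain that trust is a trigger strategy: cooperate until the other player defects, then punish. The sketch below (my illustration, using the same hypothetical jail-sentence payoffs as above) simulates a ten-round repeated game and shows that two "grim trigger" players end up far better off than two players who confess every round:

```python
# Repeated prisoners' dilemma sketch, reusing the one-shot payoffs
# (negative years in jail) from the game described above.
payoffs = {
    ("silent", "silent"):   (-1, -1),
    ("silent", "confess"):  (-20, 0),
    ("confess", "silent"):  (0, -20),
    ("confess", "confess"): (-8, -8),
}

def grim_trigger(other_history):
    # Stay silent until the other player has ever confessed; then confess forever.
    return "confess" if "confess" in other_history else "silent"

def always_confess(other_history):
    # The one-shot Nash strategy, played every round.
    return "confess"

def play(strategy1, strategy2, rounds=10):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = strategy1(h2), strategy2(h1)
        p1, p2 = payoffs[(a1, a2)]
        total1, total2 = total1 + p1, total2 + p2
        h1.append(a1)
        h2.append(a2)
    return total1, total2

print(play(grim_trigger, grim_trigger))      # (-10, -10): cooperation sustained
print(play(always_confess, always_confess))  # (-80, -80): Nash outcome every round
```

Over ten rounds, mutual cooperation costs each player 10 years in total versus 80 for mutual confession, which is why repetition (plus trust) can move the players away from the one-shot equilibrium.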

Jones's data record the proportion of the time that the repeated prisoners' dilemma (RPD) resulted in cooperation in each of the 36 studies. He then shows a large positive correlation between the average SAT score at the college where each study was undertaken (a measure of average intelligence) and the proportion of students choosing to cooperate in the RPD game. This is the key figure (2006 average SAT score is on the x-axis, and the proportion of students cooperating is on the y-axis): [*]

The regression results support this, and the effects are relatively large:
Using our weakest results, those from the 2006 SAT regression, one sees that moving from “typical” American universities in the database such as Kent State and San Diego State (with SAT scores around 1000) to elite schools like Pomona College and MIT (with scores around 1450) implies a rise in cooperation from around 30% to around 51%, a 21% increase. Thus, substantially more cooperation is likely to occur in RPD games played at the best schools. It indeed appears that smarter groups are better at cooperating in the RPD environment.
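The quoted effect size implies a roughly linear back-of-the-envelope slope (my own arithmetic, assuming linearity, not the paper's actual regression specification):

```python
# Implied slope from the quoted comparison: SAT ~1000 schools cooperate ~30%
# of the time, SAT ~1450 schools ~51%.
sat_low, sat_high = 1000, 1450
coop_low, coop_high = 0.30, 0.51

slope = (coop_high - coop_low) / (sat_high - sat_low)
print(f"~{slope * 100:.3f} percentage points of cooperation per SAT point")
print(f"~{slope * 100 * 100:.1f} percentage points per 100 SAT points")
```

That is, roughly 4.7 percentage points more cooperation for every 100 points of average SAT score, under the linearity assumption.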
So, the results are clear. Smarter groups are more likely to cooperate in a repeated prisoners' dilemma game. This brings me to the Paris climate change agreement, which, as I noted in this earlier post, is an example of a repeated prisoners' dilemma. If more intelligent groups are more likely to cooperate in this situation, what does exiting the Paris agreement (i.e. not cooperating) imply about the Trump administration's collective level of intelligence?


[*] The results are not sensitive to the choice of which year's average SAT scores to use, and similar figures are shown in the paper for 1966 and 1970 SAT scores, and 2003 ACT scores.
