
The game is kind of weird:

Each player of the pair begins with a set amount of money, say $5. Each puts any part or all of that $5 into a mutual pot, without knowing how much the other player is investing. Then a dollar is added to the pot, and the sum is split evenly between the two. So if both put in $5, they each wind up with $5.50 ($5 + $5 + $1, divided by 2). But suppose the first player puts in $5 and the second holds back, putting in only $4? The first player gets $5 at the end ($5 + $4 + $1, divided by 2), while the cheater gets $6 ($5 + $4 + $1, divided by 2 -- plus that $1 that was held back).

It seems to me that there isn't actually anything to be gained from cooperation. If both players "cheat" completely (put $0 into the pot), they still get $5.50: each keeps their $5 and they split the added $1. In that sense, the Nash equilibrium (both cheating) is also socially optimal. Kind of atypical for something meant to demonstrate the advantages of cooperation.
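
A quick sketch of those payoffs, in Python (the function and names are mine; the only assumption is that money held back is simply kept):

    def payoff(mine, theirs, endowment=5, bonus=1):
        # You keep whatever you didn't contribute, plus half the pot
        # (both contributions plus the added dollar).
        pot = mine + theirs + bonus
        return (endowment - mine) + pot / 2

    print(payoff(5, 5))  # both cooperate fully  -> 5.5
    print(payoff(5, 4))  # cooperator vs cheater -> 5.0
    print(payoff(4, 5))  # the cheater's payoff  -> 6.0
    print(payoff(0, 0))  # both "cheat" fully    -> 5.5

Algebraically this is endowment + bonus/2 + (theirs - mine)/2, so every dollar you contribute costs you 50 cents whatever the other player does, which is why total defection and total cooperation both land on $5.50.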



It is a weird game, but experimental situations are usually contrived. They define cooperation as the absence of cheating. But the surprise is that people jump at the chance to fine the cheater, even though they have to pay the same amount as the fine:

You can fine the cheater by taking away some money, as long as you're willing to give up the same amount yourself. In other words, you can punish a cheater if you're willing to pay for the opportunity.


Another famous experiment that supports the finding that people are hypersensitive to cheating:

http://en.wikipedia.org/wiki/Wason_selection_task#Policing_s...

This experimental evidence supports the hypothesis that a Wason task proves to be easier if the rule to be tested is one of social exchange (in order to receive benefit X you need to fulfill condition Y) and the subject is asked to police the rule, but is more difficult otherwise. Such a distinction, if empirically borne out, would support the contention of evolutionary psychologists that certain features of human psychology may be mechanisms that have evolved, through natural selection, to solve specific problems of social interaction, rather than expressions of general intelligence. In this case, the module is described as a specialized cheater-detection module.


I noticed this too. The example would make more sense if the rules were changed so that the pot is multiplied, say by 2. Then the optimal case is for both to put in $5 and each to end up with $10.


It has to be multiplied by less than two (though more than one) - if it's multiplied by two or more, putting money into the pot is always worthwhile (or at least break-even) for you personally, no matter how much the other player puts in; if it's multiplied by one or less, contributions don't enrich the group at all.
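
A quick check of that threshold, keeping the same hypothetical two-player payoff shape as above (the framing in code is mine):

    # Marginal effect of one extra dollar contributed when the pot is
    # multiplied by m and split between the two players.
    for m in (0.5, 1.0, 1.5, 2.0, 3.0):
        private = m / 2 - 1  # your net change: give up $1, get m/2 back
        group = m - 1        # the pair's combined net change
        print(f"m={m}: private {private:+.2f}, group {group:+.2f}")

Only for 1 < m < 2 is there a genuine dilemma: contributing enriches the pair but costs the contributor.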


I thought the same, so I looked for the original article on Google Scholar. From Fehr and Gächter, "Altruistic Punishment in Humans":

... groups with four members played the following public goods game. Each member received an endowment of 20 money units (MUs) and each one could contribute between 0 and 20 MUs to a group project. Subjects could keep the money that they did not contribute to the project. For every MU invested in the project, each of the four group members, that is, also those who invested little or nothing, earned 0.4 MUs. Thus, the investor's return from investing one additional MU in the project was 0.4 MUs, whereas the group return was 1.6 MUs.

Your return on an investment of 1 MU is 0.4 MU, so it's always better for you personally to keep a unit. But the group return is 1.6 MUs, so the group as a whole is enriched by every unit invested.
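
A minimal sketch of those payoffs (the function is mine, but it follows the quoted rules):

    def pg_payoff(contributions, me, endowment=20, rate=0.4):
        # Keep what you didn't contribute; earn 0.4 MU for every MU
        # invested in the project by anyone, yourself included.
        return (endowment - contributions[me]) + rate * sum(contributions)

    full = [20, 20, 20, 20]     # everyone contributes everything
    rider = [0, 20, 20, 20]     # player 0 free-rides
    print(pg_payoff(full, 0))   # 32.0
    print(pg_payoff(rider, 0))  # 44.0, the free rider
    print(pg_payoff(rider, 1))  # 24.0, a contributor

Unlike the two-player version in the article, universal cooperation (32 MUs each) genuinely beats universal defection (20 MUs each) here, so there really is something to be gained from cooperating.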


I have not read the original paper, but it seems plausible that the experimenters tried different game rules and this is the version that gives the reported result most strongly. On that assumption, one way to look at the experiment is as a test of the limits of rational behaviour. The 'vengeful chump' might be interpreted as externalising their embarrassment at having failed to identify the optimum strategy: unconsciously they know they should have cheated, but it is more gratifying to blame someone else. I'd like to read the entire paper if anyone can point to where it can be found (for free, natch).


Furthermore, since "punishing" a cheater involves paying $X to force the cheater to pay $X, the cheater will still come out ahead of the punisher no matter how many times he cheats. This game is... interesting.
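
A worked check, using the article's two-player rules (the function is my sketch):

    def payoff(mine, theirs):
        # $5 endowment, $1 added to the pot, pot split evenly.
        return (5 - mine) + (mine + theirs + 1) / 2

    for fine in (0, 3):
        punisher = payoff(5, 0) - fine  # contributed fully, pays to punish
        cheater = payoff(0, 5) - fine   # contributed nothing, gets fined
        print(fine, punisher, cheater)  # fine=0: 3.0 8.0; fine=3: 0.0 5.0

Since both sides lose the same $X, the $5 gap never closes; punishment changes absolute payoffs, not relative standing.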


That's the point of the game, isn't it? To determine whether people will irrationally pay to punish a cheater for the sake of revenge? At least that's what I got from the article.


Behavioral economics: exploring progressively less efficient ways to transfer beer money from grant committees to undergrads since 1970.



