Shall I put my neck on the line or should you? And why should I chip in if my neighbours aren’t?
In 2013 I wrote an article summarising some of the research I’d been reading in preparation for a PhD interview. Last week I had an interview for a post-doc position, so I thought I’d write about something related. The post-doc would involve using approaches from evolutionary biology and epidemiology to understand some of the underlying processes that shape patterns in human culture and cooperation. Understanding how and why humans (and non-humans) cooperate is a fascinating area that has kept researchers in anthropology, economics, sociology and biology busy for decades, so rather than tackle all of it I’m going to focus on one smaller piece: the collective action problem.
Collective action problems arise in scenarios that group-living animals (including humans) sometimes face, where the group receives a large benefit if its members work together to solve a problem, but each individual pays a cost for the effort they put into helping. This covers things like cooperative hunting or territory defence, and maybe even the production of extracellular enzymes in bacterial colonies! Collective action problems also crop up in lots of uniquely human situations, such as employees working together to put pressure on their employers for better working conditions, or individuals paying (or evading) taxes to fund public infrastructure and services.
The ‘problem’ part comes in because, while the benefits are reaped at the group level, the costs are incurred by individuals. This makes it possible for social cheats to “free-ride”: they avoid paying the cost of helping but still receive the same benefits as everybody else. Just because you avoid paying any taxes doesn’t mean you can’t use public transportation infrastructure. From an evolutionary perspective, you increase your fitness by not paying the cost of participating while still receiving the rewards. Interestingly, both theoretical and experimental studies show that this logic leads to a collapse of cooperation towards a stable, selfish strategy in which everyone’s evolutionary fitness is lower than if they had all cooperated.
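To make the numbers concrete, here’s a tiny public goods game payoff in C#. The endowment, multiplier and group size are just illustrative values I’ve picked for this sketch, not figures from any particular experiment:

```csharp
using System;

// A toy public goods game: each player keeps their endowment minus whatever they
// contribute, plus an equal share of the (grown) communal pot.
double Payoff(double myContribution, double othersTotal, int groupSize)
{
    const double endowment = 10.0;   // resources each player starts with (illustrative)
    const double multiplier = 1.6;   // the pot grows before being shared out (illustrative)
    double pot = (myContribution + othersTotal) * multiplier;
    return endowment - myContribution + pot / groupSize;
}

Console.WriteLine(Payoff(10, 30, 4)); // everyone cooperates: 16 each
Console.WriteLine(Payoff( 0, 30, 4)); // lone defector among three cooperators: 22
Console.WriteLine(Payoff(10, 20, 4)); // the cooperators that defector exploits: 12
Console.WriteLine(Payoff( 0,  0, 4)); // everyone defects: 10 each
```

Whatever the others do, contributing nothing always pays better for the individual, yet universal defection leaves everyone worse off than universal cooperation. That, in miniature, is the collective action problem.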

This weekend I wrote a simple visualisation to demonstrate this effect (what do you do on an overcast Sunday?). In the simulation there is a population of squares (individuals) on a grid. At each time step, every individual receives a small amount of resources, a proportion of which they can redirect towards a community project and the rest of which they keep. Resources put into the community project earn interest (and therefore increase in value) before being shared equally among all neighbouring squares. This setup creates a collective action problem for each local neighbourhood. The optimal outcome is for everyone to contribute all of their resources to community projects, but that creates a scenario ripe for freeloaders, and the whole system collapses! See the video below: the bluer a square, the more it contributes towards community projects. Ignore the coloured flags; they indicate different groups, but there’s only one group in this video.
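The real thing is a Unity project, but the core of a single time step looks roughly like the plain C# sketch below. The parameter values and the 3x3 neighbourhood are my own placeholders, and I’ve left out the part where each square’s cooperation level itself changes over time (which is what lets freeloading spread):

```csharp
using System.Collections.Generic;

const int size = 20;            // the grid is size x size squares (illustrative)
const double income = 1.0;      // resources each square receives per time step (illustrative)
const double interest = 0.5;    // community contributions grow by 50% before sharing (illustrative)

double[,] coopLevel = new double[size, size]; // proportion of resources each square contributes
double[,] resources = new double[size, size]; // what each square has kept so far

void Step()
{
    var payout = new double[size, size];

    for (int x = 0; x < size; x++)
    for (int y = 0; y < size; y++)
    {
        resources[x, y] += income;

        // Redirect a proportion of this square's resources to the community project.
        double contribution = resources[x, y] * coopLevel[x, y];
        resources[x, y] -= contribution;

        // The contribution earns interest, then is shared equally with the neighbourhood.
        var cells = Neighbours(x, y);
        double share = contribution * (1 + interest) / cells.Count;
        foreach (var (nx, ny) in cells) payout[nx, ny] += share;
    }

    for (int x = 0; x < size; x++)
    for (int y = 0; y < size; y++)
        resources[x, y] += payout[x, y];
}

List<(int, int)> Neighbours(int x, int y)
{
    var cells = new List<(int, int)>();
    for (int dx = -1; dx <= 1; dx++)
    for (int dy = -1; dy <= 1; dy++)
    {
        int nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < size && ny >= 0 && ny < size) cells.Add((nx, ny));
    }
    return cells;
}

for (int t = 0; t < 200; t++) Step();   // run a couple of hundred steps
```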
Overcoming social cheats
So how do we overcome the problem of free-riders? Why do we (generally speaking) pay our taxes?
If you don’t pay your taxes, you’re likely to receive a fine or a prison sentence. This sort of punishment could be a key ingredient in explaining why humans cooperate with large numbers of unrelated individuals in collective action problems. Experimental games with volunteers find that giving players the ability to spend resources to deprive others of resources, thereby punishing them, is effective at encouraging cooperation. Humans may even have specifically evolved to be highly attuned to detecting and reacting to social cheats.
When the simulation is run again, this time with individuals sacrificing some of their own resources to punish neighbours who contribute too little to the community, we can see that cooperation increases over time. If you keep increasing the threshold for punishment then greater and greater levels of investment in the community are achieved. See the video below.
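Peer punishment can be bolted onto the earlier sketch as an extra pass after the sharing step: each square pays a small cost to fine any neighbour whose cooperation level falls below a threshold. Again, the threshold, cost and fine below are placeholder values, not the parameters used in the videos:

```csharp
const double punishThreshold = 0.5; // punish neighbours contributing less than this (illustrative)
const double punishCost = 0.2;      // what administering a punishment costs (illustrative)
const double fine = 0.6;            // what the punished square loses (illustrative)

void Punish()
{
    for (int x = 0; x < size; x++)
    for (int y = 0; y < size; y++)
    {
        foreach (var (nx, ny) in Neighbours(x, y))
        {
            if ((nx, ny) == (x, y)) continue;   // don't punish yourself
            if (coopLevel[nx, ny] < punishThreshold && resources[x, y] >= punishCost)
            {
                resources[x, y]  -= punishCost; // punishment is costly to dish out...
                resources[nx, ny] -= fine;      // ...and costlier to receive
            }
        }
    }
}
```

Raising punishThreshold corresponds to demanding higher contributions before a neighbour escapes punishment, which is what drives the greater levels of investment seen in the video.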
One problem with punishment is that it creates another opportunity to free-ride. Dishing out punishment is costly, and individuals can sit back and let others do the hard work of punishing while still contributing just enough to avoid punishment themselves (these are called second-order free-riders). This isn’t seen in my simple visualisation, because individuals always punish free-riders, but it is seen in experimental public goods games using human volunteers. It can be solved by introducing centralised punishment, whereby punishment is dished out from a central pot of money which people optionally pay into. When punishment is levied against both non-cooperators and those refusing to contribute to punishment, second-order free-riding becomes unstable.
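Centralised punishment can be sketched on the same toy model: squares voluntarily pay a levy into a shared pool, and the pool is then spent fining both low contributors and squares that dodged the levy (the second-order free-riders). As before, the names and numbers are illustrative only, and this reuses the threshold from the peer-punishment sketch:

```csharp
const double levy = 0.1;         // optional contribution to the punishment pool (illustrative)
const double centralFine = 0.6;  // fine taken from each sanctioned square (illustrative)

bool[,] paysLevy = new bool[size, size];  // does this square fund punishment?
double punishmentPool = 0.0;

void CentralPunish()
{
    // Collect the voluntary levy.
    for (int x = 0; x < size; x++)
    for (int y = 0; y < size; y++)
        if (paysLevy[x, y] && resources[x, y] >= levy)
        {
            resources[x, y] -= levy;
            punishmentPool += levy;
        }

    // Spend the pool sanctioning under-contributors and levy-dodgers alike.
    for (int x = 0; x < size; x++)
    for (int y = 0; y < size; y++)
    {
        bool sanction = coopLevel[x, y] < punishThreshold || !paysLevy[x, y];
        if (sanction && punishmentPool >= centralFine)
        {
            punishmentPool -= centralFine;  // sanctioning spends the pool
            resources[x, y] -= centralFine;
        }
    }
}
```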
There are interesting parallels between centralised punishment strategies and human legal institutions like courts, prisons and police forces. While these simple models and experiments are a long way from the complexity of real human societies, they can nevertheless be used to test the logic of hypotheses and help us understand some of the fundamental factors underpinning human cooperative behaviour. Is cooperation within the large-scale societies we see today only possible because we have developed centralised institutions to punish non-conformers? It’s an intriguing, if not very optimistic, idea.
Other explanations
Punishment is one mechanism by which cooperation can be maintained, but there is still plenty we don’t know. Centralised and/or peer-to-peer punishment may play an important role in shaping human societies and the diversity they exhibit, but it is not the only explanation for cooperation. Kin selection can explain cooperation between groups of genetically related individuals and is frequently used to explain cooperation in non-human animals. Another hypothesis suggests that, in certain situations, altruism could act as an honest signal of fitness (if you can afford to be altruistic, you must be doing well for yourself), raising the prestige of generous group members and bringing other benefits, such as being preferred as a mate.
Cooperation in some animals is more ad hoc and can simply depend on the relative pay-offs. Pied wagtails will cooperate with a small number of helpers to defend their territory, but only when the energy balance works in their favour: the food eaten by the helpers is less than the food that would otherwise be lost to invaders.
Humans solve collective action problems over much greater scales than any other animal. We’ve developed complex social structures and rules to limit social cheating, including institutions which allow centralised punishment and enforcement to ensure broad compliance in many collective action problems. There are undoubtedly other important processes, and many questions still left to answer; perhaps we’ll explore some in a future post.
The model/visualisation
The videos were produced by a simple simulation written in C# using the Unity game engine. In case anybody wants to have a play, I’ll upload the source code and executables for the model to GitHub in the next day or two, along with a description of how the model works and an explanation of the various parameters. There are options to have more than one group, in which case cooperation is only possible between members of the same group (that’s what the flags are all about, by the way); this makes it possible, for example, to see which conditions favour invasion by new groups. Different filters can be used to visualise the fitness, age and wealth of individuals or groups.