Nectunt Blog

Social Dilemmas and Human Behavior

Rationality and cooperation (I).


In my first entry on this blog, let me address a topic that constitutes a key point in almost any study of human behavior, namely rationality. The word rational belongs to everyday language, and its meaning depends strongly on the context in which it is used. Even in its technical sense, rationality does not have a single meaning: it is understood differently in evolutionary biology, sociology, economics and politics. And even within a particular science, rationality may take different forms: the German sociologist Max Weber distinguished four types of rationality. Like any other sociological approach, this interpretation has its detractors, among them the pragmatists. In this entry, I will try to discuss what could be called rational in a number of situations modeled by game theory, which does not always coincide with the classical definition of rationality as utility maximization.

(Spanish version here)

To avoid getting lost in the welter of different interpretations of rationality, let us narrow the subject down to the central topic of this blog: human cooperation and game theory. Game theory is an area of applied mathematics that uses models to study interactions with formalized incentive structures. Usually, a rational decision is related to the payoff that it would yield. In order to properly analyze the incentives and preferences of individuals in the process of decision making, let us discuss some concepts about actions, payoffs and preferences. A preference relation is complete if, for any pair of actions, a player prefers one of them over the other or is indifferent between the two. If the game is well defined, all the actions can be ranked in a complete ordering of preference. The transitivity rule states that, given three actions, if the first action is better than the second one, and the second action is better than the third one, then the first action is better than the third one. It seems reasonable to assume that rational players will follow this rule.
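
As a toy illustration of completeness and transitivity, here is a minimal Python sketch; the three actions and the particular ranking are invented for the example:

from itertools import permutations

# Hypothetical weak-preference relation over three actions:
# prefers[(a, b)] is True when action a is at least as good as action b.
actions = ["A", "B", "C"]
prefers = {
    ("A", "A"): True, ("B", "B"): True, ("C", "C"): True,
    ("A", "B"): True, ("B", "A"): False,
    ("B", "C"): True, ("C", "B"): False,
    ("A", "C"): True, ("C", "A"): False,
}

# Completeness: for every pair of actions, at least one direction of preference holds.
complete = all(prefers[(a, b)] or prefers[(b, a)]
               for a in actions for b in actions)

# Transitivity: if a is at least as good as b, and b at least as good as c,
# then a must be at least as good as c.
transitive = all(not (prefers[(a, b)] and prefers[(b, c)]) or prefers[(a, c)]
                 for a, b, c in permutations(actions, 3))

print(complete, transitive)  # True True for the ranking A >= B >= C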

In any case, although game theory belongs to mathematics (i.e., the exact sciences), some of its aspects are amenable to different interpretations. What should we expect from a rational player? In this respect, there are different views: in economics, Homo Economicus refers to individuals who act to obtain the highest possible outcome given the available information. Applied to game theory, Homo Economicus makes his choices so as to maximize his expected payoff. Accordingly, rational players are assumed to have perfect rationality, that is, they always act in a way that maximizes their payoff, being capable of arbitrarily complex deductive processes to achieve that end. This notion is opposed to that of Homo reciprocans, which corresponds to individuals who are motivated to improve the common good.

In the framework of game theory, the Prisoner’s Dilemma has become a standard for the study of cooperative behavior. It is a two-player, two-action game, where each player chooses one of the two available actions, cooperation or defection. Given the ordering of the payoffs, whatever the adversary’s action is, the expected payoff is higher for defection. Nevertheless, the sum of both players’ payoffs is greatest when both choose to cooperate, while the lowest payoff is received when both players defect. Accordingly, since defection is the strict best response both to itself and to cooperation, a rational player (in the sense previously described) will always choose to defect. Moreover, considering that human (and, in general, biological) interactions can occur repeatedly, cooperative behavior is often studied through evolutionary game theory. In the iterated Prisoner’s Dilemma, both players play repeatedly knowing the previous actions of their opponent, and can change their strategy accordingly. This version of the game widens the range of possible outcomes, which now depend on one’s own strategy and on the opponent’s. A present action can determine the future actions of our opponent, and therefore our future payoffs.
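
A minimal Python sketch of these claims, assuming one common choice of payoff values (T=5, R=3, P=1, S=0; only the ordering T > R > P > S matters) and, purely for illustration, a tit-for-tat player facing an unconditional defector in the iterated version:

# Row player's payoffs for one common parametrisation of the Prisoner's
# Dilemma: T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
T, R, P, S = 5, 3, 1, 0
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

# Defection is the strict best response to either action of the opponent...
for other in ("C", "D"):
    assert payoff[("D", other)] > payoff[("C", other)]

# ...yet mutual cooperation yields a higher total payoff than mutual defection.
assert 2 * R > 2 * P

# A tiny iterated version: tit-for-tat (start with C, then copy the
# opponent's last move) against an unconditional defector, 10 rounds.
def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history_a, history_b)
        b = strategy_b(history_b, history_a)
        score_a += payoff[(a, b)]
        score_b += payoff[(b, a)]
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

tit_for_tat = lambda own, other: "C" if not other else other[-1]
always_defect = lambda own, other: "D"
print(play(tit_for_tat, always_defect))  # (9, 14): one exploitation, then mutual defection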

Figure 1. Schematic representation of the Prisoner’s Dilemma (left) and Stag Hunt (right). CXJJensen & Riestenberg.


To study the problem in more depth, we can adopt one set of assumptions or another. The concept of superrationality is based on the idea that two superrational players (playing a symmetric game) will adopt the same strategy. Although the idea is quite old (its roots go back to Kant), in its present form it is owed to Douglas Hofstadter (whose book “Gödel, Escher, Bach: An Eternal Golden Braid” many of us have enjoyed). A pair of superrational agents playing an iterated Prisoner’s Dilemma will not choose to defect (at least, not repeatedly).

This concept of superrationality differs from the (much more widely accepted) concept of rationality based on the idea of the Nash equilibrium, which in turn relies on payoff maximization without imposing constraints linking the players’ strategies, even though in a symmetric game the equilibrium strategy may be the same for both players. In the Prisoner’s Dilemma, mutual defection is the only strict Nash equilibrium, that is, it is the only outcome such that either player could only obtain a lower payoff by unilaterally changing his strategy.
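
This can be checked by brute force over the four pure-strategy profiles; the sketch below reuses the illustrative payoff values from before:

from itertools import product

T, R, P, S = 5, 3, 1, 0
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def is_strict_nash(a, b):
    """True if each player would get strictly less by deviating unilaterally."""
    other_a = "D" if a == "C" else "C"
    other_b = "D" if b == "C" else "C"
    return payoff[(a, b)] > payoff[(other_a, b)] and payoff[(b, a)] > payoff[(other_b, a)]

print([(a, b) for a, b in product("CD", repeat=2) if is_strict_nash(a, b)])
# [('D', 'D')] -- mutual defection is the only (strict) pure-strategy equilibrium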

On the other hand, when we defined rational players, we said they were “capable of arbitrarily complex deductive processes to achieve that end”. How complex can their reasoning methods be? There are no limits other than the constraints of the game (such as the time available). In 2012, William H. Press and Freeman Dyson introduced zero-determinant strategies, a new class of strategies for the iterated Prisoner’s Dilemma. Some of these strategies allow one player to set the other player’s payoff, or to extort the opponent so that the opponent’s payoff remains below his own. The extorted player could defect, but by doing so he would obtain a lower payoff than by playing a certain mixed strategy (cooperating with a given probability); defecting would only hurt himself. Forcing the opponent to cooperate in order to raise his payoff, as a way of increasing one’s own benefits, may or may not be considered rational behavior, depending on the meaning of “rational behavior”.
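
A numerical sketch of this kind of extortion, under stated assumptions: the memory-one strategy below uses the cooperation probabilities often quoted for the payoffs T=5, R=3, P=1, S=0 (sometimes called “Extort-2”, which enforces own payoff − P = 2 × (opponent’s payoff − P) in the long run), and the opponent is, purely for illustration, an agent who cooperates with a fixed probability q each round; the names simulate, extort2 and q are just for this sketch.

import random

random.seed(1)
T, R, P, S = 5, 3, 1, 0
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

# "Extort-2": probability that the extortioner cooperates, conditioned on the
# previous round's outcome (own move, opponent's move).
extort2 = {("C", "C"): 8/9, ("C", "D"): 1/2, ("D", "C"): 1/3, ("D", "D"): 0.0}

def simulate(rounds=200_000, q=0.7):
    """Extort-2 against a hypothetical opponent who cooperates with probability q."""
    mine, yours = "C", "C"            # arbitrary opening moves
    total_me = total_you = 0.0
    for _ in range(rounds):
        total_me += payoff[(mine, yours)]
        total_you += payoff[(yours, mine)]
        # next moves: the extortioner reacts to the last outcome, the opponent is memoryless
        mine = "C" if random.random() < extort2[(mine, yours)] else "D"
        yours = "C" if random.random() < q else "D"
    return total_me / rounds, total_you / rounds

s_mine, s_yours = simulate()
print(s_mine - P, 2 * (s_yours - P))  # the two values should be roughly equal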

The definition of a rational player gets more complicated when we consider other factors related to real life. Let us consider a situation in which two interchangeable agents have the same two actions available as before, cooperate or defect. Now, unlike in the Prisoner’s Dilemma, a given agent will not obtain the highest payoff by choosing a fixed action regardless of his opponent’s choice; his optimal action varies depending on the other player’s choice. This situation is modeled in game theory through coordination and anticoordination games. In particular, in coordination games both players obtain higher payoffs when they choose the same action. The Stag Hunt game is a paradigmatic example of a coordination game, and it can be obtained by reordering the payoffs of the Prisoner’s Dilemma. In the Stag Hunt, the payoffs obtained when both players choose to cooperate are greater than those obtained when both choose to defect. Nevertheless, there are two pure-strategy Nash equilibria, corresponding to the combinations in which both players choose the same action with probability equal to one. Given that both pure strategies are Nash equilibria, and that mutual cooperation provides higher payoffs than mutual defection, is it more rational to choose cooperation than defection? Without considering further aspects, the answer is not obvious. The problem is actually more complicated, since cooperation involves more risk than defection: when the actions of the two players do not match, the player who cooperates receives the lowest payoff. This brings us to the next question: suppose a lottery in which the expected payoff is greater than the cost of the bet; does a rational player bet independently of other variables? In other words, how correlated is risk aversion with rationality? In a future entry of this blog we will address the relation between risk and cooperation.
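
As before, a small Python sketch with one illustrative Stag Hunt parametrisation (the Prisoner’s Dilemma values used above with the two largest entries swapped, so that R > T > P > S) shows the two pure equilibria and why cooperating is the riskier choice:

from itertools import product

# Illustrative Stag Hunt payoffs: R=5, T=3, P=1, S=0.
R, T, P, S = 5, 3, 1, 0
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def is_nash(a, b):
    """True if neither player gains by unilaterally switching actions."""
    other = {"C": "D", "D": "C"}
    return (payoff[(a, b)] >= payoff[(other[a], b)]
            and payoff[(b, a)] >= payoff[(other[b], a)])

print([ab for ab in product("CD", repeat=2) if is_nash(*ab)])
# [('C', 'C'), ('D', 'D')] -- both coordinated outcomes are equilibria.

# The risk of cooperating: expected payoff of each action against an opponent
# who cooperates with probability q. With these numbers, cooperation only
# pays off on average when q > 1/3.
for q in (0.2, 1/3, 0.6):
    print(q, q * R + (1 - q) * S, q * T + (1 - q) * P)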


Author: Carlos Gracia

Researcher at the Institute for Biocomputation and Physics of Complex Systems (BIFI), Universidad de Zaragoza, Spain.
