## A New Foundation for Game Theory

### Written by: C. B. Garcia and W. I. Zangwill

**Professors of Management Science at the Booth School of Business (both retired)**

Revised August 18, 2018, from Garcia and Zangwill [8, 9].

**Keywords**: game theory, prisoner’s dilemma, Bayesian, subjective probabilities

**Abstract**: Von Neumann and Morgenstern (VNM), using the expected utility hypothesis, provided the fundamental formulation of the game theory problem. Until now, however, that formulation has been difficult to solve without imposing additional assumptions. Nash had to assume that the players were decoupled, so that the probability of player A taking an action was independent of the probability of player B taking an action. In this paper we eliminate Nash’s assumptions, including the assumption that players’ strategies are common knowledge, and propound a model that is fully equivalent to the general VNM problem. Our easily solvable formulation eliminates some of the inherent difficulties with the Nash approach, which often produced contradictory and counterintuitive results, e.g., in the prisoner’s dilemma, the chicken game, Newcomb’s paradox, the stag hunt and many other games. For instance, by dropping Nash’s mutual independence assumption in the prisoner’s dilemma, our model demonstrates that the players are able to achieve superior payoffs, and to achieve that, they need not play cooperatively or communicate, but merely apply Bayes’ theorem, in the style of Harsanyi [10] and Kadane and Larkey [11]. Our approach divides the probability space into two half-spaces or regions, whose relative size depends upon the payoffs. One then need not estimate the probability precisely, but only determine which region it is in. This provides significant advantages: if one region is considerably larger than the other, this immediately yields substantial insight about how to play the game. Our general solution, which is not correlated, say in the sense of Aumann [1], contains the Nash equilibria as particular solutions. In contrast to the descriptive Nash solutions, our solution is a prescriptive pair of rational-expectations pure strategies, yielding a new foundation for game theory.
We extend our approach to general M-Person games, as we illustrate in the rock-paper-scissors game and the bar-crowding problem.

**Summary of Results.**

We now summarize some results, based upon the details and explicit payoffs provided below. We believe these results demonstrate the value of our approach for teaching and research as the results often present new solutions.

**Coordination game**: The Nash assumption of independence misses the superior Bayesian approach we take. For the payoffs provided below, play the first strategy if you believe that the opponent’s probability of playing its first strategy is at least 1 / 3; else play the second strategy. Nash provides no insight about when to apply which strategy. Also, if the payoffs are changed, our approach provides revised probabilities.

**Battle of the sexes**: Two parties differ on where they should go, but are not allowed to communicate. Both parties obtain a good payoff if they both go to the same choice, since at least they are both together. A given party will get a bonus if they both go to that party’s choice. Neither gets a good payoff if they go to different places. Given the payoffs presented below, player A should play its desired strategy if it believes the other player will also select A’s desired choice with probability of at least 33%. In contrast, Nash provides three equilibria without any insight into which to play when, and no analysis of the probabilities.

**Matching pennies**: Two players, Even and Odd, simultaneously reveal a penny. If the pennies match, Even keeps both pennies; otherwise Odd keeps both pennies. The unique Nash equilibrium for this zero-sum game is for both players to play randomly. Given the payoffs below, Even should play heads if it believes that Odd will play heads with probability of at least 50%. On the other hand, Odd should play heads if it believes that Even will play heads with probability of at most 50%.

**Chicken game**: Two cars are speeding towards each other, about to have a head-on crash. Nash suggests one car should swerve and the other go straight, but offers little insight into which should swerve. Given the payoffs below, our approach suggests you swerve if you believe that the opponent will swerve with probability of at most 90%; else go straight. Observe here that both players swerving (or both going straight) is not a Nash equilibrium, but that both players swerving (or both going straight) in the expectation that the opponent will go straight (or swerve) is an equilibrium scenario. Also, if the payoffs are changed, our approach provides updated probabilities.

**Arms race**: Each country initially stockpiles arms lest it be attacked. But as demonstrated below, diminishing returns on stockpiling arms materialize, opening an opportunity for a peace treaty. Nash does not identify the opportunity for the peace treaty.

**Stag hunt**: Hunt stag if you believe that the opponent will hunt stag with probability at least 50%; else hunt hare. (The pure Nash equilibria are for both to hunt stag, or for both to hunt hare.)

**Newcomb’s problem**: If Newcomb’s problem is posed as a prisoner’s dilemma, the solution to Newcomb’s problem can be arrived at in two ways: as the non-cooperative Nash equilibrium using the dominance principle, or as a cooperative solution using the expected utility hypothesis.

**Rock-paper-scissors game**: The Nash equilibrium is for you to play a 3-sided die randomly. What appears to be a new strategy for this ancient game is for you to play rock if you believe that your opponent will play paper with probability of at most 33% and scissors with probability of at least 33%; to play paper if you believe that your opponent will play scissors with probability of at most 33% and rock with probability of at least 33%; else to play scissors. (Our approach can help you if, say, you have data on your opponent’s previous plays of the game.)

**Bar-crowding game**: Three friends A, B, and C decide whether to go to a bar. Anyone who goes to the bar alone gets nothing – staying home is a better choice. If two friends go to the bar, that is the best option. If all three go, the bar throws all three out. The Nash equilibria are for all to stay home, or for all to play their first strategy with probability equal to 33%. But if you have any insight into your friends and can estimate the Bayesian probabilities of their behavior, our strategy can help.

We also extend our approach to the M-person game and obtain similar insights. For example, we show the complete solution for general 2-person games and for general 3-person × 2-strategy games.

**The Expected Utility Hypothesis.**

In a 2-Person game, let players A and B have 2 strategies: A1 or A2 for player A, and B1 or B2 for player B.

The basis for expected utility theory is the von Neumann–Morgenstern utility theorem (von Neumann and Morgenstern [20]): let Aij and Bij be the payoffs to players A and B respectively if player A plays Ai and player B plays Bj, for i, j = 1 or 2. The expected utility hypothesis states that players A and B must maximize their expected payoffs^{[1]}:
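A reconstruction of the missing display, inferred from the definitions in the surrounding text:

```latex
% Problems (1): each player maximizes its expected payoff over its joint probabilities.
\max_{p^A}\; \sum_{i=1}^{2}\sum_{j=1}^{2} A_{ij}\; p^A(A_i \text{ and } B_j),
\qquad
\max_{p^B}\; \sum_{i=1}^{2}\sum_{j=1}^{2} B_{ij}\; p^B(A_i \text{ and } B_j).
\tag{1}
```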

where pA(Ai and Bj) is player A’s probability that A plays Ai and B plays Bj, and similarly for player B.

**Conditional Probabilities ^{[1]}. **

For our approach, we *drop* Nash’s assumption that players’ probabilities are mutually independent. This allows our problem (1) to be more general and to admit more solutions that satisfy the expected utility hypothesis.

Let EP(A | Ai) and EP(B | Bj) be the expected payoffs^{[2]},^{[3]} of A and B respectively given that A plays Ai and B plays Bj, for i, j = 1, 2:
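A reconstruction of the missing display, inferred from the expansions used in the proof of Corollary 2:

```latex
EP(A \mid A_i) = A_{i1}\, p^A(B_1 \mid A_i) + A_{i2}\, p^A(B_2 \mid A_i),
\qquad
EP(B \mid B_j) = B_{1j}\, p^B(A_1 \mid B_j) + B_{2j}\, p^B(A_2 \mid B_j).
```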

Let us begin by proving an *elementary “Bayesian” theorem of games*, which demonstrates the equivalence of our approach to the VNM formulation:

**Theorem 1**^{[5]}**.** Problems (3) below are equivalent to problems (1)^{[6]}:
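A reconstruction of the missing display, inferred from the proof that follows (player A chooses its own strategy probabilities; similarly for B):

```latex
\max\; p^A(A_1)\, EP(A \mid A_1) + p^A(A_2)\, EP(A \mid A_2)
\quad \text{s.t.} \quad p^A(A_1) + p^A(A_2) = 1,\; p^A(A_i) \ge 0.
\tag{3}
```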

**Proof.** The maximum^{[7]} of the above equation is pA(A1) = 1 (i.e., play strategy A1) if EP(A | A1) ≥ EP(A | A2), or pA(A1) = 0 (i.e., play strategy A2) if EP(A | A1) ≤ EP(A | A2). Hence, (3) holds for player A. A similar argument holds for player B. Q.E.D.

**VNM Regions. **

Define the VNM regions A1 and A2 to be the convex polytopes:
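A reconstruction of the missing display (5), using the inequality derived in the proof of Corollary 2:

```latex
\mathbf{A1} = \{(p^A(B_1 \mid A_1),\, p^A(B_2 \mid A_2)) \in [0,1]^2 :
  (A_{11}-A_{12})\, p^A(B_1 \mid A_1) + (A_{21}-A_{22})\, p^A(B_2 \mid A_2) + A_{12} - A_{21} \ge 0\},
\tag{5}
\mathbf{A2} = \{(p^A(B_1 \mid A_1),\, p^A(B_2 \mid A_2)) \in [0,1]^2 :
  (A_{11}-A_{12})\, p^A(B_1 \mid A_1) + (A_{21}-A_{22})\, p^A(B_2 \mid A_2) + A_{12} - A_{21} \le 0\}.
```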

As shown below, A should play strategy A1 if it expects B to be in region A1; otherwise, A should play A2. The equilibrium line separates the probability space into the two regions and provides a visually helpful means of analyzing the situation^{[8]}.
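The equilibrium line itself can be reconstructed as the boundary shared by the two regions:

```latex
(A_{11}-A_{12})\, p^A(B_1 \mid A_1) + (A_{21}-A_{22})\, p^A(B_2 \mid A_2) + A_{12} - A_{21} = 0.
```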

Importance of the Regions: The two regions are important practically, as now one need not estimate the probability precisely, but only determine which of two regions it is in. Frequently, it will be seen that the prior probability is likely to be in one region, and identification of that region is sufficient information to suggest the appropriate play of the game. For example, suppose region A1 is considerably larger than the other, so the probability is quite likely to be in that region A1. This provides compelling information that player A will likely play A1.

Analogously for B:
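A reconstruction of the missing display (7), using the inequality derived in the proof of Corollary 2:

```latex
\mathbf{B1} = \{(p^B(A_1 \mid B_1),\, p^B(A_2 \mid B_2)) \in [0,1]^2 :
  (B_{11}-B_{21})\, p^B(A_1 \mid B_1) + (B_{12}-B_{22})\, p^B(A_2 \mid B_2) + B_{21} - B_{12} \ge 0\},
\tag{7}
\mathbf{B2} = \{(p^B(A_1 \mid B_1),\, p^B(A_2 \mid B_2)) \in [0,1]^2 :
  (B_{11}-B_{21})\, p^B(A_1 \mid B_1) + (B_{12}-B_{22})\, p^B(A_2 \mid B_2) + B_{21} - B_{12} \le 0\}.
```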

The VNM regions are dependent on the players’ prior probability distributions, often simply called the priors (Jaynes [13]; Harsanyi [10]; Kadane and Larkey [11]), which are the players’ expression of beliefs about the probability distribution of their opponent. ^{[9]}

**Corollary 2.** Given (3), A plays strategy A1 if and only if it expects player B to be in VNM region A1. Else, A plays strategy A2. Similarly, B plays strategy B1 if and only if it expects player A to be in VNM region B1. Else, B plays strategy B2.

**Proof.** EP(A | A1) ≥ EP(A | A2) if and only if A11 pA(B1 | A1) + A12 pA(B2 | A1) ≥ A21 pA(B1 | A2) + A22 pA(B2 | A2), if and only if (A11 – A12) pA(B1 | A1) + (A21 – A22) pA(B2 | A2) + A12 – A21 ≥ 0.

Similarly, EP(B | B1) ≥ EP(B | B2) if and only if B11 pB(A1 | B1) + B21 pB(A2 | B1) ≥ B12 pB(A1 | B2) + B22 pB(A2 | B2), if and only if (B11 – B21) pB(A1 | B1) + (B12 – B22) pB(A2 | B2) + B21 – B12 ≥ 0. Q.E.D.

From Theorem 1 and Corollary 2, for points in the regions (5) and (7), the expected utility hypothesis holds, i.e., the VNM regions define the general solution to the 2-Person game^{[10]}.


**Nash Equilibrium. **


If the players’ probabilities are mutually independent, the VNM regions simplify to:
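A reconstruction of the missing display (8), obtained by substituting pA(B1 | A1) = pA(B1 | A2) = pA(B1) and pB(A1 | B1) = pB(A1 | B2) = pB(A1) into (5) and (7):

```latex
\mathbf{A1}: (A_{11}-A_{12}-A_{21}+A_{22})\, p^A(B_1) + A_{12} - A_{22} \ge 0, \qquad \mathbf{A2}: \le 0;
\tag{8}
\mathbf{B1}: (B_{11}-B_{12}-B_{21}+B_{22})\, p^B(A_1) + B_{21} - B_{22} \ge 0, \qquad \mathbf{B2}: \le 0.
```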

**Proposition 3.** Suppose a Nash equilibrium (p(A1), p(B1)) is in VNM region Ai and VNM region Bj respectively, for some i, j = 1, 2. Then, player A will play strategy Ai and player B will play strategy Bj.

**Proof. **Nash’s equilibrium problem is problem (1), where pA(Ai and Bj) = pB(Ai and Bj) = p(Ai) p(Bj), or problem (3), where pA(Bj | Ai) = p(Bj) and pB(Ai | Bj) = p(Ai), for i, j = 1, 2. Thus, Corollary 2 holds, where VNM regions are defined by (8), for pA(B1) = p(B1) and pB(A1) = p(A1). Q.E.D.

Recall that the equilibrium equations separate the VNM regions, thereby yielding the general solution to any game. These same equilibrium equations, where pB(A1) = p(A1) and pA(B1) = p(B1), yield the mixed Nash equilibrium^{[11]}, as we show in the table below.
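A reconstruction of the missing display (9), obtained by solving the equilibrium equations of (8) for the pair (a, b) = (p(A1), p(B1)):

```latex
a = p(A_1) = \frac{B_{22}-B_{21}}{B_{11}-B_{12}-B_{21}+B_{22}},
\qquad
b = p(B_1) = \frac{A_{22}-A_{12}}{A_{11}-A_{12}-A_{21}+A_{22}}.
\tag{9}
```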

**Proposition 4.** Given any game A = [[A11, A12], [A21, A22]] and B = [[B11, B12], [B21, B22]], the Nash equilibria for the game are calculated from the applicable row of Table 1^{[12]}.

**Proof.** Observe that (i, j) is a pure Nash equilibrium if and only if sgn(2i – 1) * (A11 – A21) > 0 and sgn(2j – 1) * (B11 – B12) > 0, for i, j = 0, 1. Using this fact, for each row in Table 1, we list all the pairs (i, j) that are pure Nash equilibria.

Finally, for the pair (a, b) defined by (9) to be a mixed Nash equilibrium, we need only show that 0 < a < 1 and 0 < b < 1. But note that for rows 6, 7, 10 and 11 of Table 1, the numerator and denominator of a, 1 – a, b or 1 – b are both positive or both negative; hence a, 1 – a, b, 1 – b are all greater than 0. Q.E.D.
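The mixed pair (a, b) of Proposition 4 can be computed directly. The following sketch (ours, not the paper’s code) uses the indifference conditions of the equilibrium equations under independence; it assumes the denominators are nonzero, which fails in degenerate games such as the iterated dominance example below:

```python
# Sketch: the mixed Nash equilibrium of a 2x2 game, from the indifference
# ("equilibrium line") conditions under independence, as in Proposition 4.
# Payoffs follow the paper's convention A = [[A11, A12], [A21, A22]].

def mixed_nash(A, B):
    """Return (a, b) = (p(A1), p(B1)) making each opponent indifferent."""
    (A11, A12), (A21, A22) = A
    (B11, B12), (B21, B22) = B
    # b = p(B1) makes A indifferent: (A11-A12-A21+A22)*b + A12 - A22 = 0
    b = (A22 - A12) / (A11 - A12 - A21 + A22)
    # a = p(A1) makes B indifferent: (B11-B12-B21+B22)*a + B21 - B22 = 0
    a = (B22 - B21) / (B11 - B12 - B21 + B22)
    return a, b

# Matching pennies: a = b = 1/2.
print(mixed_nash([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]))    # (0.5, 0.5)
# Chicken game: a = b = 9/10.
print(mixed_nash([[0, -1], [1, -10]], [[0, 1], [-1, -10]]))  # (0.9, 0.9)
```

The two printed pairs match the mixed equilibria stated in the matching pennies and chicken examples below.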


**Iterated Dominance Example ^{[13]}. **

Let A = [[2, 2], [3, 1]] and B = [[0, 1], [0, 2]]. “Play A1 & B2” is the Nash equilibrium.

**Proposition 5.** Given A = [[2, 2], [3, 1]] and B = [[0, 1], [0, 2]], player A will play A1 and player B will play B2.

**Proof.** VNM region A1 is: pA(B2 | A2) ≥ 1 / 2, and VNM region B2 is: pB(A2 | B2) ≥ -1. Hence, player B will play B2. Player A also knows this to be the case, hence pA(B2 | A2) = 1. Since pA(B2 | A2) = 1 is a point in VNM region A1, player A plays A1. Q.E.D.

**Coordination Example. **

Let A = B = [[2, 0], [0, 1]]. There are 3 Nash equilibrium points: “play A1 & B1”, “play A2 & B2”, and “play A1 (or B1) with probability 1 / 3”. VNM region A1 is: 2pA(B1 | A1) ≥ pA(B2 | A2) and VNM region B1 is: 2pB(A1 | B1) ≥ pB(A2 | B2). By analyzing these VNM regions visually, A and B will likely choose strategies A1 and B1 respectively.

**Proposition 6.** Given A = B = [[2, 0], [0, 1]], if the players’ probabilities are mutually independent, then play the first strategy if you believe that the opponent’s probability of playing its first strategy is at least 1 / 3, else play the second strategy.

**Proof.** VNM region A1 is: pA(B1) ≥ 1 / 3 and VNM region B1 is: pB(A1) ≥ 1 / 3. Q.E.D.


**Battle of the Sexes Example. **

Let A = [[3, 1], [1, 2]] and B = [[2, 1], [1, 3]]. There are 3 Nash equilibrium points: “play A1 & B1”, “play A2 & B2”, and “play A1 with probability 2 / 3, play B1 with probability 1 / 3”. VNM region A1 is: 2pA(B1 | A1) ≥ pA(B2 | A2) and VNM region B1 is: pB(A1 | B1) ≥ 2pB(A2 | B2). A would rather choose A1 and B would rather choose B2.

**Proposition 7.** Given A = [[3, 1], [1, 2]] and B = [[2, 1], [1, 3]], if the players’ probabilities are mutually independent, then: play A1 if pA(B1) ≥ 1 / 3, else play A2; play B1 if pB(A1) ≥ 2 / 3, else play B2.

**Proof.** The VNM region A1 is: pA(B1) ≥ 1 / 3 and VNM region B1 is: pB(A1) ≥ 2 / 3. Q.E.D.


**Matching Pennies Example. **

Let A = [[1, -1], [-1, 1]] and B = [[-1, 1], [1, -1]]. This zero-sum game has a mixed Nash equilibrium: “play A1 with probability 1 / 2, play B1 with probability 1 / 2”.

**Proposition 8.** Given A = [[1, -1], [-1, 1]] and B = [[-1, 1], [1, -1]], if the players’ probabilities are mutually independent, then: play A1 if pA(B1) ≥ 1 / 2, else play A2; play B1 if pB(A1) ≤ 1 / 2, else play B2^{[14]}.

**Proof.** The VNM region A1 is: pA(B1) ≥ 1 / 2 and VNM region B1 is: pB(A1) ≤ 1 / 2. Q.E.D.


**Chicken Game Example (Sugden [19]). **

Let A = [[0, -1], [1, -10]] and B = [[0, 1], [-1, -10]]. The Nash equilibria are “play A1 (swerve) & B2 (go straight)”, “play A2 (go straight) & B1 (swerve)” and “play A1 (B1) with probability 0.9”.

**Proposition 9.** In the chicken game, if the players’ probabilities are mutually independent, then: swerve if you believe that the opponent will swerve with probability of at most 90%, else go straight.


**Proof.** The VNM region A1 is: pA(B1) + 11pA(B2) ≥ 2, or pA(B1) ≤ 9 / 10. Similarly, the VNM region B1 is: pB(A1) ≤ 9 / 10. Q.E.D.

Observe that if your opponent shows too much enthusiasm (at least 90%) to swerve, then you ought to go straight.

Preferred scenario: The players are more likely to swerve than to go straight.

Chicken scenario: Suppose pA(B1) = pB(A1) = 0. Both players expect the other player to go straight. Both will swerve.

Catastrophe scenario: Suppose pA(B1) = pB(A1) = 1. Both players expect the other player to swerve. Both will go straight^{[15]}.

Nash equilibria scenario: Suppose pA(B1) = 1 – pB(A1), and pB(A1) = 0 or 1. The player who expects the other player to go straight will swerve, and the player who expects the other player to swerve will go straight.

**Arms Race Example. **

In Proposition 9, let A = [[0, -x], [1, -10x]], B = [[0, 1], [-y, -10y]], for x, y ≥ 0. Let A1 or B1 be “seek peace” and A2 or B2 be “nuclear attack”. The values x and y denote the arms stockpile of B and A respectively.

Country A seeks peace if the probability that country B attacks is greater than 1 / (9x + 1); otherwise A attacks. The threshold curve pA(B2) = 1 / (9x + 1) drops quickly, e.g., pA(B2) = 1 / 2 at x = 1 / 9, but soon dramatically flattens: B must quickly stockpile initially, but as the curve flattens, there will be little benefit to B for stockpiling arms.
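The diminishing returns are easy to see numerically. A small sketch of ours, tabulating the attack-probability threshold 1 / (9x + 1) above which country A seeks peace:

```python
# Sketch: the peace-seeking threshold curve from the arms-race example.
# Under the payoffs A = [[0, -x], [1, -10x]], country A seeks peace when its
# probability that B attacks exceeds 1/(9x + 1).

def attack_threshold(x):
    """Probability of attack above which A seeks peace, given B's stockpile x."""
    return 1 / (9 * x + 1)

print(attack_threshold(1 / 9))  # 0.5: early stockpiling moves the threshold fast
print(attack_threshold(1))      # 0.1
print(attack_threshold(10))     # ~0.011: further stockpiling barely helps
```

The curve drops steeply near x = 0 and then flattens, which is the diminishing-returns effect the text describes.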

And similarly for country B.

In summary, each country initially stockpiles arms lest it be attacked. But rapidly diminishing returns on stockpiling arms materialize, opening an opportunity for seeking a peace treaty.


As illustration, consider the 2018 estimated global nuclear stockpile^{[16]} of Table 2.

Based on the payoffs above and Table 2, a rational North Korea ought to seek a peace treaty with the United States and Russia.

**Stag Hunt Example (Skyrms [16]). **

Let A = [[4, 1], [3, 2]] and B = [[4, 3], [1, 2]]. The Nash equilibria are “play A1 (Stag) & B1 (Stag)”, “play A2 (Hare) & B2 (Hare)” and “play A1 (B1) with probability 0.5”.

**Proposition 10.** In the stag hunt, if the players’ probabilities are mutually independent, then: hunt stag if you believe that the opponent will hunt stag with probability of at least 50%, else hunt hare.


**Proof.** The VNM region A1 is: 3pA(B1) + pA(B2) ≥ 2, or pA(B1) ≥ 1 / 2. Similarly, the VNM region B1 is: pB(A1) ≥ 1 / 2. Q.E.D.


**Prisoner’s Dilemma ^{[17]}. **

Let A12 < A22 < A11 < A21, and let B equal the transpose of A. Since A11 < A21 and A12 < A22, the use of the dominance principle yields the Nash equilibrium, namely the non-cooperative solution “play A2 (defect) and B2 (defect)”. But since A22 < A11, A and B are better off if they both play the cooperative solution “play A1 (silence) and B1 (silence)”.

**Proposition 11.** In the prisoner’s dilemma, if the players’ probabilities are mutually independent, then players play non-cooperatively^{[18]}.

**Proof.** Consider the left-hand side of VNM region A1:

(A11 – A12 – A21 + A22) pA(B1) + A12 – A22.

If A11 – A12 – A21 + A22 ≤ 0, then (A11 – A12 – A21 + A22) pA(B1) + A12 – A22 ≤ A12 – A22 < 0. On the other hand, if A11 – A12 – A21 + A22 > 0, then (A11 – A12 – A21 + A22) pA(B1) + A12 – A22 ≤ (A11 – A12 – A21 + A22) + A12 – A22 = A11 – A21 < 0. Thus, for any prior for player A, VNM region A1 is the null set, hence it must play strategy 2.

Similarly, player B must play strategy 2. Q.E.D.
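A numerical check of Proposition 11 (a sketch of ours, using the classical payoffs A = [[-1, -3], [0, -2]] discussed below):

```python
# Sketch: under independence, the VNM region A1 of the classical prisoner's
# dilemma is empty. The left-hand side (A11-A12-A21+A22)*p + A12 - A22 is
# negative for every prior p = pA(B1) in [0, 1], so A must defect.

A11, A12, A21, A22 = -1, -3, 0, -2

def region_A1_lhs(p):
    return (A11 - A12 - A21 + A22) * p + A12 - A22

# Scan priors on a fine grid; no prior puts A in region A1.
assert all(region_A1_lhs(i / 1000) < 0 for i in range(1001))
print("VNM region A1 is empty: player A defects for every prior")
```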


Proposition 11 clearly shows that the assumption of independence restricts us to the non-cooperative solution.


**Classical Prisoner’s Dilemma Example. **

In the classical prisoner’s dilemma, A = [[-1, -3], [0, -2]] and B = [[-1, 0], [-3, -2]].


**Proposition 12.** In the classical prisoner’s dilemma, if the players’ priors are: pA(B1 | A1) + pA(B2 | A2) ≥ 3 / 2 and pB(A1 | B1) + pB(A2 | B2) ≥ 3 / 2, then the players will play the cooperative solution^{[19]}.

**Proof.** The VNM region A1 is: pA(B1 | A1) + pA(B2 | A2) ≥ 3 / 2, and the VNM region B1 is: pB(A1 | B1) + pB(A2 | B2) ≥ 3 / 2. Hence, for the given priors, players A and B must play the cooperative solution. Q.E.D.

In Proposition 12, note the high bar required for playing the cooperative solution; the players would rather play the non-cooperative solution.

**An Instance Where the Nash Approach Fails to Consider Playing the Cooperative Strategy. **

Consider the prisoner’s dilemma where A11 – A12 = A21 – A22, A21 = A11 + m and A22 = A11 – M, where m > 0 is small and M > 0 is very large. For example, A = [[100, -3], [101, -2]]. Recall from Proposition 11 that if the players’ probabilities are mutually independent, then players will play non-cooperatively.

Evidently, it would be foolish for the players not even to consider playing strategy 1: if a player plays 2 and the other player also plays 2, both suffer a significant loss, so why risk it? Clearly, the Nash approach fails to consider playing the cooperative solution even when it is the obvious solution to play – a very important point in, say, discussions of market breakdowns in general economic equilibrium models.

On the other hand, as the next proposition shows, by dropping the assumption of independence, our approach will play the cooperative solution rather than the non-cooperative solution.

The black line is the indifference line for the classical prisoner’s dilemma. A player is more likely to play strategy 2 because of the unlikely probability of being in the region for playing strategy 1.

The green line is the indifference line for this instance of the prisoner’s dilemma: pA(B1|A1) + pA(B2|A2) = 1 + m / (M + m). Here, the size of the probability region for strategy 1 is almost that for strategy 2. Our approach is advising the players to consider playing strategy 1.

**Proposition 13.** Given a prisoner’s dilemma where A11 – A12 = A21 – A22, A21 = A11 + m and A22 = A11 – M, where m > 0 is small and M > 0 is very large, players A and B will play the cooperative solution^{[20]}.

[20] Therefore, players will not play the non-cooperative solution. Currently, to reach the cooperative solution, assumptions are added, e.g., bounded rationality, incomplete information (Aumann and Maschler [2]; Acevedo and Krueger [4]; Daley and Sadowski [6]), or new methods are described, e.g., tit-for-tat, correlated equilibria (Axelrod [3]; Aumann [1]).

**Proof.** Given A’s expected joint probabilities pA(Ai and Bj), A concludes that pA(A1 and B1) must be near 1. That is because A and B are likely to play strategy 1, where their payoffs are quite high and only m units less than maximal.

Therefore, pA(B1 | A1) = pA(A1 and B1) / pA(A1) must also be near 1.

A also concludes that pA(A2 and B2) ≥ pA(A2 and B1) since B is likelier to play strategy 2 if A is playing strategy 2. Hence pA(B2 | A2) = pA(A2 and B2) / (pA(A2 and B1) + pA(A2 and B2)) ≥ 1 / 2. A concludes, using Fig. 1, that B is sufficiently inside VNM region A1. Similarly, B will play strategy 1. Q.E.D.
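The beliefs argued for in the proof can be checked against the indifference line. A sketch of ours, using the example payoffs A = [[100, -3], [101, -2]] (so m = 1, M = 102) and illustrative belief values:

```python
# Sketch: the near-cooperative prisoner's dilemma of Proposition 13.
# The indifference line is pA(B1|A1) + pA(B2|A2) = 1 + m/(M + m), barely above 1,
# so beliefs like those in the proof (pA(B1|A1) near 1, pA(B2|A2) >= 1/2)
# lie well inside VNM region A1.

m, M = 1, 102
threshold = 1 + m / (M + m)          # ~1.0097

p_B1_given_A1 = 0.95                 # "near 1", as the proof concludes
p_B2_given_A2 = 0.5                  # the proof's lower bound
print(p_B1_given_A1 + p_B2_given_A2 >= threshold)  # True: play cooperatively
```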

**Newcomb’s Paradox as a Version of the Prisoner’s Dilemma. **

In the famous Newcomb’s paradox (Wolpert and Benford [21]), there is a predictor B, a player A and a box X. The player A is given the choice of taking the box X or the box X plus $1,000. Before A makes its selection, B predicts what A will do, and B’s predictions are almost certain. If B predicts that A will take only box X, then B puts $1,000,000 in box X. In this case, as the box has $1,000,000 in it, A will receive $1,000,000 or $1,001,000 depending on whether A chooses box X or X plus $1,000. On the other hand, if B predicts that A will take box X plus $1,000, then B places nothing in box X. In this case, depending on its choice, A either receives $1,000 or nothing.

Newcomb’s paradox is that two perfectly rational analyses give conflicting answers to player A’s optimization problem: under the expected utility hypothesis, player A should take only box X, since the expected payoff of taking X is much higher. On the other hand, under the dominance principle, player A should take box X plus $1,000.

The paradox is best understood by a passage in (Wolpert and Benford [21]): “…Newcomb said that he would just take X; why fight a God-like being? However, Nozick said, ‘To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.’…”.

Wolpert and Benford resolve the paradox by showing that Newcomb’s problem actually represents two different games with different probabilistic outcomes.

In this section, we will resolve the paradox by posing Newcomb’s problem as a prisoner’s dilemma. In so doing, the solution to Newcomb’s problem can be arrived at in two ways: as the non-cooperative solution (take box X plus $1,000) using the dominance principle, or as the cooperative solution (take only box X) using the expected utility hypothesis.

Suppose there is a rich benefactor who promises to fund a payoff matrix for predictor B, yielding the following game: A = [[$1,000,000, 0], [$1,001,000, $1,000]] and B = [[$1,000,000, $1,001,000], [0, $1,000]].

If B predicts correctly, B gets what player A gets. But if B predicts wrongly, B gets $1,001,000 minus what A gets^{[21]}.

From Proposition 13, players A and B will play cooperatively in this game.

If like Nash, the player solves the problem using the dominance principle, so does the predictor. Both predictor and player will be at the non-cooperative solution: take X plus $1,000. If the player solves the problem using the expected utility hypothesis, so does the predictor, and both predictor and player will be at the cooperative solution: take only X. In either case, the predictor’s prediction is certain. Since from Proposition 13, players will not play the non-cooperative solution, we agree with Newcomb that cooperation is the obvious strategy to take.

[21] Note that by posing Newcomb’s problem as a PD problem, the predictor is given a personal incentive that is absent in Newcomb’s problem.

Note in Fig. 1, however, the region for cooperation is negligibly smaller than that for non-cooperation. It is then not surprising to us if people divide evenly on which strategy to take.

**A Generalization of the Prisoner’s Dilemma to M-Persons. **

In order to better understand how the Nash solution might break down in general economic equilibrium models, let us generalize the prisoner’s dilemma to M-Persons, with each player having 2 strategies, for M ≥ 2.

Let us describe the M-Person game via binary trees.

Fig. 2 is the prisoner’s dilemma payoff for player A. Tree(2, 1) is the binary tree with player B (player 2) as parent, and player A (player 1) as child. To obtain the payoff for player B, simply switch the roles of parent and child to Tree(1, 2). Recall that for the prisoner’s dilemma, A12 < A22 < A11 < A21.

Next, suppose Tree(M – 1, M – 2,…, 2, 1) denotes player A’s payoff for an (M – 1)-Person game, for M ≥ 3. Construct player A’s payoff Tree(M, M – 1,…, 2, 1) for an M-Person game by letting player A’s Tree(M – 1, M – 2,…, 2, 1) be the sub trees on both branches of parent player M.

The numerical values of the payoff on the right sub tree are assumed different from those on the left sub tree, as long as the relationship A12 < A22 < A11 < A21 is maintained everywhere in the tree.

Finally, given Tree(M, M – 1, …, 2, 1) for player A, create Tree(1, M, M – 1, …, 3, 2) for player B (player 2) by making 1 the highest parent; Tree(1, 2, M, M – 1, …, 4, 3) for player 3 by making 2 the second highest parent, …, Tree(1, 2, 3,…, M – 2, M, M – 1) for player M – 1 by making M – 2 the third lowest child, Tree (1, 2, 3, …, M – 1, M) for player M by making M – 1 the second lowest child.

This completes the description of the players’ payoffs for an M-Person prisoner’s dilemma game, with each player having 2 strategies.

**Theorem 14.** For the M-Person prisoner’s dilemma, M ≥ 2, using the dominance principle, the Nash solution is for the players to play strategy 2.

**Proof.** We already know that the theorem holds for M = 2. Assume by induction that the theorem holds for M – 1, for M ≥ 3. Let us show that the theorem holds for M.

Given Tree(M, M – 1, …, 2, 1) for player A, recall that by construction, the sub trees on the left and right branches are of the form Tree(M – 1, M – 2,…, 2, 1) for player 1, Tree(M, M – 1,…, 2) for player 2, Tree(2, M, M – 1, …, 4, 3) for player 3, …, Tree(2, …, M – 2, M, M – 1) for player M – 1. These sub trees are identical for players 1, 2,…, M – 1, except for the labelling on the parents’ nodes. Note that each player’s strategy 2 dominates its strategy 1 under any condition. By induction, using the dominance principle, players 1 to M – 1 will play strategy 2.

Therefore, given Tree(1, 2, …, M – 1, M) for player M, if M plays 1, the payoff for player M is b (the second rightmost node of the tree) whereas if M plays 2, the payoff for player M is A22 (the rightmost node of the tree). By the dominance principle, since A12 < A22, player M will also play strategy 2. Q.E.D.

Now suppose any payoff of the type A11 is much larger than any payoff of the type A22; and that A21 = A11 + m, where payoffs A11 and A21 are in adjacent nodes.

Clearly, the Nash approach fails to consider playing the cooperative solution “play strategy 1” even when it is the obvious solution to play.

Following the inductive argument of Theorem 14, we can also conclude that, since the sub trees on the left and right branches are of the form Tree(M – 1, M – 2,…, 2, 1) for player 1, Tree(M – 1, M – 2,…, 2) for player 2, Tree(2, M, M – 1, …, 4, 3) for player 3, …, Tree(2, …, M – 2, M, M – 1) for player M – 1, by induction, using the expected utility hypothesis, players 1 to M – 1 will play strategy 1 where the payoff is of the type A11.

Therefore, given Tree(1, 2, …, M – 1, M) for player M, if M plays 1, the payoff for player M is a (the leftmost node of the tree) whereas if M plays 2, the payoff for player M is A21 = A11 + m (the second leftmost node of the tree). Since A11 < A21, player M may be tempted to play strategy 2. But why risk playing strategy 2 for m units more than A11, when it could lead to a payoff of the type A22, a payoff significantly less than A11?

By the expected utility hypothesis, player M must also play strategy 1.

**General M-person Games. **

Finally, we generalize Theorem 1 for general M-person games.

Let there be M players, where each player i has ni possible strategies for each i = 1, 2,…, M. Given the strategy vector (j1, j2,…, jM), let the payoff to player i be Ai_{j1j2…jM}. Let xi be a mixed strategy for player i, i.e., a strategy xi where Σj xij = 1, xij ≥ 0 for all j, and let x = (xi, x-i) denote the strategies of all players. Nash’s problem is:
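A reconstruction of the missing display, inferred from the definitions above and the description below:

```latex
\max_{x^i}\; EP(i \mid x^{-i})
= \max_{x^i}\; \sum_{j_1, \dots, j_M} A^i_{j_1 j_2 \cdots j_M}\; \prod_{k=1}^{M} x^k_{j_k}.
```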

where EP(i | x-i) is the expected payoff to player i given x-i and where the summation is over all jk and all k.

A strategy x* is a Nash equilibrium if xi* is a solution to player i’s problem above, given x-i*.

For our approach, let pi_{j1j2…jM} be player i’s expected probability that each player k plays jk, for all k. The von Neumann–Morgenstern expected utility theory says that player i’s goal is to maximize its expected payoff:
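A reconstruction of the missing display, inferred from the surrounding text; the label (11) is inferred from the statement of Theorem 15:

```latex
\max_{p^i}\; \sum_{j_i} \sum_{j_{-i}} A^i_{j_1 j_2 \cdots j_M}\;
p^i(i \text{ plays } j_i \text{ and } {-i} \text{ plays } j_{-i}),
\tag{11}
```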

where “-i plays j-i” means that player k plays jk for each k ≠ i, and where the summation is over all jk, for all k ≠ i.


**Theorem 15.** Problems (13) below are equivalent to problems (11):
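A reconstruction of the missing displays (13) and (14), inferred from the proof below and from the analogy with (3):

```latex
% (13): the conditional-probability problems, analogous to (3).
\max\; \sum_{j_i} p^i(i \text{ plays } j_i)\, EP(i \mid i \text{ plays } j_i),
\quad \text{where} \quad
EP(i \mid i \text{ plays } j_i)
= \sum_{j_{-i}} A^i_{j_1 \cdots j_M}\, p^i({-i} \text{ plays } j_{-i} \mid i \text{ plays } j_i),
\tag{13}
% (14): the conditional probabilities used in (13).
p^i({-i} \text{ plays } j_{-i} \mid i \text{ plays } j_i)
= \frac{p^i(i \text{ plays } j_i \text{ and } {-i} \text{ plays } j_{-i})}
       {\sum_{r_{-i}} p^i(i \text{ plays } j_i \text{ and } {-i} \text{ plays } r_{-i})},
\tag{14}
```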

where the summation is over all rk, for any k ≠ i.

**Proof.** The denominator of (14) is the probability pi(i plays ji). Hence,

Since Σji pi(i plays ji) = 1 and pi(i plays ji) ≥ 0 for all ji, it follows that player i plays strategy [arg maxji EP(i | i plays ji)]. Q.E.D.

A method for finding the best strategy for player i is as follows: For any pair of strategies for player i, say strategy r and strategy s, calculate the locus of points where i’s expected payoffs conditional on player i playing either r or s are equal. This defines an indifference surface that divides the conditional probability space into 2 VNM regions. One VNM region is labelled r because the strategy of choice is r, and the other VNM region is labelled s because the strategy of choice is s.

After the calculations above, every VNM region will have been labeled as many times as there are distinct pairs of strategies. For any given VNM region, take any two of the multiple labels and eliminate one of them based on the indifference surface created by this pair of labels. The process ends when every VNM region has only one label.

**General 2-person Games. **

Let player A have strategies Ai, i = 1, 2, … n1 and player B have strategies Bj, j = 1, 2, … n2. Assume that the players’ probabilities are mutually independent. Problem (13) is:
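A reconstruction of the missing display (16), inferred from Theorem 15 under independence:

```latex
\max\; \sum_{i=1}^{n_1} p^A(A_i)\, EP(A \mid A_i),
\qquad
EP(A \mid A_i) = \sum_{j=1}^{n_2} A_{ij}\, p^A(B_j),
\tag{16}
```

and similarly for player B.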

As can be observed in (16), finding the solution set to a general 2-person game is straightforward. For example, consider the over-two-thousand-year-old Rock-Paper-Scissors game, where the Nash equilibrium is: play any strategy with 33% probability:

Strategy A1 or B1 (rock) loses to strategy A2 or B2 (paper), which loses to strategy A3 or B3 (scissors), which loses to rock.

For player A, in general we have, where 0 ≤ pA(Bj) ≤ 1,

And similarly for player B.

What appears to be a new strategy for this ancient game is: play rock if you believe that your opponent will play paper with probability at most 33% and scissors with probability at least 33%; play paper if you believe that your opponent will play scissors with probability at most 33% and rock with probability at least 33%; otherwise, play scissors.^{[19]}
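As a check on this prescription, here is a minimal sketch (not from the paper) that compares the argmax of expected payoffs against the threshold rule just stated, using the standard Rock–Paper–Scissors payoffs of 1 for a win, −1 for a loss, and 0 for a tie:

```python
# Payoff to the row player; rows and columns are (rock, paper, scissors).
PAYOFF = [
    [0, -1, 1],   # rock: ties rock, loses to paper, beats scissors
    [1, 0, -1],   # paper
    [-1, 1, 0],   # scissors
]

def best_reply(belief):
    """Index of the strategy maximizing expected payoff under `belief`,
    a probability vector over the opponent's (rock, paper, scissors)."""
    expected = [sum(row[j] * belief[j] for j in range(3)) for row in PAYOFF]
    return expected.index(max(expected))

def threshold_rule(belief):
    """The 33% threshold prescription stated in the text."""
    rock_p, paper_p, scissors_p = belief
    if paper_p <= 1/3 and scissors_p >= 1/3:
        return 0  # rock
    if scissors_p <= 1/3 and rock_p >= 1/3:
        return 1  # paper
    return 2      # scissors
```

Away from the indifference surfaces, the two functions agree, e.g. for the belief (0.2, 0.2, 0.6) both prescribe rock.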

**3-person Games Where Each Person Has 2 Strategies. **

Let us apply Theorem 15 to find the solution set of a 3-person game, in which each player A, B, and C has 2 strategies, Ai, Bi, and Ci for i = 1, 2, respectively.

Assume that the players’ probabilities are mutually independent. For player A, equation (13) is

If the player is at home, its payoff is 1; if the player is alone at the bar, its payoff is 0; if the player is at the bar with another person, its payoff is 2; else, its payoff is -1.

We have A111 – A211 = –2, A112 – A212 = A121 – A221 = 1, and A122 – A222 = –1; hence VNM region A1 is the region –3pA(B1)pA(C1) + 2pA(B1) + 2pA(C1) – 1 ≥ 0, or equivalently the region^{[22]} pA(B1) ≥ (1 – 2pA(C1)) / (2 – 3pA(C1)). Similarly, VNM region B1 is the region pB(A1) ≥ (1 – 2pB(C1)) / (2 – 3pB(C1)), and VNM region C1 is the region pC(B1) ≥ (1 – 2pC(A1)) / (2 – 3pC(A1)). The Nash equilibria are p(A1) = p(B1) = p(C1) = 1 and p(A1) = p(B1) = p(C1) = 1/3.
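A small sketch (not from the paper; the function names are assumptions) that encodes the stated region A1 and its boundary, with b = pA(B1) and c = pA(C1). The boundary passes through the points listed in footnote [22] and through the mixed Nash point (1/3, 1/3):

```python
# VNM region A1 as given in the text: -3*b*c + 2*b + 2*c - 1 >= 0,
# where b = pA(B1) and c = pA(C1).
def region_a1(b, c):
    return -3*b*c + 2*b + 2*c - 1 >= 0

# Its boundary, b = (1 - 2c) / (2 - 3c), valid where 2 - 3c != 0.
def boundary(c):
    return (1 - 2*c) / (2 - 3*c)
```

For instance, boundary(0) = 0.5 and boundary(0.5) = 0, the endpoints of the quadratic locus in footnote [22], and the pure Nash point b = c = 1 lies (weakly) inside region A1.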


**Acknowledgement. **

We would like to thank Al Roth and Todd Davies for their invaluable advice and guidance in the preparation of this paper.

**Footnotes**

[1] For simplicity, we make the common assumption that utility is a linear function of the payoff (Starmer [18]). Hence, maximizing expected utility is the same as maximizing expected payoff.

[2] Our Bayesian approach for games differs from prior Bayesian work (for examples, Acevedo and Krueger [4]; Aumann [1]; Daley and Sadowski [6]; McKelvey and Palfrey [12]; Quattrone and Tversky [15]) in that, unlike the other approaches, our approach tethers conditional probabilities unequivocally to the expected utility hypothesis, which our solution always satisfies.

[3] A critic states “rational players do not and should not consider conditional probabilities…Imagine an agent who knows that the probability of rain is p. Your ‘solution’ seems to be that the agent should take an umbrella with him if it rains and leave the umbrella if it doesn’t rain”.

Theorem 1 shows that the first criticism is unwarranted. As for the second, let EP(agent | bring an umbrella) = p and EP(agent | don’t bring an umbrella) = 1 – p. Our solution would then be: bring an umbrella if p ≥ 1/2; don’t bring an umbrella if p ≤ 1/2.

[4] The conditional probabilities of (2) do not violate the principle in Spohn [17]: “Any adequate quantitative decision model must not explicitly or implicitly contain any subjective probabilities for acts…” A player’s conditional probabilities are subjective probabilities for the opponent’s strategies, not for its own strategies.

[5] This theorem will be generalized to one for M-person games.

[6] There is no signaling between the players.

[7] The independent variables pA(B1 | A1) and pA(B2 | A2) are assumed given in the maximization problem, a simplification that avoids the problem of infinite regress (similar to Nash’s assumption that p(B1) is given for player A in the formulation of his maximization problem).

[8] Inequality (5) is the (discovered) solution to the problem (1) in the same manner that the quadratic formula is the solution to a general quadratic equation.

[9] The player’s priors may be dependent on partially observable random events, such as the weather. For the use of priors in games with incomplete information played by Bayesian players, please refer to (Harsanyi [10]).

[10] This general solution contains the Nash equilibria as particular solutions. In contrast to the descriptive Nash solutions, our solution is a pair of prescriptive rational-expectations pure strategies. Moreover, if by mistake, player A is in VNM region A1 and plays A2, Corollary 2 states that player A will get a lower expected payoff.

[11] It is interesting to note that at a Nash mixed equilibrium, a player’s strategy is dependent on knowing the other player’s payoff function.

[12] Zero signs are ignored in the table, since these cases are degenerate: a player is unable to choose between its two strategies. Also, it is interesting to note that each Nash equilibrium appears in exactly four rows.

[13] The next 3 examples are adapted from (Davies [7]) in a manner that might serve as a pedagogical technique for students in game theory. Table 1 may be used to quickly find the Nash equilibria for all the 2-person game examples described herein.

[14] A’s actions do not impact B’s choice of actions. This is because A’s beliefs are uncorrelated to B’s beliefs. On the other hand, if beliefs are correlated, then both players’ probabilities must equal 50%, otherwise, if say the players’ probabilities are both > 50%, A knows that B will play strategy 2 (tails), hence playing strategy 1 (heads) cannot be a correct prescription for A. If say, A’s probability is > 50% and B’s probability is < 50%, B knows that A will play heads, hence playing heads cannot be a correct prescription for A. Etc. The unique solution is therefore the Nash equilibrium: play randomly for both.

[15] Note that pA(B1) = pB(A1) = 0 or 1 is an equilibrium scenario: both players swerve (or both go straight) if each expects the other to go straight (or to swerve). In contrast, p(A1) = p(B1) = 0 or 1 cannot be a Nash equilibrium: if B goes straight (or swerves), A will swerve (or go straight).

[16] Sources: Arms Control Association, Federation of American Scientists, International Panel on Fissile Materials, U.S. Department of Defense, U.S. Department of State and Stockholm International Peace Research Institute.

[17] Since Flood and Dresher’s original paper, thousands of articles have been published on the prisoner’s dilemma. A Google Scholar search for “prisoner’s dilemma” yields 104,000 results as of this writing. See (Kuhn [14]).

[18] Therefore, players will not play the cooperative solution.

[19] If your opponent plays non-randomly, your prior may be influenced by your opponent’s previous plays of this game.

[20] The formula can be extended to M-persons, for M > 3.

[21] This game is based on the El Farol bar problem (Arthur [5]).

[22] The locus of indifference is a quadratic curve passing through the points (pA(C1), pA(B1)) = (0.5, 0), (0.33, 0.33), (0, 0.5).

**References**

[1] Aumann RJ (1974) Subjectivity and Correlation in Randomized Strategies. Journal of Mathematical Economics 1:67-96

[2] Aumann RJ, Maschler M (1995) Repeated Games with Incomplete Information. MIT Press, Cambridge London

[3] Axelrod R (1984) The Evolution of Cooperation. Basic Books

[4] Acevedo M, Krueger JI (2005) Evidential Reasoning in the Prisoner’s Dilemma. The American Journal of Psychology 118:431-457

[5] Arthur WB (1994) Inductive Reasoning and Bounded Rationality. American Economic Review 84:406-411

[6] Daley B, Sadowski P (2017) Magical Thinking: A Representation Result. Theoretical Economics 12:909-956

[7] Davies T (2004) Utility Theory and Game Theory. Lecture Notes

[8] Garcia CB, Zangwill WI (2017) A New Approach to War or Peace. Working paper

[9] Garcia CB, Zangwill WI (2018) Dominance, Expected Utility and the Prisoner’s Dilemma. Working paper

[10] Harsanyi J (1967) Games With Incomplete Information Played by “Bayesian” Players I – III. J. Management Science 14 (3):159-182

[11] Kadane JB, Larkey PD (1982) Subjective Probability and the Theory of Games. Management Science 28 (2):113-120

[12] McKelvey RD, Palfrey TR (1995) Quantal Response Equilibria for Normal Form Games. Games and Economic Behavior 10:6-38

[13] Jaynes ET (1968) Prior Probabilities. IEEE Transactions on Systems Science and Cybernetics 4 (3):227-241

[14] Kuhn S (2017) Prisoner’s Dilemma. The Stanford Encyclopedia of Philosophy

[15] Quattrone GA, Tversky A (1984) Causal Versus Diagnostic Contingencies: On Self-deception and on the Voter’s Illusion. Journal of Personality and Social Psychology 46:237-248

[16] Skyrms B (2004) The Stag Hunt and the Evolution of Social Structure. Cambridge University Press, Cambridge

[17] Spohn W (1977) Where Luce and Krantz Do Really Generalize Savage’s Decision Model. Erkenntnis 11: 113-134

[18] Starmer C (2000) Developments in non-expected utility theory: the hunt for a descriptive theory of choice under risk. Journal of Economic Literature 38:332-382

[19] Sugden R (2005) The Economics of Rights, Cooperation and Welfare. Palgrave MacMillan, 2 edition:132

[20] Von Neumann J, Morgenstern O (1953) Theory of Games and Economic Behavior. Princeton University Press, New Jersey

[21] Wolpert DH, Benford G (2011) The Lesson of Newcomb’s Paradox. Synthese 190:1637-1646
