Scissors and Rock: Game Theory for Those Who Manage
Manfred J. Holler · Barbara Klose-Ullmann
Manfred J. Holler University of Hamburg Hamburg, Germany
Barbara Klose-Ullmann Center of Conflict Resolution Munich, Germany
ISBN 978-3-030-44822-6    ISBN 978-3-030-44823-3 (eBook)
https://doi.org/10.1007/978-3-030-44823-3

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface: Introduction and Warnings
Analytical statistics claims that there are two ways to make wrong decisions: a correct hypothesis is rejected or, alternatively, an incorrect hypothesis is accepted. In this book, you will learn about a third type of wrong decision and how to handle it. The essence of this type of failure is that decision makers either ignore that the results of their decisions depend on the decisions of others or cannot deal with this interdependency. The reason for the latter could be the complexity of the decision situation. However, it could also be the result of a lack of tools. Game theory is such a tool. It helps to understand the complexity of such decisions, and in many cases, it filters out inadequate decisions.

International politics, parlor games like Chess, and the schoolyard game Rock-Scissors-Paper exhibit decision situations in which the results of decision making depend on the choices of more than one decision maker. The managing of game theory can support the managing of decision situations when decisions are interdependent and strategic reasoning is required, i.e., putting oneself into the shoes of the other. It is also of help in the designing and redesigning of decision situations, i.e., “changing the game,” known more formally as mechanism design. The design of auctions is just one example; the writing of a constitution is another. Obviously, mechanism design is an important instrument for politicians and business managers. However, it is also relevant for everybody who manages decision situations—which includes most of us. Game theory is the key. This is the focus of the present book.

The book has three major heroes: Niccolò Machiavelli, Adam Smith, and George Washington. In fact, Washington accomplished what Adam Smith suggested on the last page of his Wealth of Nations:
“If any of the provinces of the British empire cannot be made to contribute toward the support of the whole empire, it is surely time that Great Britain should free herself from the expence of defending those provinces in time of war, and of supporting any part of their civil or military establishments in time of peace, and accommodate her future views and design to the real mediocrity of her circumstances” (Smith 1981[1776/77]: 947).
King George III and his government did not follow Smith’s recommendation, and the American colonies became independent after the War of Independence. We will not discuss George Washington any further in this book, but he is our prototype of “the man who managed.” Much of what follows can be applied to his life and career.

In 1740, Voltaire arranged for the publication of “The Refutation of Machiavelli’s Prince or Anti-Machiavel,” written by Frederick of Prussia, probably the most prominent Machiavelli critic. Prince Royal Frederick developed a model of an enlightened prince who considered himself a “first servant” of his State and a reliable agent in the interplay with fellow princes. However, when, in 1740, he succeeded his father as King of Prussia, his actual behavior was heavily influenced by the recipes suggested in Machiavelli’s Il Principe. He may have been Machiavelli’s most successful student and ardent follower.

There are a number of other heroes in this book: Johann Wolfgang von Goethe, who applied the Vickrey auction scheme when selling his manuscript of “Hermann and Dorothea” to the publisher Hans Friedrich Vieweg in Berlin; Joseph Heller and Peter Handke, who contributed “strategic inspiration” through their novels Catch-22 and The Goalie’s Anxiety at the Penalty Kick, respectively; the widely quoted Chinese military strategist Sun Tzu, who suggested that “to a surrounded enemy you must leave a way of escape”; Napoleon, who studied Machiavelli’s Il Principe and sent his troops to Moscow where they died of hunger and cold; Émile Borel, who was possibly the first to define the game of strategy “in which the winnings depend simultaneously on chance and the skill of the player”—and who also proposed a thought experiment that entered popular culture under the name “infinite monkey theorem”; John von Neumann, who proved the Minimax Theorem and thereby initiated the birth of game theory; and, of course, John Nash, whose outstanding contributions to game theory not only earned him a Nobel Prize but also triggered a biography and, most prominently, a movie with the title A Beautiful Mind. There is a long list of Nobel Prize winners who have been celebrated because of their work in game theory, and there is even
a much longer list of scholars who contributed to game theory’s development and application—and thus to its popularity.

Specifically, this book is about strategic mistakes and how to avoid them. A first technique is: we have to think strategically; the right approach is to use game theory, the theory of strategic thinking, in order to get a better understanding of the decision situation. However, to apply game theory, you have to learn it. This book will give you a well-structured introduction to game-theoretical thinking and its basic methods and concepts. Every decision maker should study the basic concepts offered in this book. They are highly relevant not only in cases of conflict but also in cases of cooperation—and of coordination problems. A better understanding of strategic problems and knowledge of possible solutions is extremely important to identify social or political conflicts, irrespective of whether the conflicts are between nations or family members, and to avoid them.

While the German version of this book (“Spieltheorie für Manager”1) focused on introducing game theory as a tool kit for solving strategic decision problems, the present version emphasizes the role of game theory as a means to identify the complexity of decision situations and to thereby obtain a better understanding of the world we live in and of the decisions we have to make. Of course, the latter does not exclude learning about tools which help to solve problems that involve strategic thinking. Needless to say: this book is not a literal translation of the German version.

In his The Picture of Dorian Gray, Oscar Wilde (1997 [1890]: 30) characterized Sir Thomas Burdon, a radical member of Parliament, with the notorious observation: “Like all people who try to exhaust a subject, he exhausted his listeners.” In this book, we do not want to exhaust the subject as we do not want to exhaust our readers.
We are sure that readers with some knowledge of game theory can easily find important issues that are missing in our text. We strongly suggest that advanced readers study what is offered here and then verify whether they have learned something from it. However, readers who have so far been protected against game theory can sit down, enjoy the text, and get nervous about the thought experiments with which they will be confronted. Unfortunately, you have to manage game theory when you want to apply it.
1 Holler and Klose-Ullmann (2007), Spieltheorie für Manager: Handbuch für Strategen, 2nd edition, Munich: Verlag Franz Vahlen. Material in the present book also derives from Holler et al. (2019), Einführung in die Spieltheorie, 8th edition, Berlin: Springer Gabler.
Of course, we will not conclude our preface without illustrating the concept of strategic thinking and giving an explanation of the title of this book: “Scissors and Rock.”

It is not unlikely that you played Rock-Scissors-Paper2 in the schoolyard. It is a two-person game played with hands. Players have to choose whether they want to show a closed fist, representing a “rock”; a fist with two fingers sticking out forming a V, representing “scissors”; or a flat hand, representing “paper.” The “rock” spoils the “scissors”; the “scissors” cut the “paper”; the “paper” wraps the “rock.” Each alternative has the potential to “beat” another one, but is in danger of being defeated by a third alternative. These relations define losing and winning. If players choose identical alternatives, the particular round ends in a draw.

What alternative will you choose if choices are simultaneous and you want to win? Of course, if you find out that your opponent chooses “paper” more often than the two other alternatives, you will choose “scissors” more often than “paper” or “rock.” If you decide to choose “scissors” all the time, the opponent will realize your bias and perhaps switch to “rock” more often than you expected. If you do not want to be exploited by your opponent, try to choose all three alternatives with equal probability. (The strategic decision problem is rather similar to the Penalty-kick game analyzed in Sect. 10.8.) In equilibrium, both players choose each of the three alternatives with probability one-third.

But there is Clever Mary who invites Sweet Paul to choose his alternative first; then she will choose hers. This is how “Scissors and Rock” prevailed. In fact, no matter what Sweet Paul chooses, Clever Mary always has a winning alternative—obviously, there is a second-mover advantage if the game is played sequentially. This is the reason why we see this game played simultaneously in schoolyards.
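The reasoning above can be checked with a few lines of code. The following is our own minimal sketch, not from the book, assuming the conventional payoffs of +1 for a win, −1 for a loss, and 0 for a draw: a biased opponent invites exploitation, while the uniform one-third mix cannot be exploited.

```python
# Rock-Scissors-Paper: payoff to the row player (+1 win, -1 loss, 0 draw).
# Order of alternatives: rock, scissors, paper.
PAYOFF = [
    [0, 1, -1],   # rock:     beats scissors, loses to paper
    [-1, 0, 1],   # scissors: loses to rock, cuts paper
    [1, -1, 0],   # paper:    wraps rock, loses to scissors
]

def expected_payoffs(opponent_mix):
    """Expected payoff of each pure alternative against an opponent mix."""
    return [sum(p * q for p, q in zip(row, opponent_mix)) for row in PAYOFF]

# An opponent biased toward paper is best answered with scissors...
biased = [0.2, 0.2, 0.6]          # P(rock), P(scissors), P(paper)
values = expected_payoffs(biased)
best = max(range(3), key=lambda i: values[i])
print(["rock", "scissors", "paper"][best])   # scissors

# ...whereas the uniform mix (1/3, 1/3, 1/3) cannot be exploited:
# every pure alternative then yields an expected payoff of zero.
uniform = [1 / 3, 1 / 3, 1 / 3]
print(expected_payoffs(uniform))
```

The same best-response logic is what Clever Mary uses in the sequential game: once Paul's choice is observed, his "mix" is degenerate, and one alternative always wins against it.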
Outside of schoolyards, again and again, decision makers try to slip into the role of Clever Mary and invite a Sweet Paul for a first move. It is not always a matter of politeness if somebody invites you to go first. The example shows the possible power inherent in designing a game. A sequential structure is not always to the disadvantage of the first mover. If Sweet Paul succeeds in reducing the set of alternatives to two elements, e.g., Scissors and Rock, then there is a first-mover advantage. Paul will choose Rock and win. If, different from Rock-Scissors-Paper, the game does not contain conflicting interests, a sequential structure may help to choose a successful solution to a coordination problem and implement an efficient outcome.

2 For details, see Sect. 10.9.

Then,
in general, the order of moves does not matter and—irrespective of whether a third or fourth move may exist—a cooperative outcome prevails.

Given this, we would like to thank Gregor Berz, Andreas Diekmann, Gudrun Keintzel-Schön, Norbert Leudemann, Hannu Nurmi, Florian Rupp, and Ernst Strouhal for their valuable support—and the inspiration which we received from them. We are grateful to Raymond Russ at the University of Maine, who read the complete text and made very valuable suggestions. Of course, many others inspired us while writing this text. Thank you very much!

Hamburg, Germany    Manfred J. Holler
Munich, Germany     Barbara Klose-Ullmann
References

Holler, M. J., Illing, G., & Napel, S. (2019). Einführung in die Spieltheorie (8th ed.). Berlin: Springer Gabler.
Holler, M. J., & Klose-Ullmann, B. (2007). Spieltheorie für Manager: Handbuch für Strategen (2nd ed.). Munich: Vahlen.
Smith, A. (1981 [1776/77]). An inquiry into the nature and causes of the wealth of nations (R. H. Campbell & A. S. Skinner, Eds.). Indianapolis: Liberty Press.
Wilde, O. (1997 [1890]). Collected works of Oscar Wilde. Ware: Wordsworth Editions.
Contents

1 Playing for Susan 1
   1.1 Thinking Strategically 2
   1.2 Why not Learn Game Theory? 4
   1.3 The Working of the Invisible Hand 6
   1.4 The Real World and Its Models 10
   1.5 Winner-Takes-It-All and the Chicken Game 12
   1.6 The Essence of Game Theory, the Brain, and Empathy 15
   1.7 Strategic Thinking that Failed—Perhaps 18
   References 20

2 No Mathematics 23
   2.1 Historical Note I: The Pioneers 23
   2.2 The Concept of Sets 27
   2.3 Prices and Quantities 30
   2.4 From Set to Mapping and Function 31
   2.5 Utilities, Payoff Functions, and Strategy Vectors 33
   2.6 Monkeys Write Shakespeare, but Where Is Hamlet? 35
   References 38

3 The Prisoners’ Dilemma, but Who Are the Players? 39
   3.1 From Game Form to Payoff Matrix 39
   3.2 Equilibrium in Dominant Strategies 44
   3.3 Catch-22 and Other Social Traps 45
   3.4 Ways Out of the Dilemma 47
   3.5 Who Are the Players? 49
   3.6 Then Strike 52
   3.7 Tosca’s Dominant Strategy 55
   References 57

4 The Nash Equilibrium 59
   4.1 On the Definition of the Nash Equilibrium 59
   4.2 Historical Note II: Nash and the Nash Equilibrium 62
   4.3 Nash Equilibria and Chicken Game 63
   4.4 Inefficient Equilibria in the QWERTY-DSK Game 67
   4.5 Who Are the Players in the QWERTY-DSK Game? 70
   4.6 Nash Equilibria in Kamasutra Games 72
   References 73

5 Sequence of Moves and the Extensive Form 75
   5.1 The Shrinking of the Event Matrix 75
   5.2 Sequential Structure and Chicken Game 76
   5.3 Extensive Form and Game Tree 78
   5.4 Information: Perfect, Imperfect, Complete, and Incomplete 79
   5.5 Perfect Recall Missing 82
   5.6 The Battle of the Sexes 86
   5.7 What Is a Strategy? 89
   5.8 Sharing a Cake 91
   5.9 Theory of Moves 92
   References 95

6 Chaos, Too Many and Too Few 97
   6.1 The El Farol Problem or “Too Many People at the Same Spot” 98
   6.2 Self-referential Systems 100
   6.3 Solutions to the El Farol Problem 101
   6.4 Market Congestion Game 103
   6.5 Viruses for Macintosh 104
   6.6 The Volunteer’s Dilemma 106
   References 112

7 Which Strategy to Choose? 113
   7.1 Nash Equilibrium and Optimal Strategy 114
   7.2 Equilibrium Choice and Trembling Hand 116
   7.3 Trembling Hand Perfection and Market Congestion 118
   7.4 Rationalizable Strategies 121
   References 123

8 Step-by-Step: The Subgame-Perfect Equilibrium 125
   8.1 Market Entry Game with Monopoly 126
   8.2 Backward Induction and Optimal Strategies 127
   8.3 The Ultimatum Game 130
   8.4 Social Trust and the Stag Hunt Game 133
   8.5 How Reciprocity Works 136
   References 139

9 Forever and a Day 141
   9.1 The Competition Trap Closes 143
   9.2 Iterated Prisoners’ Dilemma and the “Ravages of Time” 145
   9.3 The Competition Trap Breaking Down 148
   9.4 Robert Axelrod’s “Tournament of Strategies” 152
   9.5 “The True Egoist Cooperates.”—Yes, but Why? 155
   9.6 The Folk Theorem and “What We Have Always Known” 158
   References 163

10 Mixed Strategies and Expected Utility 165
   10.1 From Lottery to Expected Utility 166
   10.2 The Allais Paradox and Kahneman-Tversky 169
   10.3 Optimal Inspection in Mixed Strategies 172
   10.4 Maximin Solution and the Inspection Game 176
   10.5 Chicken Game Equilibria and Maximin Solution 179
   10.6 Miller’s Crucible and the Stag Hunt Game 180
   10.7 Zero-Sum Games and Minimax Theorem 183
   10.8 The Goalie’s Anxiety at the Penalty Kick 188
   10.9 Scissors and Rock 191
   References 193

11 More Than Two Players 195
   11.1 The Value of Coalitions 196
   11.2 The Core 197
   11.3 Network Games 199
   11.4 Epilogue to the Core and Other Bargaining Solutions 204
   11.5 Competition and Cooperation in the Triad 207
   References 211

12 Bargaining and Bargaining Games 213
   12.1 The Bargaining Problem and the Solution 214
   12.2 Rubinstein Game and the Shrinking Pie 219
   12.3 Binding Agreements and the Nash Solution 225
   12.4 Properties, Extensions, and the Nash Program 231
   References 236

13 Goethe’s Price Games, Auctions, and Other Surprises 237
   13.1 The Story of a Second-Price Auction 238
   13.2 The Price-Setting Goethe 242
   13.3 Optimal Strategies in Auctions and the Revenue Equivalence Theorem 245
   13.4 All-Pay Auction, Attrition, and Pyrrhic Victory 250
   13.5 Who Likes to Pay High Prices? 252
   References 254

Index 255
1 Playing for Susan
In the town hall of the German city of Augsburg, founded originally as Augusta Vindelicorum in the year 15 BC,1 the ceiling of the central hall is decorated with a painting that shows Sapientia, the goddess of wisdom, in the center seated on a throne. A banner next to her, carried by some vassals, announces “per me reges regnant ”—loosely translated, “it is through me that the kings rule.” This book will demonstrate that it is not always easy to accomplish what Sapientia suggests. We will learn about the limits of her suggestions, but we will also see that the knowledge of game theory can extend the domain of Ratio, the enlightened companion of Sapientia. In general, there are several competing, more or less convincing stories that explain an event, an outcome, or a fact—whether they are of today or of 500 years ago. Of course, we want to know why, say, a particular result prevailed, and how. What are the forces that produced this result, and not another? We want to learn from the story either to satisfy our natural curiosity or to avoid failures in our future actions. In fact, curiosity supports the learning of tools to avoid the traps waiting for us. Curious people can handle surprises much better than those who know all they want to know.
1 Norbert Leudemann informed us that the original name of Augsburg is “Augusta Vindelicum.” In 15 BC, it was an army camp, while the first civil settlement dates to 40 AD. The official name of the provincial capital was “Municipium Aelium Augustum,” abbreviated as “Aelia Augusta.”
1.1 Thinking Strategically

After reading Adam Smith’s “The History of Astronomy,” an article which comes as a surprise itself (Smith 1982 [1758]), we realize how dangerous surprises can be. The message is: we are involved in research and try to understand things in order to minimize surprises. Thinking strategically, putting oneself into the shoes of the other, helps to understand social interaction and the resulting social situations. For many such situations, a reliable theory and the understanding that derives from it reduce the likelihood of surprises.

If decision making is strategic, then, typically, we can only hypothesize about the motivation, information, and reasoning producing the results that we see and want to explain to ourselves and, perhaps, to others. In the standard case, each decision maker can select only one action from a large set of alternatives without knowing what other decision makers will choose now or in the future, or, quite often, what they have chosen in the past. However, these choices specify the outcome that our decision maker wants to determine, as he is likely to suffer or benefit from them. Sometimes we see the choices, and not the alternatives. Often, we only see the outcome—and nothing else—and we have to guess the choices and actions that caused it, as well as those involved in the decision, and their motivations. Of course, in these cases, we have to seek refuge in very strong hypotheses about human behavior; typically, this entails the rationality hypothesis and some degree of selfishness that characterizes the homo economicus, which have become the trademark of modern microeconomics and of the sciences invaded by it: sociology, philosophy, psychology, etc. In general, rationality and selfishness have to be further qualified to allow for deducing an explanation.
In his “Essays: Moral, Political and Literary,” David Hume (1985 [1777]: 42) recommended that “in contriving any system of government…every man ought to be supposed to be a knave and to have no other end, in all his actions, than private interest. By this interest we must govern him, and, by means of it, make him, notwithstanding his insatiable avarice and ambition, co-operate to public good.” Are all men and women knaves, or does this quotation merely imply that a successful government should be based on this assumption? Shall we imitate the government? As for the government, according to Machiavelli (1952 [1532]: 92), “it is laudable…for a prince to keep faith and live with integrity, and not with astuteness, everyone knows. Still the experience of our times shows those princes to have done great things who have little regard for good faith, and
have been able by astuteness to confuse men’s brains, and who have ultimately overcome those who made loyalty their foundation.” He observes that for the prince “it is well to seem merciful, faithful, humane, sincere, religious, and also to be so,” but that the prince “must have the mind so disposed that when it is needful to be otherwise you may be able to change to the opposite qualities,” concluding that “it is not, therefore, necessary for a prince to have all the above-named qualities, but it is very necessary to seem to have them” (Machiavelli 1952 [1532]: 93).
The shaping of expectations is essential to Machiavelli, even when it comes to architecture. How to build a fortress? In his The Art of War, he writes that he “would make the walls strong, and ditches…that everyone should understand that if the walls and the ditch were lost, the entire fortress would be lost” (Machiavelli 1882 [1521], Seventh Book). In the first step, it seems that walls have to be strong and reinforced by ditches in order to motivate the spirit of those defending the fortress behind the walls. The next step, in Machiavelli’s reasoning, is that those who attack strong walls have to expect a spirit of defense. But this spirit was sometimes lacking, and, as Machiavelli observed, people relied on strong walls and reduced their efforts of defense. Therefore, strong walls were not an unambiguous signal and not a reliable solution for keeping the enemy away, as Machiavelli himself noted (Machiavelli 1882 [1521], Seventh Book). A game-theoretical analysis could help to clarify this case.

The history of game theory tells us that its success is, to a large extent, the result of its application to war and war-like situations. But if you are a pacifist, do not stop reading here. Strategic thinking is ubiquitous: it is an essential ingredient of “love and fear,” but also of less dramatic core functions of life such as consumption. A large share of consumption is directed not to pleasure and satisfaction, but to creating “social distance” by impressing others. In The Theory of the Leisure Class, Thorstein Veblen showed us an elite citizenry engaged in conspicuous consumption and honorific expenditures in search of pecuniary decency. A means to achieve this goal was to invest in delicate women, racing horses, and subduing dogs—and in Renaissance Art.
The latter was thought to be most prestigious when it was transferred at large sums from an old English castle, owned by a semi-bankrupt lord, with the help of the most prestigious art dealer Joseph Duveen, who became himself a lord toward the end of his life. We told this story in detail in our “Art Goes America” article (Holler and Klose-Ullmann 2010). In order to create and satisfy standards of excellence, to capture a shadow of aristocracy, and to impress their fellow citizens,
the American leisure class tried to imitate their British upper-class models. Veblen (1979 [1899]: 145) observed that the “English seat, and the peculiarly stressing gait which has made an awkward seat necessary, are a survival from the time when the English roads were so bad with mire and mud as to be virtually impassable for a horse traveling at a more comfortable gait; so that a person of decorous tastes in horsemanship to-day rides a punch with a cocked tail, in an uncomfortable posture and a distressing gait.” In art and architecture, American rusticity was not yet popular among the rich when Veblen published his leisure-class book in 1899. The rich may still try to buy a Raffaello out of some lord’s castle. But soon they will demonstrate that, without social discounting, they can afford to show nineteenth-century American landscape painting of the Hudson River School, a group of artists around Thomas Cole and his student Frederic Edwin Church, in their prairie house homes. Of course, this counter-snobbery was meant to impress the snobs (Steiner and Weiss 1951), but it made identification rather complex as long as American paintings were at a low price and the butcher could buy them as well. Fortunately, due to the additional demand, prices went up. Consequently, counter-snobbery had to find new ways to manifest itself.
1.2 Why not Learn Game Theory?

As already said in the Introduction, we are sure that readers with some knowledge of game theory can easily find important issues that are missing in our text. However, readers who have so far been fully protected against game theory can sit down, enjoy the text, and get nervous about the thought experiments with which they are confronted. We strongly suggest that readers, after having studied what is offered here, ask themselves whether they have learned something from it—something that gives insights, something they can apply. Unfortunately, you have to learn game theory when you want to apply it.

In general, it does not pay to hire a game theorist to do the job of strategic decision making for you. He or she does not know how much you like to win the battle and how strong your battleships, i.e., your resources, are. It is quite likely that, on the one hand, you cannot express your preferences and, on the other, you want to keep information concerning your resources secret. However, both items, your evaluations and your resources, are extremely important to model a game situation and to find a solution.
More specifically, let’s put ourselves into the shoes of the head of the sales department of a large company who wants to apply game theory to outsmart the competitor. We were told that we have to know game theory if we want to apply it. This statement appears trivial at first sight. However, reflecting on the activities of the sales manager, it becomes evident that he2 uses many skills in which he has not been formally trained—and which thus can hardly be reconstructed by an outsider. He continuously adopts results from the analyses of others, confiding in their reflections without being familiar with their principles. Why is this so?

Many management skills are almost impossible to learn. A great number of those skills are based on intuition or are the result of a socially evolutionary process. For instance, future managers who conform to a certain behavioral code have better prospects of attaining an executive position within a company than those candidates whose behavior deviates from this code and who, therefore, are less successful in the given business culture. On the other hand, there are various problems which the manager expects to solve with the help of experts without understanding the methods applied in detail. Think of operations research analysis or the application of econometric models. If the manager applied identical methods and based his work on identical data, he would achieve the same results as the expert, although probably with greater effort. This is likely to hold, e.g., for the prediction of economic growth or of the development of interest and exchange rates.

However, this does not apply to forecasting the effect of a price reduction that a manager envisions for his company, especially if the company operates in a market with one or just a few competitors.
Under these circumstances, decision making is, in general, much too complex to use an analytical (numeric) approach—not because of a shortage of data but due to the small number of competitors. In this instance, decision making is of a strategic nature: Competitors are likely to react to price reductions. But how do they react? Game theory could provide an answer if the decision maker could interpret the market correctly. Therefore, the manager should not leave the application of game theory to a third party, although there are cases which can be molded in a more general framework. In principle, the manager must
2 Not all managers are men, as exemplified by one of the authors. We apologize for exclusively using “he” in this text.
evaluate the market conditions himself and do his own analysis. Knowing game theory can be of help—especially when you have to explain your decision to others.
1.3 The Working of the Invisible Hand

In what follows, we illustrate the need for strategic reasoning of the manager using an example of a toy store which is meant to capture the stylized facts of some real-world markets. In the case of only two suppliers on the market, A and B, the effect of a price reduction by firm A is determined by the behavior of rival firm B and the demand of the customers. Thus, its effect depends on how B reacts to A’s decision to reduce the price. The objective of A’s price cut is to increase the demand for its own product. However, if the competitor reduces the price as well, the intended effect of the price cut is likely to be undermined. As a consequence, sales will not increase as much as expected and profits may even decrease.

The decision on the price reduction by A will therefore be determined by A’s expectation of B’s reaction. B’s reaction in turn will be affected by B’s expectation of A’s reaction. In order for A to predict B’s reaction, A must take B’s expectation in relation to its own behavior into account. Thus, B’s expectation will depend on the expectations formed by both companies, A and B. The structure of this dependency is extremely complex. As a result, both companies will face a severe problem. The managers have to develop some idea of the competitor’s prices if they want to maximize their profits or to achieve a related goal (e.g., revenue maximization or increasing market shares). Moreover, in general, there is uncertainty about how the buyers will react to prices per se and also whether there is a potential entrant to the market.

Let’s abstract from such intricacies for the moment and use our toy example. For further simplification, we assume that the two suppliers to the market have just two modes of behavior: to choose a high or a low price. In the language of game theory, the modes of behavior are called strategies: they label the set of plans from which the decision makers can choose.
As the decision situation is characterized by strategic interaction inasmuch as the outcome depends on the choices of both agents and, as assumed, the two agents know about it, it constitutes a game situation. As a consequence, the decision makers can be viewed as players.
1 Playing for Susan 7
Matrix 1.1 The competitive trap

                        Player B
                        high            low
Player A    high        (800, 200)      (250, 300)
            low         (1500, 100)     (500, 150)

Each cell shows the payoff pair (payoff of A, payoff of B).
The strategic interaction is obvious when we look at its representation by means of Matrix 1.1. If both players, A and B, choose strategy "high," the matrix says that A and B will achieve payoffs of 800 and 200, respectively. In principle, the payoff numbers represent utility values, but for the given example profits seem to be good proxies. If player A chooses "high" and player B chooses "low," the profits are represented by the payoff pair (250, 300). Is it better to choose low prices? If both sellers choose "low," the corresponding payoff pair is (500, 150). Obviously, it is not profitable for A to choose "high" when B chooses "low." Is it profitable for A to choose "high" when B chooses "high"? No! Irrespective of whether B chooses "high" or "low," it is always better for A to choose "low." The strategy "low" is a strictly dominant strategy for A. By a similar reasoning, we find that "low" is also a strictly dominant strategy for B: Irrespective of which strategy A chooses, it is always better for B to choose "low" instead of "high."

To answer the question above, it seems that it is better to choose low prices instead of high prices. But is this answer correct? If both players choose "high," the payoffs are (800, 200), whereas if they choose "low" the payoffs are (500, 150). Obviously, "low" prices are not profitable as both sellers are better off choosing high prices. But above we have argued that low prices represent strictly dominant strategies for each player; that is, they are preferable irrespective of what the other player chooses. It seems that our players are trapped in a contradiction. Can we help them?

Matrix 1.1 does not illustrate a logical contradiction, but a trap called competition. The fact that the payoffs (800, 200) result if both players choose "high" is of only anecdotal value for an individual player who is solely interested in maximizing his own payoff.
The latter objective suggests that he should choose his strictly dominant strategy: This is the individual
rational mode of behavior for both players in the decision situation described by Matrix 1.1; it results in the payoff pair (500, 150). This behavior is in conflict with collectively rational behavior, also labeled Pareto efficient behavior, that leads to the payoff pair (800, 200). But note that this behavior and its outcome are only efficient with respect to the sellers. From the point of view of the buyers, we should be happy about the low prices that result from the individually rational behavior of the sellers.

The game described by Matrix 1.1 constitutes a Prisoners' Dilemma, the most popular decision situation in game theory. In Chap. 3, we will learn why this game carries this name. It reflects a conflict between individual rational behavior and social efficiency (or collective rationality). But as the story goes, we should not feel sorry if the invisible hand of competition works and drives the prices down. The merits of the invisible hand were already described by Adam Smith in his "Inquiry into the Nature and Causes of the Wealth of Nations," first published in 1776/77. In Book IV, Chapter II, we read that, in general, every individual

…neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it (Smith 1981 [1776/77]: 456).
Please note the word "frequently." Adam Smith was quite aware that the invisible hand does not always work, because of cartels, or does not work properly, because of institutional shortcomings (see his discussion of the banking sector) and the potential of free-riding in the provision of public goods. The emergence of externalities is another factor that makes the invisible hand tremble. From a game-theoretical point of view, cartels are perhaps the most interesting handicap that hinders the successful working of the invisible hand. Adam Smith is very explicit that such cartels exist, for instance, on the labor market where wages depend on contracts and the parties' "…interests are by no means the same. The workmen desire to get as much, the masters to give as little as possible." Given this rather plain observation, Adam Smith (1981 [1776/77]: 83) concludes, "The former are disposed to combine in order to raise, the latter in order to lower the wages of labour." And he goes on to reason:
It is not, however, difficult to foresee which of the two parties must, upon all ordinary occasions, have the advantage in the dispute, and force the other into a compliance with their terms. The masters, being fewer in number, can combine much more easily; and the law, besides, authorises, or at least does not prohibit their combinations, while it prohibits those of the workmen. We have no acts of parliament against combining to lower the price of work; but many against combining to raise it. In all such disputes the masters can hold out much longer. A landlord, a farmer, a master manufacturer, or merchant, though they did not employ a single workman, could generally live a year or two upon the stocks which they have already acquired. Many workmen could not subsist a week, few could subsist a month, and scarce any a year without employment. In the long run the workman may be as necessary to his master as his master is to him; but the necessity is not so immediate (Smith 1981 [1776/77]: 83f).
But do these combinations really form? It seems that, in general, they are not made public, and Adam Smith had to convince his readership that such combinations exist:

We rarely hear, it has been said, of the combinations of masters; though frequently of those of workmen. But whoever imagines, upon this account, that masters rarely combine, is as ignorant of the world as of the subject. Masters are always and every where in a sort of tacit, but constant and uniform combination, not to raise the wages of labour above their actual rate. To violate this combination is every where a most unpopular action, and a sort of reproach to a master among his neighbours and equals (Smith 1981 [1776/77]: 84).
Here, we have some interesting observations which we discuss in detail in Chap. 9: Agreements can be tacit and enforced by social and perhaps economic pressure. The decision situation described by Adam Smith seems to imply a Prisoners' Dilemma with respect to the cooperation of the masters, as an individual master who deviates from the tacit contract could benefit by paying higher wages and thereby attracting better skilled "workmen," were it not for the threat "of reproach to a master among his neighbours and equals." The relationship between the masters and the workmen constitutes a multi-person bargaining game which is, however, reduced to a market situation with one agent, the "combination of masters," representing demand, and many individual workmen on the supply side, since, by law, workmen were not allowed to collude. Economists call such a market situation a demand-side monopoly or, more technically, a monopsony.
In the modern language of game theory, combinations are called coalitions. They describe situations of conflict and coordination and are especially relevant for games with more than two players. In the course of this book, we will learn how they emerge and how the coalition surplus will be shared among its members.
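Before moving on, the dominance argument developed for Matrix 1.1 can be checked mechanically. Below is a minimal Python sketch using the payoff values stated in the text; the function names and the player encoding (0 for A, 1 for B) are our own illustrative choices, not part of the original exposition.

```python
# Payoffs of Matrix 1.1 ("the competitive trap"):
# cell (A's choice, B's choice) -> (payoff of A, payoff of B).
PAYOFFS = {
    ("high", "high"): (800, 200),
    ("high", "low"): (250, 300),
    ("low", "high"): (1500, 100),
    ("low", "low"): (500, 150),
}
STRATEGIES = ("high", "low")


def payoff(player, own, opp):
    """Payoff of `player` (0 = A, 1 = B) playing `own` against `opp`."""
    cell = (own, opp) if player == 0 else (opp, own)
    return PAYOFFS[cell][player]


def strictly_dominant(player):
    """Return the strategy that is strictly better than every alternative
    against every opponent choice, or None if none exists."""
    for s in STRATEGIES:
        if all(
            payoff(player, s, opp) > payoff(player, t, opp)
            for t in STRATEGIES if t != s
            for opp in STRATEGIES
        ):
            return s
    return None


print(strictly_dominant(0), strictly_dominant(1))  # low low
```

Since "low" is strictly dominant for both players, individual rationality selects the cell with payoffs (500, 150), even though (800, 200) would leave both sellers better off.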
1.4 The Real World and Its Models

From the interpretation of Matrix 1.1, we learned that a two-person Prisoners' Dilemma game is characterized by two features: (a) The two players have strictly dominant strategies, i.e., each player has a best strategy irrespective of the strategy choice of the other player. (b) The result, determined by the equilibrium in dominant strategies, is socially inefficient with respect to the players inasmuch as both players are better off if they either find a mode of cooperation or if cooperation is forced upon them.

Do such decision situations exist? Probably not in the abstract form summarized by (a) and (b)! However, starting from the toy model described by Matrix 1.1, we can think of two gas stations that are close to each other on the same side of a highway. Their products are hardly differentiated. As a consequence, buyers will steer their car to the gas station with the lower price if prices differ. Similarly, many customers do not think that there is a quality difference between Coca-Cola and Pepsi Cola and buy the cheaper one if there is a choice at all. Often, the store has already decided for the customer and offers either Coca-Cola or Pepsi Cola, but not both. Of course, the reasoning of the store manager is much more complicated because, in general, variables other than prices are relevant for him as well. Although there might be only negligible differences in the taste of the two drinks, the two suppliers can have very different marketing strategies directed at store managers, which lead to a degree of monopolization inasmuch as a particular store only offers the brand that seems favorable to the manager.

To get an understanding of such more complex cases, let us describe the decision problem in a way typical of game-theoretical analysis. Let's assume we are one of the players and face a strategic decision situation. In order to manage such a situation, we have two basic concerns: (a) to find an adequate
description of the situation and (b) to find a solution to our decision problem. There are three steps to help us in this project.

Step One: Identification of a decision situation as a game-theoretical problem. A decision situation is strategic if (a) the outcome is the result of the decisions of more than one decision maker, (b) each decision maker is aware of this interdependency, (c) each decision maker assumes that the other decision makers are aware of this interdependency, and (d) each decision maker takes (a), (b), and (c) into consideration. Of course, this only makes sense if the number of players is small enough that the interdependency can be considered relevant and handled accordingly. However, what is a small number? In a way, this is defined by our behavior in such a decision situation. If we take (a), (b), (c), and (d) into consideration, then we think that the number of agents is small enough, and we see ourselves in a strategic decision situation.

Step Two: Formulation of the adequate game model. A game consists of the following building blocks: (a) Decision makers, agents, etc., called players. (b) Strategy sets: Each player chooses his or her strategy out of a corresponding set of strategies that are given by the resources and defined by the rules of the game. (c) Payoffs: the utilities that the players assign to the possible outcomes determined by the corresponding strategy choices. Note that the outcomes (or events) do not show up in the game, but their evaluations in the form of payoffs do. In Matrix 1.1, we assumed that profits are a good proxy for payoffs and did not distinguish between the two concepts, which is the standard procedure in microeconomics with respect to firms. But how shall we proceed if the outcomes are apples, pears, and bananas? We evaluate them in accordance with our preferences and assume that the other players will do the same.
Of course, the problem is that, in general, we can only guess the other players' preferences. Expressing our own preferences in the form of numbers can be difficult enough, as modern utility theory, which draws on introspection and experiments, tells us. In Chap. 10, we will discuss some extreme cases of "misrepresentation."

With respect to strategies, we should keep in mind that they represent plans, often in the form of a sequence of moves. Moves can be contingent, as in "If player A does x, I will choose y; if A does z, I will choose v." Think about chess, which is a popular illustration of a game, but note that the strategies of the game are certainly numerous. Nobody can formulate a plan that lists the suggested moves from the beginning to the end of the game. Still, the example may help us to understand that the set of strategies
depends on the rules of the game. Outside the game arena, such rules are often given by laws and public regulations, but also by behavioral standards. If we violate them, we may be eliminated from the standard games of the society we live in.

Step Three: Selection of the solution concept. Applying a particular solution concept or, in short, a solution to a game is meant to determine the strategies that the players are expected to choose, and thus the outcome and the corresponding payoffs of the players. Often, the selected outcome is not unique, and for some solution concepts and a particular game, an outcome may not even exist. In the Prisoners' Dilemma game, the solution concept, i.e., the equilibrium in dominant strategies, is defined by the strategies that the players are expected to choose. Alternatively, we may define the set of Pareto efficient outcomes as a solution, which corresponds to the payoff pairs (800, 200), (250, 300), and (1500, 100) in Matrix 1.1. Note that given one of these payoff pairs, no player can be made better off without making the other worse off. Given this set, of course, we have to discuss how one of its elements can be achieved, given the game situation and self-interested players.

A favorite answer has recourse to altruism. However, if we introduce Adam Smith's "fellow-feelings," proposed on the first page of his Theory of Moral Sentiments (Smith 1982 [1759]), into Matrix 1.1, and these fellow-feelings are strong enough so that at least one of the players no longer has a strictly dominant strategy available, then the game is no longer a Prisoners' Dilemma. However, fellow-feelings among the managers of gas stations are not very likely. If we see that they choose high prices and thus deviate from the equilibrium in dominant strategies, we have to look for another explanation. Chap. 9 offers such an answer to this problem.
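The Pareto efficient set named above can also be computed mechanically. A minimal Python sketch with the payoffs of Matrix 1.1 as stated in the text (the helper names are our own):

```python
# Payoffs of Matrix 1.1; cell (A's choice, B's choice) -> (payoff A, payoff B).
PAYOFFS = {
    ("high", "high"): (800, 200),
    ("high", "low"): (250, 300),
    ("low", "high"): (1500, 100),
    ("low", "low"): (500, 150),
}


def pareto_efficient(payoffs):
    """Return the payoff pairs from which no player can be made
    better off without making the other worse off."""
    cells = list(payoffs.values())

    def dominated(p):
        # Some other cell q is at least as good for both players and differs.
        return any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in cells)

    return {p for p in cells if not dominated(p)}


print(sorted(pareto_efficient(PAYOFFS)))  # [(250, 300), (800, 200), (1500, 100)]
```

Only (500, 150), the payoff pair of the equilibrium in dominant strategies, drops out as inefficient; the remaining three pairs match the solution set given in the text.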
1.5 Winner-Takes-It-All and the Chicken Game

Now let us apply our just developed scheme to a real-world case, described in terms of its stylized characteristics. Let us take a historical case: the Browser War between Microsoft, on the one side, and Netscape, on the other. Time Magazine of September 16, 1996 (p. 53ff.), reported that a dramatic battle between Microsoft and Netscape had developed. Each of the two suppliers of browser programs wanted to help us find our way on the Internet. The winner of this battle could expect to earn billions of dollars,
while the loser would be marginalized in the market and perhaps would even have to close down the business. This looked like a winner-takes-it-all game.3

An important component of any strategy in this battle was the compatibility of a particular browser program. In the beginning, the older of the two programs, Netscape's Navigator, had the advantage of being widely used, and therefore its network effects were larger than those of the newcomer's program. Netscape could expect that the users of Navigator would be loyal to their browser program. It seems that there was a strong first-mover advantage embedded in the network effects, and in the routine of the users. However, this was challenged by the fact that the Internet Explorer of Microsoft was easier to handle for newcomers, and it was offered for free. Of course, with a zero price, Microsoft could not expect to make profits out of the sale of browser programs. But it was expected that "buyers" of the Microsoft browser would also buy other programs and services supplied by Microsoft, and this is what happened. As a result of the zero-price policy of Microsoft, Netscape's Navigator vanished from the market.

To describe the set of intertemporal strategies that were available to the players is rather difficult in this case. Moreover, the decisions were driven by expectations, and we do not yet have the instruments to deal with expected values.4 So far we simply do not have the capacity to represent this situation adequately. However, we can look at a toy model of this case that nevertheless might be useful to illustrate the decision problem and to derive some preliminary conclusions. Let us start with Matrix 1.1. We identify Microsoft and Netscape with players A and B, respectively. The entries in the cells of the matrix represent expected profits for A and B. So, if A chooses "high" and B chooses "low," the payoffs will be 250 for A and 300 for B.
However, Matrix 1.2 assumes the payoffs (−50, −100) for the case that both players choose low prices, while Matrix 1.1 assumed the payoffs (500, 150). Obviously, the underlying decision situations are different and the payoff pair (−50, −100) suggests that an ongoing price war will be hazardous to both suppliers in the long run.
3 From "The Winner Takes It All" lyrics by ABBA: "The winner takes it all/The loser's standing small/Beside the victory/That's her destiny."
4 For expected values, see Chap. 10.
Would you choose a high or a low price if you were player A?

Matrix 1.2 The Chicken Game

                        Player B
                        high            low
Player A    high        (800, 200)      (250, 300)
            low         (1500, 100)     (−50, −100)

Each cell shows the payoff pair (payoff of A, payoff of B).
A comparison of the games in Matrices 1.1 and 1.2 shows that a perhaps minor change in the payoffs can have tremendous consequences for the decision situation. In Matrix 1.2, neither player has a strictly dominant strategy. Therefore, the solution concept of an equilibrium in strictly dominant strategies does not apply. Whether a strategy is a good choice for A depends on which strategy B chooses, and vice versa. If we assume that players choose their strategies simultaneously, so that A does not know the strategy which B selects and B does not know the strategy which A chooses, then we see that the decision problem of the two players is nontrivial. In the course of this book, we will learn several solution concepts that should help players A and B to make rational choices, and help us understand decisions made in game situations.

Without going into detail, we see that the strategy pairs (high, low) and (low, high) are characterized by some stability, as neither player is motivated to revise his or her strategy, given the strategy of the other player. In Chap. 3, we will learn that this property defines a Nash equilibrium. However, the strategy pairs (high, low) and (low, high) cannot be realized at the same time; that is, they are alternatives that exclude each other. Even though it would be beneficial for player A to see the strategy pair (low, high) put into reality, A cannot force B to choose a high price. Note that in a strategic decision situation a player cannot choose an outcome, independent of what the other player does: in fact, a player chooses a strategy, not an outcome.

The game in Matrix 1.2 is known as the Chicken Game. Unlike the Prisoners' Dilemma game in Matrix 1.1, it represents a rather complex decision situation, as we will see in Chap. 4, where we will hear of James Dean and learn why this game is called "chicken" and how it can be applied to analyze the lovers' battle in the Kamasutra.
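The stability property just described can be verified by brute-force enumeration. A minimal Python sketch, assuming the payoffs of Matrix 1.2 as stated in the text (the function name is our own):

```python
# Payoffs of Matrix 1.2 (the Chicken Game):
# cell (A's choice, B's choice) -> (payoff of A, payoff of B).
CHICKEN = {
    ("high", "high"): (800, 200),
    ("high", "low"): (250, 300),
    ("low", "high"): (1500, 100),
    ("low", "low"): (-50, -100),
}
STRATEGIES = ("high", "low")


def nash_equilibria(payoffs):
    """Return all pure strategy pairs from which neither player
    gains by unilaterally switching to the other strategy."""
    eqs = []
    for a in STRATEGIES:
        for b in STRATEGIES:
            ua, ub = payoffs[(a, b)]
            best_a = all(ua >= payoffs[(a2, b)][0] for a2 in STRATEGIES)
            best_b = all(ub >= payoffs[(a, b2)][1] for b2 in STRATEGIES)
            if best_a and best_b:
                eqs.append((a, b))
    return eqs


print(nash_equilibria(CHICKEN))  # [('high', 'low'), ('low', 'high')]
```

The check confirms that exactly the pairs (high, low) and (low, high) are stable: in each, neither player can improve his or her payoff by a unilateral switch.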
1.6 The Essence of Game Theory, the Brain, and Empathy

The essence of game theory is to form expectations about the behavior of the other agents, when the situation is strategic, and then to choose one's best reply in accordance with these expectations. "The better these expectations, the better the choices," one should hope. However, we will come across situations where sophisticated reasoning about the choices of others does not help to improve one's fate. Game theory will teach us the characteristics of some of these situations.

Of course, putting oneself into the shoes of others necessitates some knowledge about others. Often specific knowledge is not available, but general knowledge about human behavior can be of help, too. In fact, the study of culture and institutions can help to form expectations even when the personality of the other decision makers is quite alien to us. In any case, our capacity for putting ourselves into the shoes of others is limited. In many cases, it requires a rather complex thought process.

In his review of V. S. Ramachandran's "The Tell-Tale Brain: A Neuroscientist's Quest for What Makes Us Human," McGinn (2011: 32) discusses the author's thesis that studying the brain is "a good way to understand the mind." Here, "mind" describes our capacity of thinking and thus of decision making. The hypothesis implies that, in order to understand thinking, we should look at the corresponding parts of the brain and their specialization.
The "mirror neurons," discovered in 1990, are of special interest to strategic thinking as they serve as the mechanism of imitation in our brain and as a source of empathy.5 As a consequence, when you are watching someone performing an action, these neurons "fire" sympathetically and you perform the same action in your brain, but sometimes also physically, "as when your arm swings slightly when you watch someone hit a ball with a bat." This reaction "runs by means of mirroring neurons an internal simulation of the other's intended action." If so, "we need special inhibitory mechanisms in order to keep our mirror neurons under control – or else we would be in danger of doing everything we see and losing our sense of personal identity. We are, in effect, constantly impersonating others at a subconscious level, as our hyperactive mirror neurons issue their sympathetic reactions" (McGinn 2011: 34). An understanding of strategic decision situations may help us to bring this process to the conscious level and support preserving our personal identity.

According to Ramachandran, autism can be understood as a deficiency in the mirror neuron system. "The autistic child cannot adopt the point of view of another person, and fails properly to grasp the self-other distinction … The brain signature of empathy is…absent in autistics" (McGinn 2011: 34). Can we conclude that autistics are unable to think strategically and that game theory is alien to them?

Empathy corresponds to Adam Smith's notion of sympathy, which is responsible for the impartial-spectator concept, the cornerstone of his The Theory of Moral Sentiments, first published in 1759. The impartial spectator is "the man within the breast" (Smith 1982 [1759]: 132), an illustration of one's conscience. Sympathy is the ability to transcend oneself and see a situation from another's point of view. The impartial spectator is derived from sympathy. But it is not the passion of others that puts our sympathy in motion, but our hypothetical experience of being in the other person's position.

Sympathy…does not arise so much from the view of the passion, as from that of the situation which excites it. We sometimes feel for another, a passion of which he himself seems to be altogether incapable; because, we put ourselves in his case, that passion arises in our breast from the imagination, though it does not in his from the reality. We blush for the impudence and rudeness of another, though he himself appears to have no sense of the impropriety of his own behaviour; because we cannot help feeling with what confusion we ourselves should be covered, had we behaved in so absurd a manner (Smith 1982 [1759]: 12).

5 "Mirror neurons are a particular class of visuomotor neurons, originally discovered in area F5 of the monkey premotor cortex, that discharge both when the monkey does a particular action and when it observes another individual (monkey or human) doing a similar action." The authors "present evidence that a mirror-neuron system similar to that of the monkey exists in humans" (Rizzolatti and Craighero 2004: 169).
We are back to the mirror neurons and to V. S. Ramachandran and his "neuroscientist's quest for what makes us human." Interestingly, Ramachandran also stresses the social dimension of our mind: "Culture consists of massive collections of complex skills and knowledge which are transferred from person to person through two core mediums, language and imitation. We could be nothing without our savant-like ability to imitate others."6
6 V. S. Ramachandran, quoted by McGinn (2011: 32).
McGinn (2011: 35) concludes that mirror neurons are an interesting discovery but raises the question of whether they really are "the explanation of empathy and imitation." And he asks further: "What about the ability to analyze an observed action, not merely repeat it?" This question is of great importance when we relate game-theoretical thinking to the physiology of our mind. However, in what follows, we will not go deeper into the neuroscience of decision making. Important as this dimension seems to be, there are enough problems with the conceptual framework of strategic decision making to fill libraries. We will try to select the most important issues.

By its very nature, and building on strategic thinking, game theory points to the "small world" in which agents are assumed to put themselves into the shoes of others. However, game-theoretical results are also interpreted for larger worlds in which potential strategic behavior is limited or even inadequate, or restricted to a partial analysis. Some examples have been provided above. However, one should be very careful when leaving the "small world."7

A "small world" seems to be a prerequisite for the knowledge and information that game theory assumes for the players. In this text, it is assumed that the players know the game, i.e., the set of players, their strategy sets, and their preferences with respect to the possible outcomes. This implies that player 1 knows the preferences of player 2 and player 2 knows player 1's preferences. If these rather strong assumptions hold, the game is characterized by complete information; if not, we are talking of a game with incomplete information (see Sect. 5.5 for further details). Even stronger assumptions, often implicit in game-theoretical analysis, are common knowledge of rationality (CKR) paired with consistently aligned beliefs (CAB), proposing "that everybody's beliefs are consistent with everybody else's" (Heap and Varoufakis 1995: 25).
More specifically, every player believes that the other players behave rationally, maximizing their own payoffs, and that every player makes this assumption of every other player and decides accordingly. However, below we will discuss game situations in which the forming of such beliefs is impossible (or implausible), even in cases of complete information and assuming CKR and CAB.
7 See Binmore (2017) for a discussion of the problems that result from applying the "small world decision theory" à la Leonard Savage to a "large world." Rationality, defined for a small world, might not apply in a larger one.
1.7 Strategic Thinking that Failed—Perhaps

It has been said that the German Chancellor Angela Merkel wanted to see Axel Weber as successor to Jean-Claude Trichet, whose term as President of the European Central Bank (ECB) ended on November 1, 2011. We assume that, in order to overcome expected resistance to the nomination of Axel Weber, Mrs. Merkel strongly supported the nomination of the Portuguese Vitor Constancio for the position of Vice President, and he was nominated. This choice reduced to zero the chance of Mario Draghi, the Head of the Italian Central Bank, to become Trichet's successor. Mario Draghi ranked as the major competitor to Axel Weber for the presidency of the ECB, but Euro-Europe was not ready to accept two "southern European citizens" for the representation of its central monetary institution. If Angela Merkel had the plan to gain the ECB presidency for Germany, then her strategy had so far been quite successful.

However, on February 11, 2011, rumors spread that Axel Weber had decided neither to serve as President of the ECB nor to be a candidate for a second term as Head of the German Federal Bank. Rumors said that he might desire to become CEO of the Deutsche Bank, which is by far Germany's largest private bank. However, concerning the second rumor, there was discussion whether such "trading places" or "switching chairs" would be possible, given that the Head of the German Federal Bank is one of the supervisors of the private banking system of Germany and thus also of the Deutsche Bank. The Frankfurt Stock Exchange was "irritated" and prices dropped, only to recover the very same day. Newspapers said that Chancellor Angela Merkel was also irritated, not to say annoyed. An alternative interpretation says that Axel Weber was not willing to be made responsible for an ECB policy which he had not supported as Head of the German Federal Bank.
It is said that he might have been afraid of being the scapegoat if other Euro countries asked for a bailout or if the Euro ran into further trouble and Angela Merkel needed a sacrificial pawn in order to please the French President Sarkozy. Of course, these are rumors.

Niccolò Machiavelli reports in The Prince how Cesare Borgia made use of his Minister Messer Remirro de Orco to gain power and to please the people:

When he [Cesare Borgia] took the Romagna, it had previously been governed by weak rulers, who had rather despoiled their subjects than governed them, and given them more cause for disunion than for union, so that the province was a prey to robbery, assaults, and every kind of disorder. He, therefore,
judged it necessary to give them a good government in order to make them peaceful and obedient to his rule. For this purpose, he appointed Messer Remirro de Orco, a cruel and able man, to whom he gave the fullest authority. This man, in a short time, was highly successful, whereupon the duke, not deeming such excessive authority expedient, lest it should become hateful, appointed a civil court of justice in the centre of the province under an excellent president, to which each city appointed its own advocate. And as he knew that the hardness of the past had engendered some amount of hatred, in order to purge the minds of the people and to win them over completely, he resolved to show that if any cruelty had taken place it was not by his orders, but through the harsh disposition of his minister. And having found the opportunity he had him cut in half and placed one morning in the public square at Cesena with a piece of wood and blood-stained knife by his side. The ferocity of this spectacle caused the people both satisfaction and amazement (Machiavelli 1952 [1532]: 55).
Note that Cesare Borgia used the law and the camouflage of a legal procedure to sacrifice his loyal minister and to gain the applause of the people. His use of cruelty and deceit was a successful solution to his strategic problem: how to bring order to the Romagna, unite it, make peace, and create fealty, without being made responsible for the necessary cruelties. The episode demonstrates that the power of Cesare Borgia depended on his skills of strategic thinking, his willingness to inflict cruelties on people who trusted and worked for him, and, one has to say, on the naivety of his minister. Messer Remirro de Orco could have concluded that the Duke would exploit his capacity and that, in the very end, this exploitation included serving as a sacrifice to the people who had suffered cruelties before. Perhaps Messer Remirro de Orco saw himself and the Duke in a different context, and the game that reflected this context did not propose the trial and his death as an optimal alternative for the Duke. Obviously, the misfortune of Messer Remirro de Orco was that the Duke's game was based on offering an "officer" to console the people. History demonstrated that this choice of strategy was successful.

It seems reasonable here to think in game-theoretic terms. Strategic thinking is a dominant feature in Machiavelli's writings. He could well be considered a pioneer of modern game theory. It does not come as a surprise that game-theoretical language applies straightforwardly to the core of Machiavelli's analysis. There is no evidence that Donald Trump, Boris Johnson, or Angela Merkel ever read The Prince. Quite likely Putin read it. It is said that Napoleon read it and that a copy of this book was found in Hitler's library.
20 M. J. Holler and B. Klose-Ullmann
Still, when Napoleon led the Grande Armée to Moscow, he was convinced that his troops dominated in weapons technology and strategic skills. He was a military genius, but he failed to consider one possible strategy of his enemy: the strategy of scorched earth. Its application caused enormous damage to the Russian population; however, it also destroyed Napoleon's troops. Perhaps reading Machiavelli's The Art of War, in addition to studying The Prince, could have helped; perhaps it might even have kept the war in check. Why did Julius Caesar go to the Senate on March 15, despite all the hints and warnings? He knew that many Romans wanted to see him dead and that some were plotting to make this happen. He knew his countrymen, but he did not expect that they would coordinate on a collective murder. Obviously, he saw the problem but misinterpreted the decision situation. When Michelin, the French producer of tires, opened a production plant in the USA to facilitate its access to the American market, the US tire producer Goodyear entered the French market. On April 8, 2002, the FAZ, a leading German newspaper, commented on this result: "If the managers of Michelin had done some game theory, they would have spared their firm considerable problems."
References

Binmore, K. (2017). On the foundations of decision theory. Homo Oeconomicus, 34, 259–273.
Heap, S. P. H., & Varoufakis, Y. (1995). Game theory: A critical introduction. New York: Routledge.
Holler, M. J., & Klose-Ullmann, B. (2010). Art goes America. Journal of Economic Issues, 44, 89–112.
Hume, D. (1985 [1777]). Essays: Moral, political, and literary. Indianapolis: LibertyClassics.
Machiavelli, N. (1952 [1532]). The prince. New York: Mentor Books.
Machiavelli, N. (1882 [1521]). The art of war. In The historical, political, and diplomatic writings of Niccolò Machiavelli (C. E. Detmold, Trans., Vol. 4). Boston: James R. Osgood and Co.
McGinn, C. (2011). Can the brain explain your mind? New York Review of Books, 58 (March 24), 32–35.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Smith, A. (1981 [1776/77]). An inquiry into the nature and causes of the wealth of nations (R. H. Campbell & A. S. Skinner, Eds.). Indianapolis: Liberty Press.
Smith, A. (1982 [1759]). The theory of moral sentiments (D. D. Raphael & A. L. Macfie, Eds.). Indianapolis: Liberty Press.
Smith, A. (1982 [1758]). History of astronomy. In W. P. D. Wightman & J. C. Bryce (Eds.), Essays on philosophical subjects (pp. 33–105). Indianapolis: LibertyClassics.
Steiner, R., & Weiss, J. (1951). Veblen revised in the light of countersnobbery. Journal of Aesthetics and Art Criticism, 9, 263–268.
Veblen, T. (1979 [1899]). The theory of the leisure class. New York: Penguin Books.
2 No Mathematics
No mathematics is necessary to understand the following text except the basic operations of adding, subtracting, multiplying, and dividing. However, it could be helpful to become familiar with some basic concepts such as sets, functions, and vectors so that the reader can make use of other game-theoretical literature. Moreover, these concepts help to make the message of the text more concise and easier to structure. Readers with a basic training in mathematics, or with some confidence in the associative capacity of their reasoning, can skip this chapter in the first round of their reading and, hopefully, never have to come back to it. But you should read the note in the next section if you are interested in the history of game theory. In fact, we should be interested in it because we can learn a lot from it and it is exciting, not least because a larger part of it is a "child of war."
2.1 Historical Note I: The Pioneers

In 1926, John von Neumann, then called Johann, presented the Minimax Theorem in Hilbert's seminar at the University of Göttingen. In 1928, a written version of this presentation appeared as a scientific article titled "Zur Theorie der Gesellschaftsspiele" (published in English as "On the Theory of Games of Strategy" in 1959). In this article, von Neumann gave a definition of a game of strategy and of the various components that are needed to define a game. He also delivered solution concepts for such games. Most prominently, he proved the Minimax Theorem for two-person zero-sum games.
He also drafted a solution for games with more than two players in which coalition formation matters. In fact, the solution concept he proposed in this paper is very similar to the Core, a concept that is rather popular today. Later, however, in "The Theory of Games and Economic Behavior" (von Neumann and Morgenstern 1944), he subscribed to a different solution concept which is less popular today. John von Neumann was born Neumann János Lajos in 1903 in Budapest and died in 1957 in Washington, DC. At the age of six, he could tell jokes in Classical Greek, memorize telephone directories, and divide two 8-digit numbers in his head. His first mathematics paper, written jointly with Michael Fekete,1 then an assistant at the University of Budapest who had been tutoring him, was published in 1922. At the age of 25, he had already published ten major papers in mathematics. In 1926, he received his Ph.D. in mathematics from the University of Budapest and a diploma in chemical engineering from the ETH Zurich. He taught as a Privatdozent at the University of Berlin2 from 1926 to 1929 and at the University of Hamburg from 1929 to 1930. In 1930, he became a visiting lecturer at Princeton University. In 1931, he was appointed professor there. In 1933, he became professor of mathematics at the newly founded Institute for Advanced Study in Princeton, a position he kept for the remainder of his life. Morgenstern (1976) reports that, on February 1, 1939, when he gave an after-luncheon talk on business cycles at the Nassau Club, he had a first chance to talk to John von Neumann about games. Over the years, their discussion and friendship progressed. In 1944, their "Theory of Games and Economic Behavior" (TGEB) was published, a volume of more than 600 pages that became the cornerstone and reference point of game-theoretical research from 1945 to 1955.
In his Von Neumann, Morgenstern, and the Creation of Game Theory: Chess to Social Sciences, 1900–1960, Leonard (2010) describes not only the historical background of the TGEB, but also its impact on the strategic thinking of the postwar period during the Cold War. This was highlighted by the research that was
1Fekete succeeded the famous mathematicians Edmund Landau and Abraham Fraenkel as head of the Institute of Mathematics at the Hebrew University of Jerusalem. 2The University of Berlin was founded in 1810. From 1828 to 1946, it was named Friedrich-Wilhelms-Universität in honor of its royal founder. In 1949, situated in East Berlin and thus in the former German Democratic Republic, it was renamed Humboldt-Universität zu Berlin, a name maintained after the reunification of Germany in 1990. Humboldt is a good name when it comes to science.
encapsulated in the activities of the RAND Corporation at Santa Monica, which was and, most likely, still is engaged in research under contract with the United States Air Force. It is said that the founding of this institute in March 1946 was partly initiated by von Neumann. Even if this might not be true, von Neumann was an extremely important and frequent visitor at RAND. Amadae (2003: 10) claims that it "… is no exaggeration to say that virtually all the roads to rational choice theory lead from RAND. This observation draws attention to its role as a quintessential American Cold War institution, and in turn to the Cold War motives that underlay much of the impetus propagating rational choice theory." During the years of collaboration with Morgenstern and after World War II, von Neumann served as a consultant to the armed forces. In 1940, he became a member of the Scientific Advisory Committee at the Ballistic Research Laboratories and in 1941 a member of the Navy Bureau of Ordnance. He was a consultant to the Los Alamos Scientific Laboratory from 1943 to 1955. In this function, he was a leading contributor to the development of the nuclear bomb and, along with Edward Teller and Stanisław Ulam, of the hydrogen bomb. In 1956, he received the Medal of Freedom, America's highest civilian award, recognizing exceptionally meritorious service. Von Neumann died of bone cancer on February 8, 1957, "after much suffering" (Morgenstern 1976: 814). Obviously, von Neumann's scientific interest strongly focused on mathematics and its applications. He made substantial contributions to quantum physics, functional analysis, set theory, topology, numerical analysis, cellular automata, and computer science, and his work in economics looks just like another field of application of mathematics.
However, quite surprisingly, Strathern (2001) devoted a full chapter to von Neumann in his "Brief History of Economic Genius," a chapter of nearly the same length as the presentations of Adam Smith, Marx, and Keynes, on the basis of his game-theoretical work, without mentioning his general equilibrium paper (von Neumann 1945 [1937]). The latter paper is, however, unknown to most economists and generally seen as an exercise in mathematics. What should you expect from a paper with the title "Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes," and with a single reference, a book titled "Topologie"? Von Neumann's work in game theory consists of the 1928 article and the joint publication of "Theory of Games and Economic Behavior." This is an outstanding work and quintessential for the further development of game theory. Yet, the claim that von Neumann is the "founding father of
game theory" has been challenged by friends and disciples of Émile Borel, a French mathematician and politician. Félix Édouard Justin Émile Borel (1871–1956) was born in Saint-Affrique, Département Aveyron, France. He was not only a first-class mathematician, but from 1924 to 1936 he was also a member of the French National Assembly. In 1925, he became Minister of Marine. During World War II, he joined the French Resistance. His political vision, however, was a united Europe. In mathematics, along with René-Louis Baire and his student Henri Lebesgue, he was among the pioneers of measure theory and its application to probability theory. One of his books on probability introduced the amusing thought experiment that entered popular culture under the name Infinite Monkey Theorem. The last section of this chapter contains an illustration. Between 1921 and 1927, Borel published a series of papers on game theory and became "perhaps" the first to define the game of strategy "in which the winnings depend simultaneously on chance and the skill of the player" (Borel, quoted by Rives 1975: 559). The focus on probability theory also carried over into his pioneering work in game theory. Combining the idea of strategic behavior with probability theory led Borel to suggest the concept of mixed strategies: the possibility that players choose their pure strategies with probabilities smaller than one. In fact, he came very close to understanding the mixed-strategy equilibrium and its somewhat paradoxical consequences.3 We will come back to this concept in Chap. 9. To our knowledge, Borel did not, however, prove that, given this possibility, an equilibrium in the form of a pair of strategies, possibly mixed, always exists if the interests of the players are strictly antagonistic, as in the zero-sum game. As already pointed out, von Neumann gave a proof of this result, known as the Minimax Theorem, in his 1928 article.
This may move one to ask how one pioneer of game theory coped with the other pioneer's work. It seems that they tried to avoid reading each other's material. We do not know whether Émile Borel read German, but von Neumann had the reputation that he could converse "in all major European languages," including Classical Greek and Latin. In May 1928, von Neumann sent a note to Borel, as it seems in French, announcing that he had "proved a theorem two years previously … concerning the existence of a 'best' way to play in the general two-person, zero-sum case" (Leonard 2010: 62). Borel is hardly known for his contribution to game theory, but the concept of a Borel set is named in his honor. Moreover, we find Borel's paradox, the Heine–Borel theorem, and the Borel–Cantelli lemma in the literature, as well as concepts like Borel algebra, Borel measure, and Borel space. In addition, one of the many craters on the moon carries his name. There is, however, a von Neumann crater on the moon as well. But let us come back to the idea of sets. It is helpful to understand this concept before going into the basics of game theory.

3See Leonard (2010: 60).
2.2 The Concept of Sets

When set theory entered the curriculum of elementary schools, it was met with disapproval and rejection by parents. Many parents found it difficult to accept that 1 + 1 may be equal to 1. We do not want to comment on why set theory should be taught at elementary schools, but simply summarize the basics referred to in this volume. A preliminary, but very useful, definition is to think of a set as a collection of objects that are different from each other. The apples and pears in Paul Cézanne's fruit basket (Fig. 2.1) form a set if we consider each apple different from every other apple and, of course, from every pear, and likewise each pear different from every other pear and from every apple. The members of the US Senate and the directors of Microsoft are sets. The natural numbers 1, 2, 3, 4, … can be described by a set, although this is an infinite set as there is always a number greater than each number we pick from this set. In fact, there are two conventions for the set of natural numbers: either the set of positive integers {1, 2, 3, …} or the set of non-negative integers {0, 1, 2, …} which includes zero. (What is natural about zero?) Sometimes the natural numbers together with zero are referred to as whole numbers. However, this term is also used for the set of positive and negative integers, including zero. In general, it does not make much sense to "add up" apples and pears if the difference between them is considered essential, for example, if John is allergic to apples. But to sum them up as fruits could make sense if Jean is on a fruit diet and apples and pears are considered equivalent from the diet's point of view. Whether it makes sense to distinguish objects that may form a set depends on whether it is practical to do so. If we distinguish the objects but relate them to form a set (for instance, put them into a basket), then these objects are called elements of the set.
Thus, number 3 is an element of the set of natural numbers. The players of the game that is represented by Matrix 1.1 form a set, the set of players, and player A is an element of this set.
Fig. 2.1 Basket with apples and pears à la Paul Cézanne (The original material to this piece was produced by Raphael Braham, Hamburg. We would like to thank him and his parents for letting us have this material)
There is the convention to use capital letters for labeling sets: A, B, C, …, X, Y, Z. If we label the set of natural numbers by A, then 3 is an element of A. Often, we use lower-case letters for elements and curly brackets to lump them into a set: For example, Z = {a, b, c, d} is the set of the elements a, b, c, and d, and T = {1, 2, 3, 4, 5, 6, 7, 8, 9} is the set of single-figure numbers. However, as the sequence of elements within a set does not matter, {1, 2, 3, 4, 5, 6, 7, 8, 9} = {9, 2, 1, 3, 4, 5, 6, 7, 8}. That is:

1. Two sets A and B are identical if their elements are identical.
2. A set does not include two or more identical elements.

Set X is a subset of set Y if all elements of X are elements of Y. We write X ⊆ Y. Set X is a proper (or strict) subset of Y if it is a subset of Y and Y contains elements which are not in X. We write X ⊂ Y. The apples in Cézanne's basket, depicted in Fig. 2.1, are a proper subset of the fruits in the basket, i.e., of the set of apples and pears.
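These conventions can be checked mechanically with Python's built-in sets (our choice of language for small illustrations throughout; the sets T and X below are the ones from the text):

```python
# Sets ignore order and repetition, matching conventions 1 and 2 above.
T = {1, 2, 3, 4, 5, 6, 7, 8, 9}          # the single-figure numbers
assert T == {9, 2, 1, 3, 4, 5, 6, 7, 8}  # the sequence of elements does not matter
assert {1, 1, 2} == {1, 2}               # no set contains identical elements twice

assert 3 in T                            # 3 is an element of T

X = {1, 2, 3}
assert X <= T                            # X is a subset of T (X ⊆ T)
assert X < T                             # X is even a proper subset (X ⊂ T)
print("all set conventions confirmed")
```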
Fig. 2.2 Union (of sets) A ∪ B
Fig. 2.3 Intersection (of sets) A ∩ B
If we take the squares of the natural numbers, we can think of them as forming an infinite set. However, this set will be a proper subset of the set of natural numbers, despite its infinity. If a set X contains no elements, then it is an empty set and we write X = ∅. If a set X contains only one element, say i, then it is a singleton, and we have X = {i}. If two sets A and B form a union, then the union set contains all elements of A and B. For the union, we write A ∪ B. However, identical elements in A and B will be "listed" only once. The union of A = {a, b, c, d} and B = {c, d, e, f, g}, thus, is A ∪ B = {a, b, c, d, e, f, g} (see Fig. 2.2). It should be obvious that X ∪ Y = Y if X ⊂ Y. It should also be obvious that A ∪ A = A. Due to lack of mathematical symbols on the typewriter or "pure laziness," this expression is sometimes written as A + A = A. Is this how we get 1 + 1 = 1? Figure 2.3 illustrates the intersection of the two sets A and B, i.e., A ∩ B, where A = {a, b, c, d} and B = {c, d, e, f, g}. We get A ∩ B = {c, d} for the intersection of A and B. Of course, X ⊂ Y implies the intersection X ∩ Y = X. Obviously, A ∩ A = A.
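The union and intersection identities can likewise be verified directly; a small Python sketch using the sets A and B from above:

```python
A = {"a", "b", "c", "d"}
B = {"c", "d", "e", "f", "g"}

assert A | B == {"a", "b", "c", "d", "e", "f", "g"}  # union: shared elements "listed" once
assert A & B == {"c", "d"}                           # intersection
assert A | A == A and A & A == A                     # the sense in which 1 + 1 = 1

X = {"c", "d"}                                       # X ⊂ B, therefore ...
assert X | B == B and X & B == X                     # ... X ∪ B = B and X ∩ B = X
```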
There is another important concept used in game theory, especially when we look at coalition formation: Set Ac is the complement of A if A ∩ Ac = ∅ and A ∪ Ac = Ω. Here, Ω is the set of all elements under consideration. In Fig. 2.1, Ω is represented by all apples and pears in the basket. The set of apples is the complement of the set of pears, and the set of pears is the complement of the set of apples.
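With hypothetical labels for the fruits in Cézanne's basket, the two defining conditions of the complement can be checked:

```python
# Hypothetical labels for the fruits in the basket of Fig. 2.1
omega = {"apple1", "apple2", "apple3", "pear1", "pear2"}  # Ω: all elements under consideration
apples = {"apple1", "apple2", "apple3"}
pears = omega - apples          # set difference gives the complement of the apples in Ω

assert apples & pears == set()  # A ∩ Ac = ∅
assert apples | pears == omega  # A ∪ Ac = Ω
```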
2.3 Prices and Quantities

The set of real numbers consists of the whole numbers, the rational numbers, and irrational numbers like π and √2. Rational numbers can be written as a ratio of two whole numbers (of course, we are not allowed to divide by zero). In economics, we refer to the set of all non-negative real numbers to describe the set of (possible) prices. Figure 2.4 illustrates this set. Therefore, the set of prices P is the set of all real numbers p with p ≥ 0, such that any p is a particular price, i.e., an element of P. If we assume that goods (commodities or services) can be split up into units so that any (non-negative) quantity is feasible and can be described by a real number, then we get the set of quantities Q. Figure 2.5 illustrates Q. Therefore, if q is an element of Q, then q is a quantity. Of course, we have q ≥ 0. If we combine the sets of Figs. 2.4 and 2.5, we get a price-quantity diagram. The diagram illustrates the set (P, Q) defined by all pairs (p, q) that result from a combination of P and Q. The pairs (p1, q1), (p2, q2), (p3, q3), (p4, q4), and (p5, q5) are elements of the set (P, Q). The price-quantity pairs have been chosen so that they are consistent with a negatively sloped demand curve. Economists like this type of curve, but it has very little relevance for game theory. Here, it serves to introduce the idea of a function: more precisely, the graph of a function.
Fig. 2.4 The set of prices P
Fig. 2.5 The set of quantities Q
Fig. 2.6 A price-quantity diagram
2.4 From Set to Mapping and Function

The representation of prices and quantities by real numbers implies that both are measurable. Consequently, the set (P, Q) is a subset of (two-dimensional) space. If, instead of prices and quantities, the two dimensions represent the profits of agents A and B, respectively, then we get a profit space as illustrated in Fig. 2.7. GA and GB represent the profits of A and B, respectively. Line gg represents the maximal profits of B, given alternative levels of GA. Of course, gg also shows the maximal levels of profits of A, given alternative levels of GB. The line GG captures the maximum of total profit G* such that GA + GB = G* holds for GG. Figure 2.7 demonstrates that G* can be achieved only if G* is shared according to GA* and GB*. Other pairs of (GA, GB) represent sums G° = GA + GB smaller than G*. The line gg implies that the set of possible profits given in Fig. 2.7 is convex. A convex set Z is characterized by the fact that all elements of the connecting line segment of any two "points" that represent elements of Z are elements of Z. Figure 2.8a represents a convex set Y. However, Y is only weakly convex, as there are connecting lines that are subsets of the border of Y. The set X of Fig. 2.8b is non-convex: For instance, the connecting line of elements a and b has elements which are not contained in X. The profit space in Fig. 2.7 assumes that each element ("point") in this space represents a pair of profits (GA, GB). Correspondingly, the pair (800, 200) assigns profits of 800 and 200 to agents A and B, respectively (see Matrix 1.1). We should expect that A would not be very happy if the
Fig. 2.7 A convex profit space
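The convexity condition described above can be spot-checked numerically. The sketch below uses a simplified triangular profit space with a hypothetical maximal total profit G* = 1000 and tests the defining condition on random pairs of points; this is an illustration, not a proof:

```python
import random

G_STAR = 1000.0  # hypothetical maximal total profit G*

def in_profit_set(g_a, g_b):
    # Simplified profit space: non-negative profit pairs whose sum stays below G*
    return g_a >= 0 and g_b >= 0 and g_a + g_b <= G_STAR

random.seed(42)
for _ in range(1000):
    # Draw two points of the set (each coordinate at most G*/2, so the sum fits)
    p = (random.uniform(0, G_STAR / 2), random.uniform(0, G_STAR / 2))
    q = (random.uniform(0, G_STAR / 2), random.uniform(0, G_STAR / 2))
    t = random.random()
    # Every point on the connecting line segment must lie in the set as well
    mid = (t * p[0] + (1 - t) * q[0], t * p[1] + (1 - t) * q[1])
    assert in_profit_set(*mid)
print("convexity condition holds on all sampled segments")
```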
ui(e°). The preference relation "e* better than e°" is thus transformed into the space of real numbers. So far, the utility function is only a shorthand representation of the preference relations. Decision makers have preferences. They prefer e* to e°, or they are indifferent. In general, only the analyst assigns numbers to represent preferences. However, when applying game theory, the decision maker assigns numbers to represent his or her preferences as well as the preferences of the other players. The mapping

if "e* better than e°" then "ui(e*) > ui(e°)"  (OU)

constrains the numbers expressing ui(e*) > ui(e°) to relative measures only. We are free to specify this relationship by any pair of numbers as long as ui(e*) > ui(e°) is satisfied. Thus, ui(e*) = 10 and ui(e°) = 5 do not imply that e* is twice as good as e°; they only say that e* is better than e°. The numbers only reflect an ordinal ranking. Analogously, the values ui(e) = 4 and uj(e) = 2 do not imply that the preference of i concerning outcome e is twice as strong as the preference of j for the same event. We cannot even state ui(e) > uj(e), comparing a utility value of i to a utility value of j, as interpersonal comparison of utility is excluded so far. In Chap. 10, cardinality of utility will be introduced in the form of a von Neumann-Morgenstern utility function. Such a utility function is called a payoff function. Note that payoffs are utilities, not money. If money values are the events evaluated in the game, then most game theorists speak of monetary payoffs. (However, not all game theorists submit to this standard.) But this does not exclude that we may take money values as proxies of utility values. In some cases, money seems to be an adequate proxy, as in the case of business profits.

Fig. 2.10 Mapping utilities

The pioneering work on game theory of von Neumann (1959 [1928]) and von Neumann and Morgenstern (1944) focuses on zero-sum games, which means that, for each particular event e°, the utilities of players 1 and 2 sum to zero or, equivalently, u1(e°) = −u2(e°) for all e° which are elements of E. This assumes cardinality, but also interpersonal comparison of utility. These are rather strong assumptions; modern game theory, starting with Nash (1950), tries to avoid them. (For a historical interpretation, see Holler (2016).) The mapping represented by (U) defines the utility function with respect to events or outcomes. But how do we get these events? Where do they come from? An event is the result of the specification and selection of the strategies of the players. A strategy choice is an element (s1, …, sn−1, sn) of the strategy space S. S is the Cartesian product S = S1 × S2 × … × Sn−1 × Sn, where Si is the strategy set of player i. Thus, a strategy choice determines for each player i with strategy set Si a particular strategy si which is an element of Si. The event function (or outcome function) e is a mapping

e: S → E.  (E)
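The objects just defined, the strategy sets Si, the strategy space S as a Cartesian product, and the event function e: S → E, can be written out concretely; the two players and their strategy names below are hypothetical placeholders:

```python
from itertools import product

S1 = ("top", "bottom")   # strategy set of player 1
S2 = ("left", "right")   # strategy set of player 2

# Strategy space S = S1 × S2: every strategy choice (s1, s2)
S = list(product(S1, S2))
assert len(S) == 4

# Event function e: S -> E, assigning an event to each strategy choice
e = {
    ("top", "left"): "e1", ("top", "right"): "e2",
    ("bottom", "left"): "e3", ("bottom", "right"): "e4",
}
assert set(e) == set(S)  # e is defined on the whole strategy space
```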
Now we can combine (U) and (E) so that we get the utility function in the form of a direct relationship between the strategy space S and the set of real numbers R:

u: S → R.  (U*)
As Si contains mixed strategies, i.e., probability distributions over pure strategies, u has to deal with expected values. This necessitates a cardinal interpretation of u (which will be introduced in Chap. 10).
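How (U*) combines with mixed strategies can be sketched as follows; the two-player example and its payoff numbers are invented for illustration, and, as noted, treating them as expected values presupposes cardinal utility (Chap. 10):

```python
from itertools import product

# Hypothetical cardinal payoffs u1: S -> R for player 1
u1 = {("top", "left"): 3, ("top", "right"): 0,
      ("bottom", "left"): 1, ("bottom", "right"): 2}

# Mixed strategies: probability distributions over the pure strategies
p1 = {"top": 0.5, "bottom": 0.5}
p2 = {"left": 0.25, "right": 0.75}

# Expected utility of player 1 under the mixed-strategy pair (p1, p2)
expected_u1 = sum(p1[s1] * p2[s2] * u1[(s1, s2)]
                  for s1, s2 in product(p1, p2))
print(expected_u1)  # 0.5*0.25*3 + 0.5*0.75*0 + 0.5*0.25*1 + 0.5*0.75*2 = 1.25
```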
2.6 Monkeys Write Shakespeare, but Where Is Hamlet?4

Probabilities can be a challenge, and not only in game theory. Here is an illustration based on Émile Borel's Infinite Monkey Theorem. It also serves to demonstrate the power and weakness of infinity when it comes to real-world decision making. Obviously, there are capacity constraints. Let a monkey

4We wrote this section two years ago and cannot recover the references.
use a typewriter and let us assume that he randomly presses the keys for an infinite time. Then, there is a probability of 1 that he will write a piece that is identical with Shakespeare's Hamlet. As he has typed many other pieces, many ill-defined and not identifiable, the problem will be how to find Hamlet in this infinitely large garbage pile of letters and words. The problem becomes lucid if we study the structure of the argument of the proof. Probability theory tells us that we have to multiply the probabilities of two events if the two events are independent and we want the probability that both events occur. If the probability of rain is 0.5 and the probability that you do not take your umbrella with you is 0.2, then the probability that you get wet is 0.5 times 0.2, which is 0.1. However, one might argue that the two events are not independent of each other. There are people who claim that it never rains when they take their umbrella with them. In general, these people bring forward this claim when it is raining and they left their umbrella at home. So, let us believe in independence, and let us assume that a monkey strikes each key of a typewriter with independent probabilities. If there are fifty keys, then there is a probability of 1/50 that a monkey will strike the key for "H" in his first attempt and, because of independence, again a probability of 1/50 that he will strike the key for "A" in his second attempt. Thus, there is a probability of 1/50 times 1/50, equal to 1/2500, to find "HA" on the paper. This probability can be written as 1/50². In other words, the probability that we do not find "HA" written is 1 − 1/50², which is quite large, but its difference from 1 is still within the domain of our visual capacity and daily experience. But what is the probability that we see "HAMLET" typed by Mr. Monkey? It is 1/50⁶ and thus the probability that we do not see "HAMLET" typed is 1 − 1/50⁶. For all practical purposes, this number is close to 1.
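These multiplications, and the repeated-trial probabilities that the discussion turns to next, can be reproduced in a few lines (a Python sketch):

```python
import math

# Independent events: multiply their probabilities.
p_rain = 0.5          # probability of rain
p_no_umbrella = 0.2   # probability of leaving the umbrella at home
print(p_rain * p_no_umbrella)   # 0.1: the probability of getting wet

# A monkey hits one of fifty keys at random, each stroke independent.
p_HA = (1 / 50) ** 2            # "H" then "A": 1/2500
p_HAMLET = (1 / 50) ** 6        # all six strokes of "HAMLET"
print(1 - p_HAMLET)             # probability of NOT seeing "HAMLET": practically 1

# Probability that n independent six-stroke sequences ALL fail to spell "HAMLET";
# log1p keeps the arithmetic accurate although p_HAMLET is tiny.
def p_no_hamlet(n):
    return math.exp(n * math.log1p(-p_HAMLET))  # (1 - p_HAMLET) ** n

for n in (10**6, 10**10, 10**11):
    print(n, p_no_hamlet(n))    # falls from about 0.99994 via 0.53 to 0.002
```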
The conclusion is that we will not see "HAMLET" unless the monkey tries again and again. Let him type another six strokes. Again, the probability that we find "HAMLET" on the paper will be 1/50⁶. Therefore, the probability that we do not see "HAMLET" is again 1 − 1/50⁶. We will not see "HAMLET" with the probability p = (1 − 1/50⁶)(1 − 1/50⁶) = (1 − 1/50⁶)² in two trials of six letters, but obviously the probability of failure has decreased as (1 − 1/50⁶)² is smaller than 1 − 1/50⁶. (Take any positive number x smaller than 1 and multiply it by itself; then you will see that x² is smaller than x.) Now let us assume that the monkey types, instead of 2, n sequences of six letters, one after another; then the probability of not typing "HAMLET" will be p = (1 − 1/50⁶)ⁿ. The bad message is that if n equals a million, the probability p is still larger than 0.999. However, for an n that equals 10 billion, p will be about 0.53, and for n = 100 billion, the probability drops to about 0.002. Obviously, for a very large n, the probability p approaches 0 and the monkey will have typed "HAMLET" with a probability close to 1. Of course, the monkey will accomplish this job much faster if we allow for having, for example, "HAML" in sequence n − 1 and "ET" in sequence n. Still, it is not very likely that any monkey will live long enough to write "HAMLET" under this condition, not to speak of the time it needs for a monkey to write the full play: A monkey is a poor substitute for William Shakespeare. But if the number of monkeys approaches infinity, then they can even accomplish the text of the Hamlet play plus the text of Homer's Iliad. You see, if 100 billion monkeys each punch a sequence of six letters, then there is a probability of about 1 − 0.002 = 0.998 that at least one of them typed "HAMLET." Of course, one might wonder from where we can recruit 100 billion monkeys and how we can feed them. Even to arrange for 100 billion typewriters might be a problem as the experiment does not promise to be profitable. The next problem is how to find "HAMLET" if each of the 100 billion monkeys has produced a sequence of 6 letters and there is a probability of 0.998 that one of the monkeys typed "HAMLET." There will be mountains of paper that need to be checked for "HAMLET." However, perhaps this issue is not relevant at all. In 2003, it was reported that, in a project of the University of Plymouth in Devon, England, a keyboard was put into a cage with six macaques at Paignton Zoo for one month. Some members of the macaque family live on the Rock of Gibraltar and enjoy the Mediterranean view, but the six specimens in the cage were expected to write.
During this time, the monkeys produced five pages of writing that mainly consisted of the letter “S”—which is unfortunate if one expects the monkeys would produce “HAMLET.” Moreover, the monkeys attacked the keyboard with stones and urinated on it. In other words, the probability model that we assumed above obviously did not apply. On the other hand, if we took human beings instead of monkeys, then we would hope that most of them would be able, given the proper incentives, to write the word “HAMLET” in six strokes only; the independence of probabilities no longer holds. But there are not too many people who can write a Shakespeare play without copying it, irrespective of whether it is Hamlet, Romeo and Juliet or As You Like It. Instead of challenging this hypothesis, we now start our adventure into game theory.
References

Amadae, S. M. (2003). Rationalizing capitalist democracy: The Cold War origins of rational choice liberalism. Chicago: The University of Chicago Press.
Holler, M. J. (2016). John von Neumann (1903–1957). In G. Faccarello & H. D. Kurz (Eds.), Handbook on the history of economic analysis, Volume I: Great economists since Petty and Boisguilbert (pp. 581–586). Cheltenham and Northampton: Edward Elgar.
Leonard, R. (2010). Von Neumann, Morgenstern, and the creation of game theory: Chess to social sciences, 1900–1960. Cambridge: Cambridge University Press.
Morgenstern, O. (1976). The collaboration between Oskar Morgenstern and John von Neumann on the theory of games. Journal of Economic Literature, 14, 805–816.
Nash, J. F. (1950). Equilibrium points in N-person games. Proceedings of the National Academy of Sciences, 36, 48–49.
Rives, N. W., Jr. (1975). On the history of the mathematical theory of games. History of Political Economy, 7, 549–565.
Strathern, P. (2001). Dr. Strangelove's game: A brief history of economic genius. London: Hamish Hamilton.
Von Neumann, J. (1959 [1928]). On the theory of games of strategy. In A. W. Tucker & R. D. Luce (Eds.), Contributions to the theory of games (Vol. 4). Translation of "Zur Theorie der Gesellschaftsspiele," Mathematische Annalen, 100, 295–320.
Von Neumann, J. (1945 [1937]). A model of general economic equilibrium. Review of Economic Studies, 13, 1–9. Translation of "Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes." In K. Menger (Ed.), Ergebnisse eines Mathematischen Seminars (Vol. 8). Vienna.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton: Princeton University Press.
3 The Prisoners’ Dilemma, but Who Are the Players?
Two suspects are put in solitary confinement. The prosecutor is convinced that they committed a serious crime but does not have sufficient proof to convict them in a court trial. He points out to each of them separately that they have two possibilities: to confess or not to confess. If neither confesses, he will prosecute both of them for a few minor offenses, such as illegal possession of weapons, and each will be given a sentence of two years. If both confess, they will be prosecuted together and the prosecutor will request a penalty of ten years. If only one of them confesses, the confessor will serve only one year while the other will have to reckon with the maximum penalty of twenty years.
3.1 From Game Form to Payoff Matrix

In their now classical volume "Games and Decisions," Luce and Raiffa (1957: 95) were among the first to tell the above story of the Prisoners' Dilemma, probably the most popular tale in game theory writings.1 The tale describes a problem of whistle-blowing for those who want to save their skin when facing a court trial. The strategic problems of potential state witnesses can be illustrated by the event matrix given as Matrix 3.1.
1 In Luce and Raiffa (1957: 95) we still read "Prisoner's Dilemma": The game situation is a dilemma for every single player who is involved. Today "Prisoners' Dilemma" is preferred, perhaps because it needs at least two agents to be in such a dilemma.
© Springer Nature Switzerland AG 2020 M. J. Holler and B. Klose-Ullmann, Scissors and Rock, https://doi.org/10.1007/978-3-030-44823-3_3
Matrix 3.1 Event matrix of a Prisoners' Dilemma

                               Player 2
                     not confess           confess
Player 1
not confess     2 years, 2 years     20 years, 1 year
confess         1 year, 20 years     10 years, 10 years
The suspects are called players 1 and 2. Their set of strategies is described by the elements (i.e., the pure strategies) "confess" and "not confess." The potential prison terms of both delinquents, expressed in years, constitute the events that correspond to the strategy choices of players 1 and 2: If player 1 chooses the strategy "confess" and player 2 chooses "not confess," player 1 receives a penalty of 1 year and player 2 receives a penalty of 20 years. The specification of the players, the sets of strategies, and the events defines the game form of a game, i.e., the event matrix in the case of a matrix game. In the course of this text, we will get to know alternative ways of describing a game form, e.g., the game tree. In this chapter, we are looking at games and game forms in matrix form only. This representation is also called the normal form or strategic form of a game. In order to capture the decision problem that a player is facing, we have to enrich the game form by the players' evaluation of the outcomes. This is done as follows: Each player assigns utility values to the various outcomes. He does so not only from his own point of view but also with regard to how he or she thinks the fellow player would evaluate the events. It seems appropriate to assume that the suspects assign the higher utility value to the smaller punishment. If we want to introduce a formal concept in order to rank these utility values then, as already proposed in Sect. 2.5, we can define a function ui(.) such that

   "e* is better than e°" implies "ui(e*) is larger than ui(e°)"   (OU)

However, all we really need is the ordinal ordering expressed by "better than," on the one hand, and "larger than," on the other. We may thus insert the values shown in Matrix 3.2. Note that if we multiply these numbers by 3 or divide them by 2.75, or by any other positive real number, we get an equivalent representation. Only the ordinal relationships "larger than" and "smaller than"
matter in this case. As long as larger numbers are larger and smaller numbers are smaller, we have the same game: Any monotonic transformation of the utility values gives the same strategic decision problem. Matrix 3.2 represents the game matrix or payoff matrix of the Prisoners' Dilemma game, in short, the Prisoners' Dilemma. As already said in Chap. 2, payoffs are utilities, not money, although they are sometimes treated like money. In more traditional game theory books, in fact, payoff is identified with money. Today, if we speak of money proper, we talk about money or monetary payoffs. In this book, we will speak of money when we mean money and of payoffs when we mean the evaluation of an event, i.e., utilities. For the analysis of the game in Matrix 3.2, we just need to know that each player prefers the results with larger numbers to the results with smaller numbers: The higher the number, the better!

Matrix 3.2 Payoff matrix of the Prisoners' Dilemma
[Numerical entries not recoverable; rows: player 1's strategies "not confess" and "confess," columns: player 2's strategies, cells: utility pairs (u1, u2) satisfying condition (OU).]
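The invariance of the game under monotonic transformations can be sketched in a few lines of Python. The payoff values 0 to 3 below are illustrative stand-ins (not the specific entries of Matrix 3.2), and `best_reply` is a hypothetical helper:

```python
# Illustrative payoff matrix: (row strategy, column strategy) -> (u1, u2).
# The numbers 0-3 are stand-in ordinal utilities, not the book's entries.
game = {
    ("not confess", "not confess"): (2, 2),
    ("not confess", "confess"):     (0, 3),
    ("confess",     "not confess"): (3, 0),
    ("confess",     "confess"):     (1, 1),
}

def best_reply(g, player, other_strategy):
    """Player's best strategy against a fixed strategy of the other player."""
    strategies = ["not confess", "confess"]
    if player == 1:
        return max(strategies, key=lambda s: g[(s, other_strategy)][0])
    return max(strategies, key=lambda s: g[(other_strategy, s)][1])

# A strictly increasing transformation, e.g. u -> 3u + 7, leaves every
# best reply, and hence the strategic decision problem, unchanged.
transformed = {k: (3 * u1 + 7, 3 * u2 + 7) for k, (u1, u2) in game.items()}

for other in ("not confess", "confess"):
    for player in (1, 2):
        assert best_reply(game, player, other) == best_reply(transformed, player, other)
```

Only strictly increasing transformations are admissible: multiplying by a negative number would reverse "larger than" and change the game.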
When looking at Matrix 3.2, should we confess or not confess if we were player 1? What would you recommend to the players if you were their lawyer? If we multiply all numbers in Matrix 3.2 by 10, we arrive at Matrix 1.1 in Chap. 1, which shows a competitive trap: cooperation among the duopolists becomes impossible; the described decision situation is a Prisoners' Dilemma. This seems bad, but how would the demand side interpret this decision situation and its results? If we consider the buyers as additional players, the outcome is no longer inefficient. Then, Matrix 1.1 tells only part of the story, which demonstrates that not all Prisoners' Dilemma situations are bad. The evaluation depends on which players are taken into account. It is important to clarify: "Who are the players?" Matrices 3.2 and 1.1 represent a game of the same type: They describe equivalent strategic decision situations. The payoffs (or utilities) in both matrices are chosen somewhat arbitrarily, of course, respecting the rather intuitive
restriction given by condition (OU), which relates preferences and utilities. Therefore, we can substitute for the game in Matrix 3.2 a much more lucid version, shown in Matrix 3.3. The two matrices are absolutely equivalent when it comes to game theory. Moreover, the symmetry in the payoffs of Matrix 3.3 has no meaning whatsoever: From property (OU), it follows that player 1's utility may not be compared to the utility of player 2.

Matrix 3.3 The Prisoners' Dilemma

                        Player 2
                 not confess    confess
Player 1
not confess       (2, 2)        (0, 3)
confess           (3, 0)        (1, 1)
In the civil law systems of Continental Europe, the state (or crown) witness regulation is somewhat out of place and infrequently applied. In Germany, it was introduced to fight the terrorist group RAF (Rote-Armee-Fraktion, also called Baader-Meinhof-Gruppe), without much success. However, it is an integral part of the US legal system, which follows common law principles. In the investigation launched by the US Department of Justice in 2002 to look into alleged price-fixing in the DRAM computer chip market, the executives from Micron received immunity from criminal charges because they agreed to testify as state witnesses against the other parties in the price cartel. The investigation resulted in fines to the chip manufacturers Samsung, Infineon, Hynix and Elpida Memory totaling $731 million for illegal price-fixing and violating antitrust law.2 However, in 2006, the willingness of Micron executives to act as state witnesses did not give them carte blanche in the antitrust lawsuit against seven computer chip-makers, among them Micron Technology Inc., filed by 34 state prosecutors in the US District Court for the Northern District of
2 Süddeutsche Zeitung, 15/16 July 2006, p. 26. Also "Discovery Lessons" by Bartko and Bunzel in The Recorder, Autumn (2008), www.callaw.com, and "Micron Settles Class-Action Lawsuit Alleging Price Fixing," The Associated Press, published 9:36 PM ET Wed, 10 Jan 2007, updated 4:13 PM ET Thu, 5 Aug 2010.
California, charging them with conspiring to inflate prices. US law provides for a fine as high as triple the damages. In April 2002, Sotheby's chairman, Alfred Taubman, and its chief executive, Diana Brooks, were found guilty of conspiring with Christie's to fix commissions. Mr. Taubman served ten months of a one-year prison sentence; Ms. Brooks was given six months' house arrest, a $350,000 fine and 1,000 hours of community service. During the time of house arrest, she was allowed to leave her 12-room, $5 million apartment for two hours each Friday to go grocery shopping at any store selling food or products related to food preparation, said James T. Blackford, a probation supervisor who oversaw Ms. Brooks' case for the Federal District Court. Ms. Brooks had pleaded guilty to price-fixing and then testified against her boss Alfred Taubman. No one was charged at Christie's, which had blown the whistle on the commission-fixing. To understand the different types of behavior, it might be useful to take into consideration the differences in the ownership structure of the two auction houses. In a special report on the art market, published in the Economist (November 26, 2009), we read that "Sotheby's is a quoted company whereas Christie's, once listed, was taken private in 1999 by its current owner, Mr. François Pinault. Christie's business has since hugely expanded, partly thanks to Mr. Pinault's pivotal position in the art world." As a consequence, "the company can pick and choose what information it wants to reveal." Perhaps it comes as a surprise that "it has in fact become more open over the past ten years." As for the price-fixing scandal, it may seem that Mr. Pinault wanted to cripple Sotheby's. However, as pointed out to us by Professor Isidoro Mazza (University of Catania), another interpretation is that Mr. Bernard Arnault, at the time owner of the auction house Phillips de Pury & Company, was the main beneficiary.
He could have reinforced his position against the two rivals by buying Sotheby's shares at a 50% discount. As it was impossible to take over Christie's, Sotheby's was the right target for Mr. Arnault. However, he did not buy Sotheby's and sold Phillips when he lost the momentum to close the gap with the other two auction houses. In the described case, there are at least two intertwined whistle-blowers: Ms. Diana Brooks and Christie's. This makes the analysis of the decision making, even when looking backward, rather difficult. Moreover, whistle-blowing is not always related to a Prisoners' Dilemma game. In the case of the Micron executives and Ms. Brooks, whistle-blowing was perhaps the dominant strategy, while the players on the other side had no such option. If we introduce the law enforcers as an additional player,
this whistle-blowing case is of course no longer a Prisoners' Dilemma: the outcome is no longer inefficient. Whether Christie's whistle-blowing was a dominant strategy is not that obvious. We will look into the case of Christie's and Sotheby's again.
3.2 Equilibrium in Dominant Strategies

Matrix 3.3 shows that both suspects profit from the strategy "confess," independent of the other player's strategy. "Confess" is a strictly dominant strategy for each player, as it is each player's best strategy irrespective of what strategy the other player chooses. The solution of the game is an equilibrium in strictly dominant strategies. However, the outcome, which corresponds to the utility pair (1, 1), is inefficient because both players could improve their results if they chose the strategy "not confess" instead of "confess." Then they would enjoy the utility pair (2, 2) instead of (1, 1). But the individual player has no incentive to deviate from the strictly dominant strategy "confess," as this is the best strategy one can choose whatever strategy the other player selects, i.e., even when the other player chooses "not confess." Of course, we assumed that a player is strictly interested in maximizing his or her own utility and not the sum of the utilities of both players. To sum up, the characteristics of a Prisoners' Dilemma are: (a) Each player has a strictly dominant strategy. (b) The equilibrium in strictly dominant strategies leads to an inefficient outcome. As to property (a), the following question suggests itself: Is it important that the suspects are put in different cells and cannot communicate with each other? The answer is straightforward: If the players can assume that the general attorney, i.e., the prosecutor, told both suspects the same story, the answer is "no." Player 1 would choose the strictly dominant strategy "confess" even if he knew that player 2 knows his (player 1's) strategy decision before making his own decision. Player 2 reasons and chooses accordingly. From a player's point of view, a strictly dominant strategy is always the best decision, and knowing the other player's strategy therefore has no impact on one's own decision.
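The two defining properties can be checked mechanically. A minimal Python sketch over the payoffs of Matrix 3.3 (the helper functions `u` and `strictly_dominant` are our own constructions, not from the book):

```python
# Check the Prisoners' Dilemma properties on the payoffs of Matrix 3.3:
# (a) each player has a strictly dominant strategy,
# (b) the dominant-strategy equilibrium is Pareto-inefficient.
payoffs = {
    ("not confess", "not confess"): (2, 2),
    ("not confess", "confess"):     (0, 3),
    ("confess",     "not confess"): (3, 0),
    ("confess",     "confess"):     (1, 1),
}
S = ["not confess", "confess"]

def u(player, s1, s2):
    """Utility of a player at the strategy combination (s1, s2)."""
    return payoffs[(s1, s2)][player - 1]

def strictly_dominant(player):
    """Strategy that is strictly best against every strategy of the opponent."""
    for s in S:
        others = [x for x in S if x != s]
        if player == 1 and all(u(1, s, t) > u(1, x, t) for t in S for x in others):
            return s
        if player == 2 and all(u(2, t, s) > u(2, t, x) for t in S for x in others):
            return s
    return None

eq = (strictly_dominant(1), strictly_dominant(2))
assert eq == ("confess", "confess")                      # property (a)
coop = ("not confess", "not confess")
assert all(payoffs[coop][i] > payoffs[eq][i] for i in (0, 1))   # property (b)
```

With two strategies per player the loops are trivial, but the same check works for any finite matrix game.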
The first sentence of the Prisoners’ Dilemma story (“Two suspects are put in solitary confinement.”) at the beginning of this chapter has led to many misinterpretations. Readers often forget that rational players will choose
strictly dominant strategies, and therefore it does not make any difference for the (game-theoretical) outcome whether the players are kept segregated or whether they can communicate with each other. However, it is important that the players cannot make any binding agreements and third parties (e.g., the Mafia) do not intervene. Another misinterpretation of the Prisoners’ Dilemma results from a tacit assumption that player 1 can decide not only for himself but also for player 2 to choose “not confess.” However, it is the nature and essence of a game that players are in a position to control their own strategies, but not the strategies of the other players.
3.3 Catch-22 and Other Social Traps

"From now on I'm thinking only of me." Major Danby replied indulgently with a superior smile: "But, Yossarian, suppose everyone felt that way." "Then," said Yossarian, "I'd certainly be a damned fool to feel any other way, wouldn't I?" If we look at the logical structure of this conversation taken from Heller's (1961) notorious novel "Catch-22," we cannot but agree with Yossarian. Acting for one's own benefit is a dominant strategy in the situation discussed between Major Danby and Yossarian and describes individually rational behavior. If the result of this behavior is socially undesirable, as we can conclude from Major Danby's argument, then we are facing a social trap that, in general, constitutes a Prisoners' Dilemma. The Prisoners' Dilemma has often been interpreted as the description of a decision situation that implies a fundamental conflict between individually rational behavior and socially desirable, i.e., collectively rational, behavior. In general, the latter presupposes behavior that leads to an efficient outcome such that no individual member of the society under consideration can be made better off without reducing the benefits of another member.3 However, cooperative behavior leading to an efficient outcome is not always achieved when it is in conflict with individual rationality. We can achieve such a generalization of the Prisoners' Dilemma in terms of cooperative and non-cooperative behavior by renaming the strategies. A comparison of Matrices 3.3 and 3.4 illustrates this generalization. The strategy "not
3 This is called Pareto optimality or Pareto efficiency, honoring the contributions of Vilfredo Pareto (1848–1923) to modern economics.
confess" is interpreted as "cooperate," and "confess" is replaced by "not cooperate." Obviously, Matrix 3.4 represents a Prisoners' Dilemma if a > b > c > d applies.

Matrix 3.4 The generalized Prisoners' Dilemma

                          Player 2
                  cooperate    not cooperate
Player 1
cooperate          (b, b)        (d, a)
not cooperate      (a, d)        (c, c)
Matrix 3.4 represents a social dilemma, which "is defined as a situation of strategic interdependence in which the decisions of individually rational actors lead to an inferior outcome for all or some parties than the decisions of 'collectively rational' actors" (Diekmann and Przepiorka 2016: 1311). Whether we are dealing with cartel agreements as in the game in Matrix 1.1 or with financing public or collective goods, the socially desirable outcome (cooperate, cooperate) is not reached because it contrasts with individually rational behavior. This is the essence of the dilemma. The strategy "not cooperate" is often identified as free-riding. A free-rider uses a collective good without giving something in return, like traveling on a public bus without paying for the ticket. The free-riding analogy also applies to the polluter who dumps his old tires in the woods, to the power station that channels its cooling water into the Danube river, or to fishing boats using drag nets. It applies just as well to tax evaders, to people obtaining subsidies under false pretenses, and, more generally, to users of public goods who either do not contribute properly to their provision and maintenance or exploit those goods to a much larger extent than they are entitled to from a legal or moral point of view. These individually rational actors have in common that their own utility from exploiting the collective good "environment" or "state" is larger than the costs allotted to them as members of the society which they are "exploiting." In case everybody acts like this, the environment will be destroyed and the state is reduced to a mere power game without any resources to provide public goods. From a social point of view, the outcome
is inefficient and individually undesirable, and yet no individual can improve his or her outcome by choosing a "socially desirable" behavior. This is also true for the game in Matrix 3.4: The collective good "cooperation" is not provided although each individual player would be better off if it were.
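The free-rider logic extends beyond two players. A hedged numerical sketch (the parameters n, b, c below are illustrative assumptions, not values from the book): contributing is individually irrational whenever the private return b falls short of the cost c, yet socially rational whenever the total return n·b exceeds c:

```python
# An n-player public-goods sketch of free-riding. Each player either
# contributes c = 1 or free-rides; every contribution yields a benefit
# b = 0.6 to each of the n players. Illustrative values with b < c < n*b.
n, c, b = 4, 1.0, 0.6

def payoff(contributes, n_others_contributing):
    """A player's utility given own choice and the others' contributions."""
    total = n_others_contributing + (1 if contributes else 0)
    return b * total - (c if contributes else 0.0)

# Free-riding strictly dominates contributing, whatever the others do ...
for k in range(n):
    assert payoff(False, k) > payoff(True, k)

# ... yet everyone contributing beats everyone free-riding:
assert payoff(True, n - 1) > payoff(False, 0)
```

For n = 2 and suitable values, this reduces exactly to the generalized Prisoners' Dilemma of Matrix 3.4.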
3.4 Ways Out of the Dilemma

The possibility of free-riding prevents collective goods from being provided voluntarily. Therefore, many authors4 used the Prisoners' Dilemma as an illustration of the Hobbesian Jungle in which, as Thomas Hobbes (1588–1679) stated, life is "solitary, poor, nasty, brutish, and short." As a solution, he postulated an unrestricted state authority, establishing itself as Leviathan, to defeat anarchy (see Hobbes 1996 [1651]). Of course, the social traps which can be described by the Prisoners' Dilemma have inspired many attempts to find efficient solutions. We can, for instance, show that cooperative behavior on a voluntary basis is possible if the Prisoners' Dilemma situation is repeated infinitely, or rather with an unforeseeable end, and the players do not discount their utility too much. Such a setting is called a repeated Prisoners' Dilemma game or Iterated Prisoners' Dilemma. (We will discuss it further in Chap. 9.) Many real-world decision situations have features that can be approximated by such a setting. The standard objection to this solution is: If the game is played repeatedly with an unforeseeable end, then, in fact, it is no longer a Prisoners' Dilemma. To avoid confusion, the original game may therefore be specified as a one-shot Prisoners' Dilemma. Similar objections hold when the bilateral relation reigning in the original Prisoners' Dilemma is embedded in a social network, and information about cooperative and non-cooperative behavior is spread through this net. A one-time deviation from the "path of virtue" may lead to lasting ostracism. Social networks are often based on structures that minimize the degree of violating cooperation. In general, if violations are not sanctioned, then the

4 See, for example, Taylor (1976) in "Anarchy and Cooperation," and Heap and Varoufakis (1995: 148). However, the Hobbes scholar Tuck (1989: 65ff) argues that the source of the Hobbesian Jungle is not the result of conflicting interests but the fact that people have no language to communicate on moral values and thus to coordinate their behavior. Given this Hobbesian uncertainty, it is not easy to justify assigning payoffs to a decision maker that make "not cooperate" a strictly dominant strategy, as in Matrix 3.4. If we follow Tuck, the Stag Hunt Game with a low degree of social trust seems to deliver a more adequate description of the Hobbesian Jungle (see Sect. 8.4).
networks are highly unstable and prone to dissolve. It is the threat of punishment, whether in social networks or in repeated social interaction, that may bring about cooperation if the one-shot decision situation constitutes a Prisoners’ Dilemma. In the case of the chip manufacturers, mentioned in 3.1 above, who were sentenced to a fine of 731 million dollars because of price agreements and which are now threatened with class action charges,5 the time dimension mixes in a very complex way with the network dimension. The notion of embeddedness in a network is not the same for all companies concerned. Apparently, Micron’s managers interpreted the first round of the proceedings as a one-shot Prisoners’ Dilemma, and therefore, in 2002, they acted as state witnesses. It would be interesting to see how many months and years these managers continued to run the company. Cooperative behavior on a voluntary basis is possible if both players expect that the fellow player will cooperate if he himself chooses the cooperative strategy—and that the fellow player chooses the non-cooperative strategy if he himself acts non-cooperatively. Under this hypothesis, the players follow conditional strategies that will motivate them—on the basis of individually rational behavior—to choose cooperative strategies. The outcome is an expectation equilibrium. However, in a (one-shot) Prisoners’ Dilemma the strategies are not conditional. Why should player 1 choose “cooperate” if this strategy is played by player 2? Because of mirror neurons? (see Sect. 1.6). Regarding the Prisoners’ Dilemma in Matrix 3.4, cooperative behavior on a voluntary basis might be the outcome if both players act altruistically and evaluate the other player’s utility positively, revising their own utility in the game accordingly. However, as long as a > b > c > d holds, Matrix 3.4 describes a Prisoners’ Dilemma, irrespective of any altruism, envy, or other type of fellow-feeling. 
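The repeated-game argument can be made concrete. The grim-trigger calculation below is a standard textbook construction, not one spelled out in this chapter; with the payoffs a > b > c > d of Matrix 3.4, cooperation is sustainable whenever the discount factor is at least (a − b)/(a − c):

```python
# Grim trigger in the infinitely repeated game of Matrix 3.4, using the
# illustrative values a = 3 > b = 2 > c = 1 > d = 0.
a, b, c, d = 3, 2, 1, 0

def cooperation_sustainable(delta):
    """Cooperating forever (payoff b each period) must beat deviating once
    (a today, then the punishment payoff c in every later period)."""
    cooperate_forever = b / (1 - delta)
    deviate_once = a + delta * c / (1 - delta)
    return cooperate_forever >= deviate_once

threshold = (a - b) / (a - c)        # critical discount factor, here 0.5
assert not cooperation_sustainable(0.4)   # too impatient: defect
assert cooperation_sustainable(0.6)       # patient enough: cooperate
```

If the players discount their utility too heavily (delta below the threshold), the one-period gain from defection outweighs the lost stream of cooperative payoffs, which is exactly the caveat in the text above.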
But if a > b > c > d does not apply because of too much altruism or envy, then the game in Matrix 3.4 is no longer a Prisoners' Dilemma. A similar argument holds for the Mafia solution of the Prisoners' Dilemma game. Here, the game has not been modified by an adequate framing of the players' utilities but by "shaping" the events. If the Mafia punishes "confess" by killing the traitor, Matrix 3.5 represents a corresponding game form.

5 A class action is a form of lawsuit in which a large group of people collectively brings a claim to court or in which a "class" of defendants is being sued. This form of collective lawsuit is rather common in the USA but is also getting increasingly popular in the European legal systems.
Matrix 3.5 Event matrix of a Mafia solution

                              Player 2
                    not confess           confess
Player 1
not confess    2 years, 2 years     20 years, death
confess        death, 20 years      death, death
Of course, the players evaluate the event "death" as the worst, which is assured by the Mafia's intervention, and "confess" is certainly not a dominant strategy. On the contrary, the game that corresponds to the game form in Matrix 3.5 is likely to have "not confess" as the dominant strategy if we assign plausible payoffs to the players. Mafia members, learning from history, experience, and introspection, can discover this solution without deeper knowledge of game theory.
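One plausible assignment can be checked directly. The ordinal payoffs below are our assumption (the book gives only the events of Matrix 3.5, with "death" ranked worst):

```python
# Assumed ordinal payoffs for the Mafia-modified game of Matrix 3.5,
# with the event "death" ranked worst (0). The numbers are illustrative.
payoffs = {
    ("not confess", "not confess"): (3, 3),   # 2 years each
    ("not confess", "confess"):     (1, 0),   # 20 years vs. death
    ("confess",     "not confess"): (0, 1),   # death vs. 20 years
    ("confess",     "confess"):     (0, 0),   # death, death
}

# "Not confess" is now strictly dominant for player 1
# (and, by the symmetry of the payoffs, for player 2 as well):
for t in ("not confess", "confess"):
    assert payoffs[("not confess", t)][0] > payoffs[("confess", t)][0]
```

Under any payoff assignment that ranks "death" below every prison term, the dominance reverses and the Mafia's "shaping" of events dissolves the dilemma.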
3.5 Who Are the Players?

Introducing the Mafia to the Prisoners' Dilemma game, with the ensuing changes in the utilities of the two suspects, leads us to the question of who the players in this game are. Apparently, the two suspects would commit a serious mistake in their decisions if they neglected the Mafia. This is drastically shown when comparing the game forms in Matrices 3.1 and 3.5. The conclusion is: A careful analysis of the decision situation with game-theory tools must always start with the question: Who are the players? In many cases, the answer to this question is already an essential element of analyzing the decision situation. To formulate the decision model adequately is an indispensable prerequisite of a "good" decision. In many cases, the question "Who are the players?" is highly relevant: It gives a new dimension even to the story of the two suspects at the beginning of this chapter. Now it is no longer the story of the two suspects alone but becomes the story of the prosecutor as well. The prosecutor needs a confession in order to convict the two suspects as criminals. We can only guess his motives: perhaps a chance of being promoted or his personal commitment to law and
order is his driving force; we do not know. Yet we understand that he definitely wants a confession. We might even suspect that he would prefer that both suspects confess. This is to be concluded from the design of the decision situation, described in Matrix 3.1, with which the two suspects are confronted. Yet this decision situation is only part of the problem. The prosecutor, as well, has to make a decision: Which event matrix should he present to the suspects? Thus, he should be seen as a player in a "larger game"; Matrix 3.1 represents only a subgame of it. Later in this text, we will learn more about subgames: We will see that it is strongly recommended to take subgames into account, especially when the sequential form of a game matters.

Matrix 3.6 The Prisoners' Dilemma as subgame

                        Player 2
                 not confess     confess
Player 1
not confess      (2, 2, 0)      (0, 3, 1)
confess          (3, 0, 1)      (1, 1, 3)
Looking at the likely preferences of the prosecutor, we obtain for each of the events in Matrix 3.1 a triplet of payoffs. These triplets are written as entries of Matrix 3.6 in the form of three-dimensional vectors: The first and second entries are the values of the suspects (players 1 and 2), while the third entry is the value of the prosecutor. The vector (3, 0, 1) means that the event (confess, not confess) is the best outcome for player 1 and the worst for player 2, whereas the prosecutor evaluates this outcome better than (not confess, not confess) but worse than (confess, confess). The structural properties of the Prisoners' Dilemma (strictly dominant strategies and an inefficient outcome) are valid for the game in Matrix 3.6, but with a few restrictions. The two suspects still have strictly dominant strategies, and the corresponding outcome (confess, confess) constitutes an equilibrium in dominant strategies. But this outcome is no longer inefficient since not all players, starting from the choice (confess, confess), can improve their situation. We see that for player 3, the prosecutor, every other outcome is worse than (confess, confess). Thus, (confess, confess) constitutes a Pareto efficient result: Nobody can be made better off without somebody being made worse off.
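This efficiency claim can be verified mechanically. In the sketch below, only the triplet (3, 0, 1) is given explicitly in the text; the remaining prosecutor payoffs are ordinal assumptions consistent with its description:

```python
# Pareto check for the three-player game of Matrix 3.6. The prosecutor's
# payoffs other than in (3, 0, 1) are assumed, ordinally consistent values.
game = {
    ("not confess", "not confess"): (2, 2, 0),
    ("not confess", "confess"):     (0, 3, 1),
    ("confess",     "not confess"): (3, 0, 1),
    ("confess",     "confess"):     (1, 1, 3),
}

def pareto_dominated(outcome):
    """True if some other outcome makes no player worse off and one better off."""
    u = game[outcome]
    return any(
        all(v[i] >= u[i] for i in range(3)) and any(v[i] > u[i] for i in range(3))
        for alt, v in game.items() if alt != outcome
    )

# With the prosecutor as player 3, (confess, confess) is Pareto efficient;
# in the two-player game it was dominated by (not confess, not confess).
assert not pareto_dominated(("confess", "confess"))
```

Dropping the third component of each triplet and rerunning the check reproduces the two-player inefficiency: (2, 2) then dominates (1, 1).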
By including the prosecutor, we can resolve the conflict between individually rational and collectively rational behavior. Under the assumption that the preferences of the prosecutor reflect the preferences of society, and that the suspects, by their criminal behavior, position themselves outside of society, there is no social dilemma anymore. In fact, if we think that the suspects are no longer members of society, the outcome (confess, confess) would be the only efficient one: It gives maximal utility to the prosecutor and thus to the society which he represents. Our discussion about the relevant set of players leads us to a somewhat paradoxical but not really surprising result: The Prisoners' Dilemma is no longer a Prisoners' Dilemma if we include the prosecutor. By discussing the set of players N, we often find features which are essential for interpreting the decision situation. The definition of N is an important instrument of game theory, although often not specifically addressed. Even if the decision situation of a price-setting duopoly were a Prisoners' Dilemma, as assumed in Matrix 1.1, is it really a social trap? Or do consumers profit from more competition on this market due to lower prices, better quality, or both? Who are the relevant players? Who is the society? From a social point of view, not every Prisoners' Dilemma is bad! To a large extent, the interpretation depends on which society is relevant, or on who the people are that define the society by whose standards the result is evaluated. The Prisoners' Dilemma story itself is an illustration that the relevant society and its value system matter. But it is not always obvious what the value system of the society could be. In general, however, game theory deals with individuals and their decisions, and the individuals' preferences motivate their choices. If players represent social values, like the prosecutor, social preferences become relevant. These social preferences are held by individuals.
Another type of social preference becomes relevant if the player is a social entity, e.g., a family, a firm, a government, a state, etc. Then there is the problem of aggregating the preferences of the decision makers within the entity in order to get the payoff function of the collective player. A related problem is to unravel the collective set of strategies, i.e., to determine what the collective entity can do, which plans are feasible and which plans are out of reach, etc. to capture the power of the entity. The aggregation of preferences and the unraveling of the strategy sets are likely to be games in themselves, played inside the collective decision body. Thus, “…intrapersonal strategic conflicts are transformed into interpersonal ones to which game theory can and should be applied” (Güth 1991: 405).
3.6 Then Strike

The widely quoted Chinese military strategist Sun Tzu wrote: "To a surrounded enemy you must leave a way of escape."6 Sun Tzu's ninth-century commentator Tu Mu adds: "Show him there is a road to safety, and so create in his mind the idea that there is an alternative to death. Then strike" (Sun Tzu 1963: 110).7 The recommendation "Then strike" is sometimes omitted in the discussion, but it is a rational consequence of "leaving a way of escape." We will learn that backward induction should allow the surrounded party to deduce that this is their likely fate and therefore inspire them to "fight to death." However, commanders do not always trust that this argument is understood. Again, Sun Tzu's commentator Tu Mu is an inspiring source: "It is a military doctrine that an encircling force must leave a gap to show the surrounded troops there is a way out, so they will not be determined to fight to death. Then, taking advantage of this, strike. Now, if I am in encircled ground, and the enemy opens a road in order to tempt my troops to take it, I close this means of escape so that my officers and men will have to fight to death" (Sun Tzu 1963: 132f). "Fighting to death" is a dominant strategy if the opening of a road of escape is just a trap laid out by the enemy to gain an easy victory. Sun Tzu's text first came to Europe when Father J. J. M. Amiot, a Jesuit missionary to Peking, published his translation in Paris in 1772.8 We should keep this date in mind when we read the following quote taken from Niccolò Machiavelli's Discourses (Book III, Chapter 12), published in 1531.

The ancient commanders of armies, who well knew the powerful influence of necessity, and how it inspired the soldiers with the most desperate courage, neglected nothing to subject their men to such a pressure, whilst, on the other hand, they employed every device that ingenuity could suggest to relieve the enemy's troops from the necessity of fighting.
Thus they often opened the
6Sun Tzu (1963: 109). This quote corresponds with Dixit and Nalebuff (1991: 136) citing Sun Tzu as “When you surround an enemy, leave an outlet free.” We do not know when Sun Tzu lived and wrote his “The Art of War.” Most likely (if at all) he lived in the period of the Warring States (453–221 B.C.). See Samuel Griffith’s extensive introductory comments that accompany his translation of the Sun Tzu text. 7For a sophisticated game-theoretic interpretation of Sun Tzu’s “The Art of War,” see Niou and Ordeshook (1994). 8See Griffith in his preface to Sun Tzu’s text (Sun Tzu 1963: ix).
3 The Prisoners’ Dilemma, but Who Are the Players? 53
way for the enemy to retreat, which they might easily have barred; and closed it to their own soldiers for whom they could with ease have kept it open. (Machiavelli 1882 [1531]: 361)
This sounds very familiar after reading Sun Tzu and later interpretations. But Machiavelli also considers the other side of the fighting and its rationale. A skillful general, then, who has to besiege a city, can judge of the difficulties of its capture by knowing and considering to what extent the inhabitants are under the necessity of defending themselves. If he finds that to be very urgent, then he may deem his task in proportion difficult; but if the motive for resistance is feeble, then he may count upon an easy victory. (Machiavelli 1882 [1531]: 361)
Here “necessity” is the concept that explains why people will fight, attack, or surrender. Machiavelli, again following the ancient Roman historian Titus Livius, makes extensive use of it to explain the essence of successful military strategies when it comes to capturing a city.

…a captain who besieges a city should strive by every means in his power to relieve the besieged of the pressure of necessity, and thus diminish the obstinacy of their defence. He should promise them a full pardon if they fear punishment, and if they are apprehensive for their liberties he should assure them that he is not the enemy of the public good, but only of a few ambitious persons in the city who oppose it. Such a course will often facilitate the siege and capture of cities. (Machiavelli 1882 [1531]: 362)
Of course, there are often wise people among the besieged who understand that those who besiege a city are not necessarily friends of its inhabitants. It would be a stretch for them to believe that the attackers are only a threat to “a few ambitious persons in the city who oppose” the takeover. Note that the strength of the above promise drives a wedge through the besieged population. Those who warn and object fall into the group of the “few ambitious persons” who oppose the takeover. This implies that those who do not oppose expect a brighter future than those who do. Then preaching resistance is not a strictly dominant strategy. Of course, “[A]rtifices of this kind are quickly appreciated by the wise, but the people are generally deceived by them. Blinded by their eager desire for present peace, they do not see the snares that are concealed under these liberal promises, and thus many cities have fallen into servitude” (Machiavelli 1882 [1531]: 362).
Machiavelli gives a number of cases which support this observation. Here is a historical example, which, however, shows that such a promise still might have an effect, despite its strategic “inconsistency” pointed out by game-theoretical thinking and Machiavelli’s reasoning. This was the case with Florence in our immediate times, and in ancient times with Crassus and his army. Crassus well knew that the promises of the Parthians were not to be trusted, and that they were made merely for the purpose of removing from the minds of the Roman soldiers the impression of the necessity of defending themselves. Yet so blinded were these by the offers of peace that had been made by the enemy, that Crassus could not induce them to make a vigorous resistance. (Machiavelli 1882 [1531]: 362)
However, the opening of a road of escape can be a credible strategy of the attacker when resistance is getting too strong and the siege is too costly. Of course, this strategy can only be successful if the besieged party is quite sure that the opening is not a trick and a deadly strike will follow. If the attackers are strong enough for a siege, but too weak for a battle, then this expectation seems to be justified. However, this is an unlikely case. Still, C. Manilius had led his army against the Veientes, and, a part of the troops of the latter having forced a passage into his intrenchments, Manilius rushed with a detachment to the support of his men, and closed up all the issues of his camp, so that the Veientes could not escape. Finding themselves thus shut in, they began to combat with such desperate fury that they killed Manilius, and would have destroyed the rest of the Roman army if one of the Tribunes had not the sagacity to open a way for them to escape. This shows that the Veientes, when constrained by necessity, fought with the most desperate valor; but when they saw the way open for their escape, they thought more of saving themselves than of fighting. (Machiavelli 1882 [1531]: 363)
Now that we are familiar with the strategy of opening a road of escape and the possible counter-actions, we find ample examples in politics and in the economy to which this structure applies. Often it turns out that both parties have a strictly dominant strategy and the outcome will be inefficient. The above examples demonstrate that there is often a third agent, a general or a tribune, who arranges the situation such that a strictly dominant strategy of “fighting to the end” prevails, just as the district attorney designed the decision situation of the two suspects such that “confess” became the dominant strategy.
In the news of April 22, 2017, an officer of the Iraqi army said that their troops were attacking the city of Mosul from three sides, giving the IS fighters the possibility to take flight or to surrender.
3.7 Tosca’s Dominant Strategy

The comparison of the quotes by Sun Tzu and by Niccolò Machiavelli shows an amazing similarity. If the history of publication and translation is accurate, then Machiavelli could not have known Sun Tzu’s text. And yet, is the similarity really that amazing? Are the recipes given and the conclusions presented by the two authors not straightforward, once we assume rational behavior for the various parties involved in war and war-like situations? If you have a strictly dominant strategy, then choose it, irrespective of what you expect the other agents to do. The conclusion also extends to situations where only one of two players has a dominant strategy. However, a higher degree of rationality is required for the second player to identify the strictly dominant strategy of the first player and then choose a best reply to it. The second player has to assume that the first player is rational and can identify his or her strictly dominant strategy. Moreover, the second player has to pick his best reply, given the dominant strategy of the other player. Matrix 3.7 illustrates a corresponding situation. Note that the decision situation does not represent a Prisoners’ Dilemma, as only one of the two players has an unconditional strictly dominant strategy.

Matrix 3.7 Defend or not defend? (Rows: player 1’s strategies “allow escape” and “do not allow”; columns: player 2’s strategies “defend” and “not defend.” The payoff entries are not reproduced here.)
Please check whether it is obvious that player 2 will choose strategy “defend.” Given rationality, and given that player 2 assumes that player 1
is rational, then this result should be obvious. Of course, we may doubt whether the payoffs of player 2 make sense. However, this is a question that cannot be answered by game theory. Numerous classroom tests which the first author carried out at the Universities of Aarhus, Catania, Hamburg, Helsinki, and Paris (Sorbonne) showed that almost all students who participated chose their dominant strategy when they had to decide from the perspective of the corresponding player, similar to player 1 choosing “allow escape” in Matrix 3.7. Yet, many students failed to assume that their counterpart would act accordingly when the counterpart had a strictly dominant strategy but they themselves did not. They chose “not defend,” even after ten hours of introduction to game theory. Note that player 2 receives the smallest payoff for “not defend” when player 1 chooses the dominant strategy “allow escape.” A player 2 deciding in favor of “not defend” shows that he does not think strategically. Yet, if he put himself in the place of player 1, he would be aware of having the dominant strategy “allow escape.” Consequently, he would expect that player 1 chooses this strategy. Following such thinking, he would select the strategy “defend” as his best reply, instead of “not defend.” Many participants in these tests and, alas, also in game theory exams seem to be unable to put themselves in the place of their fellow players. Thus, they do not live up to what the application of game theory is all about. This is partly because they do not recognize the strategic decision problem represented by the payoff matrix.

Of course, the game model, i.e., the game-theoretical mapping of reality, does not fully represent the decision situation as we find it in the real world. Every model is necessarily an abstraction: It represents reality only imperfectly—otherwise it would not be a model. 
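The two-step reasoning expected of player 2 can be sketched in a few lines of code. The payoff numbers below are purely illustrative assumptions (the entries of Matrix 3.7 are not reproduced here); they are chosen only so that, as in the text, player 1 has a strictly dominant strategy while player 2 does not:

```python
# Sketch of the two-step reasoning for a Matrix-3.7-style game.
# Payoff numbers are illustrative assumptions, not the book's values.
P1 = {  # payoffs of player 1, indexed P1[s1][s2]
    "allow escape": {"defend": 3, "not defend": 4},
    "do not allow": {"defend": 1, "not defend": 2},
}
P2 = {  # payoffs of player 2, indexed P2[s1][s2]
    "allow escape": {"defend": 2, "not defend": 0},
    "do not allow": {"defend": 1, "not defend": 1},
}

def strictly_dominant(payoffs, own, others):
    """Return the strictly dominant strategy among `own`, if any."""
    for s in own:
        if all(payoffs[s][t] > payoffs[r][t]
               for r in own if r != s
               for t in others):
            return s
    return None

# Step 1: player 1's strictly dominant strategy.
s1 = strictly_dominant(P1, list(P1), ["defend", "not defend"])
# Step 2: player 2 assumes player 1 is rational, anticipates s1,
# and picks a best reply to it.
s2 = max(["defend", "not defend"], key=lambda t: P2[s1][t])
print(s1, s2)  # -> allow escape defend
```

A player 2 who skips step 2 and looks only at his own payoff row is exactly the student who chooses “not defend” in the classroom tests.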
Often, people who are not familiar with applying game theory fail to set aside real-world information and prejudices that support their intuition. Consequently, they misinterpret the game model and draw conclusions that are supported neither by the model nor by game theory. Heap and Varoufakis (1995: 147) describe a Prisoners’ Dilemma as depicted in Puccini’s opera Tosca. In the opera, the police chief, called Scarpia, lusts after Tosca. He has an opportunity to pursue his lust because Tosca’s lover has been arrested and condemned to death. This enables Scarpia to offer to fake the execution of Tosca’s lover if she agrees to submit to his advances. Tosca agrees, and Scarpia orders blanks to be substituted for the bullets of the firing squad. However, as they embrace, Tosca stabs and kills Scarpia. Unfortunately, Scarpia has also defected from the arrangement, as some bullets were real.
What could be socially beneficial—collectively rational—in this case? Tosca not stabbing Scarpia and Tosca’s lover surviving? Is stabbing Scarpia a dominant strategy? If so, then Scarpia should have taken care that Tosca had no knife with her when he embraced her. It seems more obvious that killing Tosca’s lover is a dominant strategy and feasible. Tosca should have understood this, rejected the agreement, and stabbed the police chief at the first possible opportunity. However, perhaps there was no such opportunity.
References

Bartko, J. E., & Bunzel, R. (2008). Discovery lessons. The Recorder (Autumn). www.callaw.com.
Diekmann, A., & Przepiorka, W. (2016). “Take one for the team!” Individual heterogeneity and the emergence of latent norms in a volunteer’s dilemma. Social Forces, 94, 1309–1333.
Dixit, A. K., & Nalebuff, B. J. (1991). Thinking strategically: The competitive edge in business, politics, and everyday life. New York and London: Norton.
Economist. (2009, November 28). Suspended animation: A special report on the art market. The Economist, pp. 1–16.
Güth, W. (1991). Game theory’s basic question—Who is the player? Examples, concepts and their behavioral relevance. Journal of Theoretical Politics, 3, 403–435.
Heap, S. P. H., & Varoufakis, Y. (1995). Game theory: A critical introduction. New York: Routledge.
Heller, J. (1961). Catch-22. London: Corgi Books.
Hobbes, T. (1996 [1651]). Leviathan (R. Tuck, Ed., Revised Student ed.). Cambridge: Cambridge University Press.
Luce, R. D., & Raiffa, H. (1957). Games and decisions. New York: Wiley.
Machiavelli, N. (1882 [1531]). Discourses on the first ten books of Titus Livius. In The historical, political, and diplomatic writings of Niccolò Machiavelli (C. E. Detmold, Trans., 4 Vols.). Boston: James R. Osgood and Co.
Niou, E., & Ordeshook, P. C. (1994). A game-theoretic interpretation of Sun Tzu’s The Art of War. Journal of Peace Research, 31, 161–174.
Sun Tzu. (1963). The Art of War (S. B. Griffith, Trans.). Oxford: Oxford University Press.
Taylor, M. (1976). Anarchy and cooperation. London: Wiley.
Tuck, R. (1989). Hobbes: A very short introduction. Oxford: Oxford University Press.
4 The Nash Equilibrium
... is the general solution concept for non-cooperative games. By general we mean the following: Many game theorists postulate that, for a non-cooperative game, only a Nash equilibrium qualifies to define the outcome. This is because in a Nash equilibrium no player has an incentive to behave differently, i.e., to opt for another strategy than the one provided for in the equilibrium strategy vector. One should add: given the strategies of the other players. However, is this criterion sufficient to prefer the Nash equilibrium to other solution concepts (e.g., the Maximin Solution)? We will return to this question. Why are we confronted with the question of the appropriate solution concept in the first place? Not every game has an equilibrium in dominant strategies; the outcome is not always as convincing as in the Prisoners’ Dilemma. In the Chicken Game, for instance (cf. Sect. 4.3), neither of the two players has a strictly dominant strategy, not even a weakly dominant one. In order to evaluate the outcome of such a game and help the players choose appropriate strategies, we need another solution concept. The Nash equilibrium offers this.
4.1 On the Definition of the Nash Equilibrium

In his Ph.D. thesis in mathematics, first published in 1950 on one page, John Nash proved that every game with a limited number of players, in which players have a limited number of pure strategies, has at least one equilibrium.1

1Nash (1950a, 1951) contains two versions of the proof.
For his proof to hold, the number of players and strategies may be very large, but not infinite. More specifically, the proof applies to finite games only, i.e., to games in which each player has only a finite number of pure strategies. For instance, a game which defines the winner by calling the largest natural number is not finite, as the set of natural numbers is not bounded. But note that an infinite game can have an equilibrium, too. The equilibrium concept for which Nash gave an existence proof is now called the Nash equilibrium. It describes a strategy vector such that no player is motivated to choose an alternative strategy, given the (equilibrium) strategies of the other players. Note that such a strategy vector may also contain mixed strategies.2 The Nash equilibrium can be defined in a more formalized manner: A Nash equilibrium is a strategy vector s* = (s1*,…,si*,…,sn*) such that no player i has an incentive to choose an alternative strategy si, differing from si*, given the equilibrium strategies of the other n – 1 players. In the case of two players, the strategy pair (s1*, s2*) is a Nash equilibrium if neither player 1 wants to choose a strategy s1 that is different from s1*, given s2*, nor does player 2 want to choose a strategy s2 that is different from s2*, given s1*. Taking into account that we can express positive, negative, better, or worse incentives by utilities, the definition of the Nash equilibrium can be formalized by means of utility functions u1(·) and u2(·). In the case of two players, the equilibrium conditions read

u1(s1*, s2*) ≥ u1(s1, s2*) for all s1 in S1,
u2(s1*, s2*) ≥ u2(s1*, s2) for all s2 in S2.

S1 and S2 describe the strategy sets of players 1 and 2, respectively. Irrespective of the definition you choose, you come to the following conclusions: (i) An equilibrium strategy si* is the “right choice” with regard to the given strategies of the other players. 
The latter restriction is very important although not always taken into consideration when applying the Nash equilibrium concept.
2Chapter 10 describes at length the concept of mixed strategies. As we will see, a mixed strategy means that a player selects a pure strategy with a probability smaller than 1.
Matrix 4.1 A game with two Nash equilibria (rows: player 1’s strategies L(eft) and R(ight); columns: player 2’s strategies U(p) and D(own). The payoff entries are not reproduced here.)
The game in Matrix 4.1 has two Nash equilibria (in pure strategies), the strategy pairs (L,U) and (R,D). The second equilibrium does not look very convincing as a description of an outcome. However, R is a “best reply” of player 1 if player 2 chooses D, and D is a “best reply” of player 2 if player 1 chooses R. The trouble is that U is also a best reply of player 2 if player 1 chooses R. Later in this book we will learn how game theory helps us to get rid of the less convincing equilibrium (R,D). Talking about “best reply,” we deviate somewhat from common usage, which provides for one best answer only. In Matrix 4.1, both strategies U and D are best answers to R. But the equilibrium (R,D) represents a pair of mutually best answers. R is not a best answer to U; thus (R,U) is not a Nash equilibrium. (ii) An equilibrium strategy si* is a best reply to the given strategies of the other players. Thus, we can define the Nash equilibrium as a strategy vector containing mutually best replies for all players. (iii) The game in Matrix 4.1 shows that a best answer does not necessarily give higher utility than another strategy, but it must not place the respective player in a worse situation. The conclusion is: A player may have more than one best reply to the strategies of the other players. However, all of them must give him the same utility, given the strategies of the others. (iv) From conclusion (ii) above it follows: Every equilibrium in dominant strategies is a Nash equilibrium. The converse does not hold, as the equilibrium (R,D) shows. (However, L and U are only weakly dominant strategies.) (v) The Nash equilibrium satisfies the assumptions of common knowledge of rationality (CKR) and of consistent-aligned beliefs (CAB) as suggested in Sect. 4.6.
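The mutual-best-reply test in (ii) can be run mechanically. In the sketch below, the payoff numbers are illustrative assumptions (the entries of Matrix 4.1 are not reproduced here), chosen only to match the properties described above: (L,U) and (R,D) are the equilibria, and player 2 is indifferent between U and D when player 1 plays R.

```python
# Pure-strategy Nash equilibria of a 2x2 bimatrix game, found by
# checking mutually best replies (weak inequalities, as in the text).
# Payoffs are illustrative assumptions mimicking Matrix 4.1.
U1 = {("L", "U"): 2, ("L", "D"): 1, ("R", "U"): 1, ("R", "D"): 1}
U2 = {("L", "U"): 2, ("L", "D"): 1, ("R", "U"): 1, ("R", "D"): 1}
S1, S2 = ["L", "R"], ["U", "D"]

def nash_equilibria(U1, U2, S1, S2):
    eqs = []
    for s1 in S1:
        for s2 in S2:
            best1 = all(U1[(s1, s2)] >= U1[(r, s2)] for r in S1)
            best2 = all(U2[(s1, s2)] >= U2[(s1, c)] for c in S2)
            if best1 and best2:  # mutually best replies
                eqs.append((s1, s2))
    return eqs

print(nash_equilibria(U1, U2, S1, S2))  # -> [('L', 'U'), ('R', 'D')]
```

Note that (R,U) fails the test only on player 1’s side: U is a best reply to R, but R is not a best reply to U, exactly as argued above.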
4.2 Historical Note II: Nash and the Nash Equilibrium3

In addition to proving the existence of an equilibrium for finite games, Nash (1950b, 1953) gave a definition of the bargaining problem and suggested a solution to it; this solution is rather popular among economists. It is an essential contribution to cooperative game theory, i.e., to situations in which the rules of the game allow players to make binding agreements. Chap. 12 contains an introduction to it. Nash (1953) discussed the potential of non-cooperative games—more specifically, the Nash equilibrium—to determine the result proposed by cooperative games. Today this approach is known as the Nash program (see Sect. 12.4 for details). One of its applications is mechanism design: to create a set of rules (i.e., an institution, convention, etc.)—a game form?—such that self-interested rational behavior results in an outcome proposed by a cooperative game solution. In general, the latter implies the objective of social efficiency. The 2007 Nobel Prize for Economics was awarded for outstanding contributions to mechanism design theory. The winners were Leonid Hurwicz of the University of Minnesota, Eric Maskin of the Institute for Advanced Study at Princeton, and Roger Myerson of the University of Chicago. The application of the Nash equilibrium was pivotal for their work. The Nash equilibrium is certainly the most significant solution concept of non-cooperative game theory. By now, it is part of the standard toolset of economics and is even used in political science, philosophy, and sociology when game theory is applied. In economics, this concept has become so common that some people have forgotten that its name derives from a person. John Nash, born in 1928, suffered from recurrent schizophrenia from 1959 onwards; however, he recovered in the early 1990s. His important research work on game theory was published in the early 1950s. Four decades are a long stretch of time; Nash as a person was hardly noticed anymore. 
Then, in 1994, he received the Nobel Prize for economics, together with John C. Harsanyi and Reinhard Selten, for his outstanding and very important contributions to the theory of non-cooperative games. Nash’s life was interpreted in the movie “A Beautiful Mind,” based on a biography of the same name by Sylvia Nasar (published in 1998); he became a cult figure. Sadly, on May 23, 2015, Nash and his wife Alicia were killed in a car accident on their way home to West Windsor Township, New Jersey.

3For Historical Note I, see Sect. 2.1.
This happened on the ride from the airport after a visit to Norway, where Nash had received the prestigious Abel Prize for his work in mathematics. Their taxi driver lost control of the vehicle and struck a guardrail. Nash and his wife were ejected from the car; it appeared that neither passenger had been wearing a seatbelt. In modern textbooks on microeconomics, references to the Nash equilibrium are ubiquitous. Readers with basic knowledge of economic theory understand at once that the classic Cournot solution of an oligopoly can be interpreted as a Nash equilibrium. There is even the term Cournot-Nash equilibrium. The Bertrand solution, Heinrich von Stackelberg’s asymmetry solution for a homogeneous duopoly, and Hotelling’s result for price competition in a spatial duopoly all represent Nash equilibria. It is less common to describe the Walras equilibrium or the monopoly solution as Nash equilibria, as these market situations are not characterized by a strategic decision situation in a game-theoretical sense. And yet, these solutions are Nash equilibria as defined above: none of the economic actors has an incentive to change his behavior, given the assumed or observed behavior of the others.
4.3 Nash Equilibria and the Chicken Game

In order to illustrate the Nash equilibrium concept and its definitions, we discuss the Chicken Game as illustrated by Matrix 4.2. Here “def” (defensive) means the strategy “getting out of the way” and “agg” (aggressive) means “not getting out of the way.”

Matrix 4.2 The Chicken Game (payoff pairs (A, B), as stated in the text below):

                    Player B
                    def        agg
  Player A   def   (2, 2)     (1, 3)
             agg   (3, 1)     (0, 0)
The story of the Chicken Game derives from the movie “Rebel Without a Cause” starring James Dean. The film, set in the 1950s, shows the rivalry for leadership in a gang of teenagers. Two candidates face a test of courage.4 They climb into their big Buicks and drive toward each other at high speed. The one getting out of the other’s way loses. He is the chicken, while the other guy will be the hero and future gang leader. If neither one makes way, the test ends with a crash and a possibly tragic result. If both get out of the way, there is neither a chicken nor a hero—and the group has no leader. Would you make way if you were player A, or would you try to become a hero? The game in Matrix 4.2 illustrates many market situations. In general, they are characterized by the fact that at least one of the two market sides has a small number of actors and thus represents a strategic decision situation. This holds for oligopolies or, rather, oligopolistic market forms. In Chap. 1, we mentioned the dramatic battle raging between Microsoft and Netscape in 1996. Each of the two suppliers of browser programs wanted to provide us with our entry into the Internet. The stakes in this fight were clear: The winner would most likely earn billions of dollars, while the loser would probably disappear from the market. However, if both wanted to win “at any price,” this might lead to a struggle with large losses for both suppliers. Looking at the (price) strategies chosen, such an outcome was possible. Microsoft offered its Internet Explorer program free of charge, while Netscape charged $79 for its more widely used Navigator program. That was far less than the toner for our laser printer cost in 1996. However, it seemed that the “battle for the internet” was decided not by prices but by the capacity to invest continuously, bringing ever more clever products to the market. 
Thus, previous versions of the products, one’s own as well as the competitor’s, quickly became outdated. It goes without saying that the size of the development potential is determined by the prices paid for products at earlier stages. Let us come back to Matrix 4.2. The players have no dominant strategy. Applying the definition of the Nash equilibrium to this game, we see that both (agg, def) and (def, agg) are Nash equilibria. In (agg, def), player A chooses the strategy “agg” and player B chooses “def.” The corresponding payoffs are 3 (for A) and 1 (for B). This is the highest payoff which A can
4In the movie, the braggart Buzz provokes his rival Jim (James Dean) to a test of courage: The two drive at high speed toward a cliff. The one who gets out first is a chicken. Buzz crashes down and is dead. Jim survives. We find driving toward each other more compelling as a duel.
receive in this game. Thus, for A, it is out of the question that he or she could be better off by an alternative strategy choice. Regarding B, this question seems justified, as the payoff 1 does not look very high. However, if B chooses “agg” instead of “def,” given player A’s strategy “agg,” B will receive only the payoff 0, i.e., B’s payoff shrinks. We conclude that the strategy pair (agg, def) is a Nash equilibrium. Starting from (agg, def), player B could reach a payoff higher than 1, namely a payoff of 2, if player A chose “def” instead of “agg.” When checking whether (def, def) is a Nash equilibrium, we ask the following question: Does a player have an incentive to change his strategy, given the strategy of the other? Given (def, def), we see that both players have an incentive to choose “agg,” given the strategy of the other player. The strategy pair (def, def) is thus not a Nash equilibrium. On the other hand, if we test the strategy pair (def, agg), we see that it represents a Nash equilibrium: Neither of the players has an incentive to change his behavior, given the other player’s decision. It is also not surprising that each of the two players has an incentive to change his behavior if (agg, agg) is the starting point, given the other player’s strategy. If Microsoft were sure that Netscape was aiming to succeed with its browser “at any price,” Microsoft would certainly refrain from investing large amounts of money into its own browser; such sums would only be justified if there were a chance to gain market dominance. This result holds if the strategies and payoffs in Matrix 4.2 adequately reflect the strategic decision situation in which Microsoft and Netscape see themselves. Of course, it is advantageous for Netscape to convince Microsoft that it (Netscape) would accept whatever losses in order to gain market dominance. 
Microsoft, however, being aware of this, would not believe a mere announcement by Netscape that it will choose strategy “agg.” However, if Netscape succeeds in committing itself to strategy “agg,” i.e., reducing its set of strategies to this one, the decision situation changes considerably—in favor of Netscape. So, the reduction of strategies can be profitable. This is like burning one’s own ships or destroying bridges to block the possibility of withdrawal or even escape. We shall come back later to mechanisms of self-commitment, or of binding oneself. If both players, starting from the strategy pair (agg, agg), change their behavior, they reach the strategy pair (def, def) and the payoff pair (2,2) results, a considerable improvement compared to the payoffs (0,0) received for the strategy pair (agg, agg). But the strategy pair (def, def) is not a Nash equilibrium, as we have already argued.
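Since the text states the payoffs of Matrix 4.2 ((2,2) for (def, def), (0,0) for (agg, agg), and 3 and 1 for the asymmetric pairs, with (def, agg) filled in by the symmetry of the game), the equilibrium claims can be verified mechanically; a minimal sketch:

```python
# Best-reply check for the Chicken Game of Matrix 4.2, using the
# payoff pairs (A, B) stated in the text.
PAY = {("def", "def"): (2, 2), ("def", "agg"): (1, 3),
       ("agg", "def"): (3, 1), ("agg", "agg"): (0, 0)}
S = ["def", "agg"]

def is_nash(sA, sB):
    # Nash equilibrium: neither player gains by a unilateral deviation.
    no_dev_A = all(PAY[(sA, sB)][0] >= PAY[(r, sB)][0] for r in S)
    no_dev_B = all(PAY[(sA, sB)][1] >= PAY[(sA, c)][1] for c in S)
    return no_dev_A and no_dev_B

for profile in PAY:
    print(profile, is_nash(*profile))
# only ('agg', 'def') and ('def', 'agg') pass the test
```

The check reproduces the argument of the text: from (def, def) each player gains by switching to “agg” (payoff 3 instead of 2), and from (agg, agg) each gains by switching to “def” (1 instead of 0), so only the two asymmetric pairs survive.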
Let us summarize: The Chicken Game in Matrix 4.2 has two Nash equilibria, the strategy pairs (agg, def) and (def, agg). We should add “in pure strategies” because, as we shall see in Chap. 10, there also exists an equilibrium in mixed strategies for this game. This holds for all games that have the strategic features of Matrix 4.2 and are therefore called Chicken Games. The equilibria we discussed result from a thought experiment imputed to the players. The Chicken Game does not provide for the players to revise their strategies when they find out that they could be better off by choosing another strategy. At best, they could be angry about the missed chance to choose a “best reply” to the strategy of the other player, but this is not taken into consideration on the abstract level. (But there are decision models that refer to minimizing regret.) A Nash equilibrium typically shows an outcome where none of the players, given the other players’ strategies, has to regret his strategy choice. This also means that the expectations formed with regard to the decisions of the other players are confirmed by the Nash equilibrium. One’s own strategy choice is also confirmed by the Nash equilibrium, as it is the best response to the given expectations described by the equilibrium. This implies consistent expectations. However, these consistent-aligned beliefs might not satisfy Lord Henry. In Oscar Wilde’s The Picture of Dorian Gray, Lord Henry observes: “Faithfulness is to the emotional life what consistency is to the life of the intellect—simply a failure” (Wilde 1997 [1890]: 37). This sounds rather hypocritical, but for the player who only receives a payoff of 1, it is hardly comforting that mutually best replies are chosen in the corresponding equilibrium. Besides, players cannot be sure that the resulting outcome concurs with an equilibrium and that the choices are best replies to each other. 
In general, when there is more than one equilibrium, the coordination problem of strategy selection in simultaneous choices cannot be solved convincingly with reference to the Nash equilibria. Because of the coordination problem, and in order to avoid (agg, agg) and the corresponding 0-payoffs, we might expect that both players choose “def.” This concurs with the Maximin Solution of the game, which is, however, not a Nash equilibrium and does not satisfy common knowledge of rationality (CKR) and consistent-aligned beliefs (CAB). However, in Sect. 5.9 we will learn that the Theory of Moves supports this result. On the other hand, if, e.g., a rational player A believes that player B will choose “def,” then A should choose “agg.” Again, this demonstrates that beliefs “rule the game.” Note that the outcome (def, def) and the two Nash equilibria in pure strategies are Pareto efficient. In this case, the efficiency criterion is no suitable device to solve the coordination problem that has its root in the
autonomous decision making of individuals. The game-theoretical analysis of the Chicken Game demonstrates that this game describes a rather complex decision situation. We cannot suggest any particular outcome for this game. It is one of the merits of game theory that it accurately reveals the complexity if the real decision situation is burdened with complexity. It teaches us either to become familiar with contradictions or to create a less complex game situation, i.e., to redesign the game.
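The Maximin Solution mentioned in the discussion above can likewise be computed from the payoffs of Matrix 4.2: each player picks the strategy whose worst-case payoff is largest. A sketch for player A, using the Chicken payoffs stated in the text:

```python
# Maximin strategy for player A in the Chicken Game: choose the
# strategy with the largest worst-case payoff (payoffs of Matrix 4.2).
A_PAY = {"def": {"def": 2, "agg": 1},   # A's payoff, by B's choice
         "agg": {"def": 3, "agg": 0}}

def maximin(pay):
    # Worst case of "def" is 1, worst case of "agg" is 0,
    # so the maximin choice is "def".
    return max(pay, key=lambda s: min(pay[s].values()))

print(maximin(A_PAY))  # -> def
```

By the symmetry of the game, player B’s maximin choice is also “def,” so the Maximin Solution is (def, def), which, as argued above, is not a Nash equilibrium.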
4.4 Inefficient Equilibria in the QWERTY-DSK Game

Standardization is a means of solving coordination problems. Standardization is ubiquitous; there are formal and informal standards. Driving on the right side of the road is such a standard, supported by the law; this is a formal standard. Letting elderly people pass first was an informal standard established by convention. Similarly, there were times when men kept doors open for women—by convention—and women accepted this. There is a standardization industry involving national and international institutions such as DIN, AFNOR, BSI, ANSI, CEN, CENELEC, and ISO. Their output comprises numerous explicit standards, most of them optional. Their implementation is the result of the design of the social, economic, or political environment and of individual decisions. In order to illustrate the strategic problem of standardization, and to demonstrate the power and problems of the Nash equilibrium as a solution concept for strategic decision making, we will exploit a rather dated story which once stirred up a substantial discussion. Looking at your computer keyboard, you will notice that the line below the numbers starts with the letters Q, W, E, R, and T. On the German keyboard, the next letter is Z; on the English one it is Y. In the latter case, this reads QWERTY, and QWERTY has become the grammalogue for the standard of how the keys bearing so-called Latin letters have been arranged on almost 99 percent of all keyboards of typewriters and computers. For some time, QWERTY was used as a symbol for inefficient standards. Again and again it was quoted in daily newspapers as well as in scientific publications in order to demonstrate that the market can fail to coordinate supply and demand efficiently. In his seminal paper “Clio and the Economics of QWERTY,” Paul David (1985) argued that there are much better standards than QWERTY, e.g., the Dvorak Simplified Keyboard (DSK), which allows us to type much faster. 
To use such an alternative could be advantageous for everybody involved in typing, e.g., the typists and their employers.
David quoted from a study made by the US Navy which proved that only ten days of working with DSK were sufficient to amortize all training and retraining costs. This holds under the assumption that the typist is always working at full capacity. On the basis of such figures, Liebowitz and Margolis (1990) came up with an estimated yield of 2200 percent for an investment in DSK. Why was DSK not offered as the standard? Why was this book written on a QWERTZ keyboard? The established answer is: We are trapped by the inefficient QWERTY standard. How should we learn to write on DSK when the keyboards are tied to QWERTY? Will a keyboard producer succeed by offering DSK, given that more or less all users are adapted to or were even trained on QWERTY? We know that especially semi-professional writers, like most professors of economics, who keep looking at the keys when typing, easily suffer from the slightest modification of the keyboard. In any case, it is more than doubtful whether a DSK producer would be successful. Most likely, the QWERTY-trained typist would resort to his or her familiar equipment. It seems that only massive support from outside, e.g., by schools, government, and industry associations, would make a change from QWERTY to DSK feasible. Do you agree? The strategic decision problem which user and producer are facing, without assistance from outside, can be illustrated by the game in Matrix 4.3. Player A is a “user” and player B a “producer.” The payoffs given in the matrix (expressing the respective utilities) are defined only relative to each other for each player. The absolute values have no meaning, and it does not make sense to compare the values of one player with the values of the other.

Matrix 4.3 The QWERTY-DSK game Player B
QWERT