Scissors and Rock: Game Theory for Those Who Manage 3030448223, 9783030448226

This book introduces readers to basic game theory as a tool to deal with strategic decision problems, helping them to un


English Pages 274 [268] Year 2020



Table of contents :
Scissors and Rock
Preface: Introduction and Warnings
Contents
1 Playing for Susan
1.1 Thinking Strategically
1.2 Why not Learn Game Theory?
1.3 The Working of the Invisible Hand
1.4 The Real World and Its Models
1.5 Winner-Takes-It-All and the Chicken Game
1.6 The Essence of Game Theory, the Brain, and Empathy
1.7 Strategic Thinking that Failed—Perhaps
References
2 No Mathematics
2.1 Historical Note I: The Pioneers
2.2 The Concept of Sets
2.3 Prices and Quantities
2.4 From Set to Mapping and Function
2.5 Utilities, Payoff Functions, and Strategy Vectors
2.6 Monkeys Write Shakespeare, but Where Is Hamlet?
References
3 The Prisoners’ Dilemma, but Who Are the Players?
3.1 From Game Form to Payoff Matrix
3.2 Equilibrium in Dominant Strategies
3.3 Catch-22 and Other Social Traps
3.4 Ways Out of the Dilemma
3.5 Who Are the Players?
3.6 Then Strike
3.7 Tosca’s Dominant Strategy
References
4 The Nash Equilibrium
4.1 On the Definition of the Nash Equilibrium
4.2 Historical Note II: Nash and the Nash Equilibrium
4.3 Nash Equilibria and Chicken Game
4.4 Inefficient Equilibria in the QWERTY-DSK Game
4.5 Who Are the Players in the QWERTY-DSK Game?
4.6 Nash Equilibria in Kamasutra Games
References
5 Sequence of Moves and the Extensive Form
5.1 The Shrinking of the Event Matrix
5.2 Sequential Structure and Chicken Game
5.3 Extensive Form and Game Tree
5.4 Information: Perfect, Imperfect, Complete, and Incomplete
5.5 Perfect Recall Missing
5.6 The Battle of the Sexes
5.7 What Is a Strategy?
5.8 Sharing a Cake
5.9 Theory of Moves
References
6 Chaos, Too Many and Too Few
6.1 The El Farol Problem or “Too Many People at the Same Spot”
6.2 Self-referential Systems
6.3 Solutions to the El Farol Problem
6.4 Market Congestion Game
6.5 Viruses for Macintosh
6.6 The Volunteer’s Dilemma
References
7 Which Strategy to Choose?
7.1 Nash Equilibrium and Optimal Strategy
7.2 Equilibrium Choice and Trembling Hand
7.3 Trembling Hand Perfection and Market Congestion
7.4 Rationalizable Strategies
References
8 Step-by-Step: The Subgame-Perfect Equilibrium
8.1 Market Entry Game with Monopoly
8.2 Backward Induction and Optimal Strategies
8.3 The Ultimatum Game
8.4 Social Trust and the Stag Hunt Game
8.5 How Reciprocity Works
References
9 Forever and a Day
9.1 The Competition Trap Closes
9.2 Iterated Prisoners’ Dilemma and the “Ravages of Time”
9.3 The Competition Trap Breaking Down
9.4 Robert Axelrod’s “Tournament of Strategies”
9.5 “The True Egoist Cooperates.”—Yes, but Why?
9.6 The Folk Theorem and “What We Have Always Known”
References
10 Mixed Strategies and Expected Utility
10.1 From Lottery to Expected Utility
10.2 The Allais Paradox and Kahneman-Tversky
10.3 Optimal Inspection in Mixed Strategies
10.4 Maximin Solution and the Inspection Game
10.5 Chicken Game Equilibria and Maximin Solution
10.6 Miller’s Crucible and the Stag Hunt Game
10.7 Zero-Sum Games and Minimax Theorem
10.8 The Goalie’s Anxiety at the Penalty Kick
10.9 Scissors and Rock
References
11 More Than Two Players
11.1 The Value of Coalitions
11.2 The Core
11.3 Network Games
11.4 Epilogue to the Core and Other Bargaining Solutions
11.5 Competition and Cooperation in the Triad
References
12 Bargaining and Bargaining Games
12.1 The Bargaining Problem and the Solution
12.2 Rubinstein Game and the Shrinking Pie
12.3 Binding Agreements and the Nash Solution
12.4 Properties, Extensions, and the Nash Program
References
13 Goethe’s Price Games, Auctions, and Other Surprises
13.1 The Story of a Second-Price Auction
13.2 The Price-Setting Goethe
13.3 Optimal Strategies in Auctions and the Revenue Equivalence Theorem
13.4 All-Pay Auction, Attrition, and Pyrrhic Victory
13.5 Who Likes to Pay High Prices?
References
Index

Manfred J. Holler · Barbara Klose-Ullmann

Scissors and Rock Game Theory for Those Who Manage


Manfred J. Holler University of Hamburg Hamburg, Germany

Barbara Klose-Ullmann Center of Conflict Resolution Munich, Germany

ISBN 978-3-030-44822-6
ISBN 978-3-030-44823-3 (eBook)
https://doi.org/10.1007/978-3-030-44823-3

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface: Introduction and Warnings

Analytical statistics claims that there are two ways to make wrong decisions: a correct hypothesis is rejected or, alternatively, an incorrect hypothesis is accepted. In this book, you will learn about a third type of wrong decision and how to handle it. The essence of this type of failure is that decision makers either ignore that the results of their decisions depend on the decisions of others or cannot deal with this interdependency. The reason for the latter could be the complexity of the decision situation. However, it could also be the result of a lack of tools. Game theory is such a tool. It helps to understand the complexity of such decisions, and in many cases, it filters out inadequate decisions.

International politics, parlor games like Chess, and the schoolyard game Rock-Scissors-Paper exhibit decision situations in which the results of decision making depend on the choices of more than one decision maker. The managing of game theory can support the managing of decision situations when decisions are interdependent and strategic reasoning is required, i.e., putting oneself into the shoes of the other. It is also of help in the designing and redesigning of decision situations, i.e., "changing the game," known more formally as mechanism design. The design of auctions is just one example; the writing of a constitution is another one. Obviously, mechanism design is an important instrument for politicians and business managers. However, it is also relevant for everybody who manages decision situations—which includes most of us. Game theory is the key. This is the focus of the present book.

The book has three major heroes: Niccolò Machiavelli, Adam Smith, and George Washington. In fact, Washington accomplished what Adam Smith suggested on the last page of his Wealth of Nations:


“If any of the provinces of the British empire cannot be made to contribute toward the support of the whole empire, it is surely time that Great Britain should free herself from the expence of defending those provinces in time of war, and of supporting any part of their civil or military establishments in time of peace, and accommodate her future views and design to the real mediocrity of her circumstances” (Smith 1981[1776/77]: 947).

King George III and his government did not follow Smith's recommendation, and most of the American colonies became independent after the War of Independence. We will not discuss George Washington any further in this book, but he is our prototype of "the man who managed." Much of what follows can be applied to his life and career.

In 1740, Voltaire arranged for the publication of "The Refutation of Machiavelli's Prince or Anti-Machiavel," written by Frederick of Prussia, probably the most prominent Machiavelli critic. Prince Royal Frederick developed a model of an enlightened prince who considered himself a "first servant" of his State and a reliable agent in the interplay with fellow princes. However, when, in 1740, he succeeded his father as King of Prussia, his actual behavior was heavily influenced by the recipes suggested in Machiavelli's Il Principe. He may have been Machiavelli's most successful student and ardent follower.

There are a number of other heroes in this book: Johann Wolfgang von Goethe, who applied the Vickrey auction scheme when selling his manuscript of "Hermann and Dorothea" to the publisher Hans Friedrich Vieweg in Berlin; Joseph Heller and Peter Handke, who contributed "strategic inspiration" with their novels Catch-22 and The Goalie's Anxiety at the Penalty Kick, respectively; the widely quoted Chinese military strategist Sun Tzu, who suggested that "to a surrounded enemy you must leave a way of escape"; Napoleon, who studied Machiavelli's Il Principe and sent his troops to Moscow, where they died of hunger and cold; Émile Borel, who was possibly the first to define the game of strategy "in which the winnings depend simultaneously on chance and the skill of the player"—and who also proposed a thought experiment that entered popular culture under the name "infinite monkey theorem"; John von Neumann, who proved the Minimax Theorem and thereby initiated the birth of game theory; and, of course, John Nash, whose outstanding contributions to game theory not only earned him a Nobel Prize but also triggered a biography and, most prominently, a movie with the title A Beautiful Mind. There is a long list of Nobel Prize winners who have been celebrated because of their work in game theory, and there is even


a much longer list of scholars who contributed to game theory's development and application—and thus to its popularity.

Specifically, this book is about strategic mistakes and how to avoid them. A first technique is: we have to think strategically; the right approach is to use game theory, the theory of strategic thinking, in order to get a better understanding of the decision situation. However, to apply game theory, you have to learn it. This book will give you a well-structured introduction to game-theoretical thinking and its basic methods and concepts. Every decision maker should study the basic concepts offered in this book. They are highly relevant not only in cases of conflict but also in cases of cooperation—and of coordination problems. A better understanding of strategic problems and knowledge of possible solutions is extremely important to identify social or political conflicts, irrespective of whether the conflicts are between nations or family members, and to avoid them.

While the German version of this book ("Spieltheorie für Manager"1) focused on introducing game theory as a tool kit for solving strategic decision problems, the present version emphasizes the role of game theory as a means to identify the complexity of decision situations and thereby to obtain a better understanding of the world we live in and of the decisions we have to make. Of course, the latter does not exclude learning about tools which help to solve problems that involve strategic thinking. Needless to say: this book is not a literal translation of the German version.

In his The Picture of Dorian Gray, Oscar Wilde (1997 [1890]: 30) characterized Sir Thomas Burdon, a radical member of Parliament, with the notorious observation: "Like all people who try to exhaust a subject, he exhausted his listeners." In this book, we do not want to exhaust the subject, as we do not want to exhaust our readers. We are sure that readers with some knowledge of game theory can easily find important issues that are missing in our text. We strongly suggest that the advanced reader study what is offered here and then verify whether he or she has learned something from it. However, readers who have so far been protected against game theory can sit down, enjoy the text, and get nervous about the thought experiments with which they will be confronted. Unfortunately, you have to manage game theory when you want to apply it.

1 Holler and Klose-Ullmann (2007), Spieltheorie für Manager: Handbuch für Strategen, 2nd edition, Munich: Verlag Franz Vahlen. Material in the present book also derives from Holler et al. (2019), Einführung in die Spieltheorie, 8th edition, Berlin: Springer Gabler.


Of course, we will not conclude our preface without illustrating the concept of strategic thinking and giving an explanation of the title of this book: "Scissors and Rock."

It is not unlikely that you played Rock-Scissors-Paper2 in the schoolyard. It is a two-person game played with hands. Players have to choose whether they want to show a closed fist, representing a "rock"; a fist with two fingers sticking out forming a V, representing "scissors"; or a flat hand, representing "paper." The "rock" spoils the "scissors"; the "scissors" cut the "paper"; the "paper" wraps the "rock." Each alternative has the potential to "beat" another one, but is in danger of being defeated by a third alternative. These relations define losing and winning. If players choose identical alternatives, the particular round ends in a draw.

What alternative will you choose if choices are simultaneous and you want to win? Of course, if you find out that your opponent chooses "paper" more often than the two other alternatives, you will choose "scissors" more often than "paper" or "rock." If you decide to choose "scissors" all the time, the opponent will realize your bias and perhaps switch to "rock" more often than you expected. If you do not want to be exploited by your opponent, try to choose all three alternatives with equal probability. (The strategic decision problem is rather similar to the Penalty-kick game analyzed in Sect. 10.8.) In the equilibrium, both players choose each of the three alternatives with probability one-third.

But there is Clever Mary, who invites Sweet Paul to choose his alternative first; then she will choose hers. This is how "Scissors and Rock" prevailed. In fact, no matter what Sweet Paul chooses, Clever Mary always has a winning alternative—obviously, there is a second-mover advantage if the game is played sequentially. This is the reason why we see this game played simultaneously in schoolyards.
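The logic above can be checked with a few lines of code. The following is a minimal sketch, not from the book: it uses the conventional payoffs +1 (win), 0 (draw), and -1 (loss) as an illustrative assumption and verifies that no pure choice can exploit the uniform one-third mix, while a "paper"-heavy opponent is best answered with "scissors."

```python
# Illustrative payoffs (+1 win, 0 draw, -1 loss); an assumption, not from the book.
ALTS = ["rock", "scissors", "paper"]

# PAYOFF[a][b] = payoff to a player choosing a against an opponent choosing b
PAYOFF = {
    "rock":     {"rock": 0,  "scissors": 1,  "paper": -1},
    "scissors": {"rock": -1, "scissors": 0,  "paper": 1},
    "paper":    {"rock": 1,  "scissors": -1, "paper": 0},
}

def expected_payoff(own, opponent_mix):
    """Expected payoff of a pure choice against a mixed (randomizing) opponent."""
    return sum(p * PAYOFF[own][b] for b, p in opponent_mix.items())

# Against the uniform mix, every alternative earns the same expected payoff (0),
# so no pure choice can exploit it: uniform mixing is the equilibrium strategy.
uniform = {a: 1 / 3 for a in ALTS}
print([expected_payoff(a, uniform) for a in ALTS])  # → [0.0, 0.0, 0.0]

# If the opponent plays "paper" more often, "scissors" becomes the best reply,
# exactly as the text suggests.
paper_heavy = {"rock": 0.25, "scissors": 0.25, "paper": 0.5}
best = max(ALTS, key=lambda a: expected_payoff(a, paper_heavy))
print(best)  # → scissors
```

The same indifference property reappears in the Penalty-kick game of Sect. 10.8: in a mixed equilibrium, the mixing player makes the opponent indifferent between all alternatives.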
Outside of schoolyards, again and again, decision makers try to slip into the role of Clever Mary and invite a Sweet Paul to make the first move. It is not always a matter of politeness if somebody invites you to go first. The example shows the possible power inherent in designing a game.

A second move is not always to the disadvantage of the first mover. If Sweet Paul succeeds in reducing the set of alternatives to two elements, e.g., Scissors and Rock, then there is a first-mover advantage: Paul will choose Rock and win. If, different from Rock-Scissors-Paper, the game does not contain conflicting interests, a sequential structure may help to choose a successful solution to a coordination problem and implement an efficient outcome. Then, in general, the order of moves does not matter and—irrespective of whether a third or fourth move may exist—a cooperative outcome prevails.

Given this, we would like to thank Gregor Berz, Andreas Diekmann, Gudrun Keintzel-Schön, Norbert Leudemann, Hannu Nurmi, Florian Rupp, and Ernst Strouhal for their valuable support—and the inspiration which we received from them. We are grateful to Raymond Russ at the University of Maine, who read the complete text and made very valuable propositions. Of course, many others inspired us while writing this text. Thank you very much!

Hamburg, Germany
Munich, Germany

2 For details, see Sect. 10.9.

Manfred J. Holler
Barbara Klose-Ullmann

References

Holler, M. J., Illing, G., & Napel, S. (2019). Einführung in die Spieltheorie (8th ed.). Berlin: Springer Gabler.
Holler, M. J., & Klose-Ullmann, B. (2007). Spieltheorie für Manager: Handbuch für Strategen (2nd ed.). Munich: Vahlen.
Smith, A. (1981 [1776/77]). An inquiry into the nature and causes of the wealth of nations (R. H. Campbell & A. S. Skinner, Eds.). Indianapolis: Liberty Press.
Wilde, O. (1997 [1890]). Collected works of Oscar Wilde. Ware: Wordsworth Editions.

Contents

1 Playing for Susan 1
1.1 Thinking Strategically 2
1.2 Why not Learn Game Theory? 4
1.3 The Working of the Invisible Hand 6
1.4 The Real World and Its Models 10
1.5 Winner-Takes-It-All and the Chicken Game 12
1.6 The Essence of Game Theory, the Brain, and Empathy 15
1.7 Strategic Thinking that Failed—Perhaps 18
References 20

2 No Mathematics 23
2.1 Historical Note I: The Pioneers 23
2.2 The Concept of Sets 27
2.3 Prices and Quantities 30
2.4 From Set to Mapping and Function 31
2.5 Utilities, Payoff Functions, and Strategy Vectors 33
2.6 Monkeys Write Shakespeare, but Where Is Hamlet? 35
References 38

3 The Prisoners' Dilemma, but Who Are the Players? 39
3.1 From Game Form to Payoff Matrix 39
3.2 Equilibrium in Dominant Strategies 44
3.3 Catch-22 and Other Social Traps 45
3.4 Ways Out of the Dilemma 47
3.5 Who Are the Players? 49
3.6 Then Strike 52
3.7 Tosca's Dominant Strategy 55
References 57

4 The Nash Equilibrium 59
4.1 On the Definition of the Nash Equilibrium 59
4.2 Historical Note II: Nash and the Nash Equilibrium 62
4.3 Nash Equilibria and Chicken Game 63
4.4 Inefficient Equilibria in the QWERTY-DSK Game 67
4.5 Who Are the Players in the QWERTY-DSK Game? 70
4.6 Nash Equilibria in Kamasutra Games 72
References 73

5 Sequence of Moves and the Extensive Form 75
5.1 The Shrinking of the Event Matrix 75
5.2 Sequential Structure and Chicken Game 76
5.3 Extensive Form and Game Tree 78
5.4 Information: Perfect, Imperfect, Complete, and Incomplete 79
5.5 Perfect Recall Missing 82
5.6 The Battle of the Sexes 86
5.7 What Is a Strategy? 89
5.8 Sharing a Cake 91
5.9 Theory of Moves 92
References 95

6 Chaos, Too Many and Too Few 97
6.1 The El Farol Problem or "Too Many People at the Same Spot" 98
6.2 Self-referential Systems 100
6.3 Solutions to the El Farol Problem 101
6.4 Market Congestion Game 103
6.5 Viruses for Macintosh 104
6.6 The Volunteer's Dilemma 106
References 112

7 Which Strategy to Choose? 113
7.1 Nash Equilibrium and Optimal Strategy 114
7.2 Equilibrium Choice and Trembling Hand 116
7.3 Trembling Hand Perfection and Market Congestion 118
7.4 Rationalizable Strategies 121
References 123

8 Step-by-Step: The Subgame-Perfect Equilibrium 125
8.1 Market Entry Game with Monopoly 126
8.2 Backward Induction and Optimal Strategies 127
8.3 The Ultimatum Game 130
8.4 Social Trust and the Stag Hunt Game 133
8.5 How Reciprocity Works 136
References 139

9 Forever and a Day 141
9.1 The Competition Trap Closes 143
9.2 Iterated Prisoners' Dilemma and the "Ravages of Time" 145
9.3 The Competition Trap Breaking Down 148
9.4 Robert Axelrod's "Tournament of Strategies" 152
9.5 "The True Egoist Cooperates."—Yes, but Why? 155
9.6 The Folk Theorem and "What We Have Always Known" 158
References 163

10 Mixed Strategies and Expected Utility 165
10.1 From Lottery to Expected Utility 166
10.2 The Allais Paradox and Kahneman-Tversky 169
10.3 Optimal Inspection in Mixed Strategies 172
10.4 Maximin Solution and the Inspection Game 176
10.5 Chicken Game Equilibria and Maximin Solution 179
10.6 Miller's Crucible and the Stag Hunt Game 180
10.7 Zero-Sum Games and Minimax Theorem 183
10.8 The Goalie's Anxiety at the Penalty Kick 188
10.9 Scissors and Rock 191
References 193

11 More Than Two Players 195
11.1 The Value of Coalitions 196
11.2 The Core 197
11.3 Network Games 199
11.4 Epilogue to the Core and Other Bargaining Solutions 204
11.5 Competition and Cooperation in the Triad 207
References 211

12 Bargaining and Bargaining Games 213
12.1 The Bargaining Problem and the Solution 214
12.2 Rubinstein Game and the Shrinking Pie 219
12.3 Binding Agreements and the Nash Solution 225
12.4 Properties, Extensions, and the Nash Program 231
References 236

13 Goethe's Price Games, Auctions, and Other Surprises 237
13.1 The Story of a Second-Price Auction 238
13.2 The Price-Setting Goethe 242
13.3 Optimal Strategies in Auctions and the Revenue Equivalence Theorem 245
13.4 All-Pay Auction, Attrition, and Pyrrhic Victory 250
13.5 Who Likes to Pay High Prices? 252
References 254

Index 255

1 Playing for Susan

In the town hall of the German city of Augsburg, founded originally as Augusta Vindelicorum in the year 15 BC,1 the ceiling of the central hall is decorated with a painting that shows Sapientia, the goddess of wisdom, seated on a throne in the center. A banner next to her, carried by some vassals, announces "per me reges regnant"—loosely translated, "it is through me that the kings rule." This book will demonstrate that it is not always easy to accomplish what Sapientia suggests. We will learn about the limits of her suggestions, but we will also see that the knowledge of game theory can extend the domain of Ratio, the enlightened companion of Sapientia.

In general, there are several competing, more or less convincing stories that explain an event, an outcome, or a fact—whether they are of today or of 500 years ago. Of course, we want to know why, say, a particular result prevailed, and how. What are the forces that produced this result, and not another? We want to learn from the story, either to satisfy our natural curiosity or to avoid failures in our future actions. In fact, curiosity supports the learning of tools to avoid the traps waiting for us. Curious people can handle surprises much better than those who know all they want to know.

1 Norbert Leudemann informed us that the original name of Augsburg is "Augusta Vindelicum." In 15 BC, it was an army camp, while the first civil settlement dates to AD 40. The official name of the provincial capital was "Municipium Aelium Augustum," abbreviated as "Aelia Augusta."

© Springer Nature Switzerland AG 2020
M. J. Holler and B. Klose-Ullmann, Scissors and Rock, https://doi.org/10.1007/978-3-030-44823-3_1


1.1 Thinking Strategically

After reading Adam Smith's "The History of Astronomy," an article which comes as a surprise itself (Smith 1982 [1758]), we realize how dangerous surprises can be. The message is: we are involved in research and try to understand things in order to minimize surprises. Thinking strategically, putting oneself into the shoes of the other, helps to understand social interaction and the resulting social situations. For many such situations, a reliable theory and the understanding that derives from it reduce the likelihood of surprises.

If decision making is strategic, then, typically, we can only hypothesize about the motivation, information, and reasoning producing the results that we see and want to explain to ourselves and, perhaps, to others. In the standard case, each decision maker can select one action only from a large set of alternatives without knowing what other decision makers will choose now or in the future, or, quite often, what they have chosen in the past. However, these choices specify the outcome that our decision maker wants to determine, as he is likely to suffer or benefit from them. Sometimes we see the choices, and not the alternatives. Often, we only see the outcome—and nothing else—and we have to guess the choices and actions that caused it, as well as those involved in the decision, and their motivations.

Of course, in these cases, we have to seek refuge in very strong hypotheses about human behavior; typically, this entails the rationality hypothesis and some degree of selfishness that characterizes the homo economicus, which have become the trademark of modern microeconomics and of the sciences invaded by it: sociology, philosophy, psychology, etc. In general, rationality and selfishness have to be further qualified to allow for deducing an explanation.
In his "Essays: Moral, Political and Literary," David Hume (1985 [1777]: 42) recommended that "in contriving any system of government…every man ought to be supposed to be a knave and to have no other end, in all his actions, than private interest. By this interest we must govern him, and, by means of it, make him, notwithstanding his insatiable avarice and ambition, co-operate to public good." Are all men and women knaves, or does this quotation merely imply that a successful government should be based on this assumption? Shall we imitate the government?

As for the government, according to Machiavelli (1952 [1532]: 92), how "laudable…for a prince to keep faith and live with integrity, and not with astuteness, everyone knows. Still the experience of our times shows those princes to have done great things who have little regard for good faith, and have been able by astuteness to confuse men's brains, and who have ultimately overcome those who made loyalty their foundation." He observes that for the prince "it is well to seem merciful, faithful, humane, sincere, religious, and also to be so," but the prince "must have the mind so disposed that when it is needful to be otherwise you may be able to change to the opposite qualities," concluding that "it is not, therefore, necessary for a prince to have all the above-named qualities, but it is very necessary to seem to have them" (Machiavelli 1952 [1532]: 93).

The shaping of expectations is essential to Machiavelli, even when it comes to architecture. How does one build a fortress? In his The Art of War, he writes that he "would make the walls strong, and ditches…that everyone should understand that if the walls and the ditch were lost, the entire fortress would be lost" (Machiavelli 1882 [1521], Seventh Book). In the first step, it seems that walls have to be strong and reinforced by ditches in order to motivate the spirit of those defending the fortress behind the walls. The next step, in Machiavelli's reasoning, is that those who attack strong walls have to expect a spirit of defense. But this spirit was sometimes lacking: as Machiavelli observed, people relied on strong walls and reduced their efforts of defense. Therefore, strong walls were not an unambiguous signal and a reliable solution for keeping the enemy away, as Machiavelli himself noted (Machiavelli 1882 [1521], Seventh Book). A game-theoretical analysis could help to clarify this case.

The history of game theory tells us that its success is, to a large extent, the result of its application to war and war-like situations. But if you are a pacifist, do not stop reading here. Strategic thinking is ubiquitous: it is an essential ingredient of "love and fear," but also of less dramatic core functions of life such as consumption. A large share of consumption is directed not to pleasure and satisfaction, but to creating "social distance" by impressing others. In The Theory of the Leisure Class, Thorstein Veblen's world showed us an elite citizenry engaged in conspicuous consumption and honorific expenditures in search of pecuniary decency. A means to achieve this goal was to invest in delicate women, racing horses, and subduing dogs—and in Renaissance Art. The latter was thought to be most prestigious when it was transferred at large sums from an old English castle, owned by a semi-bankrupt lord, with the help of the most prestigious art dealer Joseph Duveen, who himself became a lord toward the end of his life. We told this story in detail in our "Art Goes America" article (Holler and Klose-Ullmann 2010). In order to create and satisfy standards of excellence, to capture a shadow of aristocracy, and to impress their fellow citizens,


the American leisure class tried to imitate their British upper-class models. Veblen (1979 [1899]: 145) observed that the "English seat, and the peculiarly stressing gait which has made an awkward seat necessary, are a survival from the time when the English roads were so bad with mire and mud as to be virtually impassable for a horse traveling at a more comfortable gait; so that a person of decorous tastes in horsemanship to-day rides a punch with a cocked tail, in an uncomfortable posture and a distressing gait."

In art and architecture, American rusticity was not yet popular among the rich when Veblen published his leisure-class book in 1899. The rich may still try to buy a Raffaello out of some lord's castle. But soon they will demonstrate that, without social discounting, they can afford to show nineteenth-century American landscape painting of the Hudson River School, a group of artists around Thomas Cole and his student Frederic Edwin Church, in their prairie house homes. Of course, this counter-snobbery was meant to impress the snobs (Steiner and Weiss 1951), but it made identification rather complex as long as American paintings were at a low price and the butcher could buy them as well. Fortunately, due to the additional demand, prices went up. Consequently, counter-snobbery had to find new ways to manifest itself.

1.2 Why not Learn Game Theory?

As already said in the Introduction, we are sure that readers with some knowledge of game theory can easily find important issues that are missing in our text. However, readers who have so far been fully protected against game theory can sit down, enjoy the text, and get nervous about the thought experiments with which they are confronted. We strongly suggest that readers, after having studied what is offered here, ask themselves whether they have learned something from it—something that gives insights, something they can apply. Unfortunately, you have to learn game theory when you want to apply it.

In general, it does not pay to hire a game theorist to do the job of strategic decision making for you. He or she does not know how much you like to win the battle and how strong your battleships, i.e., your resources, are. It is quite likely that, on the one hand, you cannot express your preferences and, on the other, you want to keep information concerning your resources secret. However, both items, your evaluations and your resources, are extremely important to model a game situation and to find a solution.


More specifically, let’s put ourselves into the shoes of the head of the sales department of a large company who wants to apply game theory to outsmart the competitor. We were told that we have to know game theory if we want to apply it. This statement appears trivial at first sight. However, reflecting on the activities of the sales manager, it becomes evident that he2 uses many skills in which he has not been formally trained—and which thus can hardly be reconstructed by an outsider. He continuously adopts results from the analyses of others, confiding in their reflections without being familiar with their principles. Why is it so? Many management skills are almost impossible to learn. A great number of those skills are based on intuition or they are the result of a socially evolutionary process. For instance, future managers who conform to a certain behavioral codex have better prospects of attaining an executive position within a company than those candidates who show behavior that deviates from this codex and, therefore, are less successful in the given business culture. On the other hand, there are various problems which the manager expects to solve with the help of experts without understanding the methods applied in detail. Think about operation research analysis or the application of econometric models. If the manager applied identical methods and based his work on identical data, he would achieve the same results as the expert, although probably with a greater effort. This is likely to hold, e.g., for the prediction of economic growth or of the development of interest and exchange rates. However, this does not apply to forecasting the effect of a price reduction that a manager envisions for his company, especially if the company operates in a market with one or just a few competitors. 
Under these circumstances, decision making is, in general, much too complex for a purely analytical (numeric) approach—not because of a shortage of data but due to the small number of competitors. In this instance, decision making is of a strategic nature: Competitors are likely to react to price reductions. But how do they react? Game theory could provide an answer if the decision maker interprets the market correctly. Therefore, the manager should not leave the application of game theory to a third party, although there are cases which can be molded into a more general framework. In principle, the manager must

2 Not all managers are men, as exemplified by one of the authors. We apologize for exclusively using “he” in this text.

6     M. J. Holler and B. Klose-Ullmann

evaluate the market conditions himself and do his own analysis. Knowing game theory can be of help—especially when you have to explain your decision to others.

1.3 The Working of the Invisible Hand

In what follows, we illustrate the manager’s need for strategic reasoning using a toy model which is meant to capture the stylized facts of some real-world markets. In the case of only two suppliers on the market, A and B, the effect of a price reduction by firm A is determined by the behavior of rival firm B and the demand of the customers. Thus, its effect depends on how B reacts to A’s decision to reduce the price. The objective of A’s price cut is to increase the demand for its own product. However, if the competitor reduces the price as well, the intended effect of the price cut is likely to be undermined. As a consequence, sales will not increase as much as expected and profits may even decrease. The decision on the price reduction by A will therefore be determined by A’s expectation of B’s reaction. B’s reaction in turn will be affected by B’s expectation of A’s reaction. In order for A to predict B’s reaction, A must take B’s expectation concerning A’s own behavior into account. Thus, B’s expectation will depend on the expectations formed by both companies, A and B. The structure of this dependency is extremely complex. As a result, both companies face a severe problem.

The managers have to develop some idea of the competitor’s prices if they want to maximize their profits or to achieve a related goal (e.g., revenue maximization or increasing market shares). Moreover, in general, there is uncertainty about how the buyers will react to prices per se and also whether there is a potential entrant to the market. Let’s abstract from such intricacies for the moment and use our toy example. For further simplification, we assume that the two suppliers to the market have just two modes of behavior: to choose a high or a low price. In the language of game theory, the modes of behavior are called strategies: They label the set of plans from which the decision makers can choose.
The decision situation is characterized by strategic interaction: the outcome depends on the choices of both agents and, as assumed, the two agents know about it. It therefore constitutes a game situation. As a consequence, the decision makers can be viewed as players.


Matrix 1.1  The competitive trap

                          Player B
                      high           low
  Player A   high   (800, 200)    (250, 300)
             low    (1500, 100)   (500, 150)

The strategic interaction is obvious when we look at its representation by means of Matrix 1.1. If both players, A and B, choose strategy “high,” then the matrix says that A and B will achieve payoffs of 800 and 200, respectively. In principle, the payoff numbers represent utility values, but for the given example profits seem to be good proxies. If player A chooses “high” and player B chooses “low,” then the profits are represented by the payoff pair (250, 300).

Is it better to choose low prices? If both sellers choose “low,” the corresponding payoff pair is (500, 150). Obviously, it is not profitable for A to choose “high” when B chooses “low.” Is it profitable for A to choose “high” when B chooses “high”? No! Irrespective of whether B chooses “high” or “low,” it is always better for A to choose “low.” The strategy “low” is a strictly dominant strategy for A. By similar reasoning, we find that “low” is also a strictly dominant strategy for B: Irrespective of which strategy A chooses, it is always better for B to choose “low” instead of “high.” To answer the question above, it seems that it is better to choose low prices instead of high prices.

But is this answer correct? If both players choose “high,” the payoffs are (800, 200), whereas if they choose “low” the payoffs are (500, 150). Obviously, “low” prices are not profitable as both sellers are better off choosing high prices. But above we have argued that low prices represent strictly dominant strategies for each player; that is, they are preferable irrespective of what the other player chooses. It seems that our players are trapped in a contradiction. Can we help them?

Matrix 1.1 does not illustrate a logical contradiction, but a trap called competition. The fact that the payoffs (800, 200) result if both players choose “high” is only of anecdotal value for an individual player if he is solely interested in maximizing his own payoff.
The latter objective suggests that he should choose his strictly dominant strategy: This is the individually rational mode of behavior for both players in the decision situation described by Matrix 1.1; it results in the payoff pair (500, 150). This behavior is in conflict with collectively rational behavior—also labeled Pareto efficient behavior—which leads to the payoff pair (800, 200). But note that this behavior and its outcome are only efficient with respect to the sellers. From the point of view of the buyers, we should be happy about the low prices that result from the individually rational behavior of the sellers.

The game described by Matrix 1.1 constitutes a Prisoners’ Dilemma—the most popular decision situation in game theory. In Chap. 3, we will learn why this game carries this name. It reflects a conflict between individual rationality and social efficiency (or collective rationality). But as the story goes, we should not feel sorry if the invisible hand of competition works and drives the prices down. The merits of the invisible hand were already praised by Adam Smith in his “Inquiry into the Nature and Causes of the Wealth of Nations,” first published in 1776/77. In Book IV, Chapter II, we read that, in general, every individual

…neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it (Smith 1981 [1776/1777]: 456).
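Returning to the dominance argument for a moment: it can be checked mechanically. The following Python sketch is our own illustration, not part of the book’s apparatus; the names PAYOFFS, STRATEGIES, and strictly_dominant are ours, and the payoff numbers are those of Matrix 1.1.

```python
# Matrix 1.1 ("the competitive trap"): (A's strategy, B's strategy)
# maps to the payoff pair (A's payoff, B's payoff).
PAYOFFS = {
    ("high", "high"): (800, 200),
    ("high", "low"):  (250, 300),
    ("low",  "high"): (1500, 100),
    ("low",  "low"):  (500, 150),
}
STRATEGIES = ("high", "low")

def strictly_dominant(player):
    """Return the strictly dominant strategy of player 'A' or 'B', if any."""
    idx = 0 if player == "A" else 1

    def payoff(own, other):
        pair = (own, other) if player == "A" else (other, own)
        return PAYOFFS[pair][idx]

    for s in STRATEGIES:
        # s is strictly dominant if it beats every alternative t
        # against every strategy of the opponent.
        if all(payoff(s, opp) > payoff(t, opp)
               for t in STRATEGIES if t != s
               for opp in STRATEGIES):
            return s
    return None

print(strictly_dominant("A"), strictly_dominant("B"))  # prints: low low
```

Both players end up with “low,” reproducing the trap described in the text: individually rational play leads to (500, 150) rather than (800, 200).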

Please note the word “frequently.” Adam Smith was quite aware that the invisible hand does not always work, because of cartels, or does not work properly because of institutional shortcomings—see his discussion of the banking sector—and the potential of free-riding in the provision of public goods. The emergence of externalities is another factor that makes the invisible hand tremble.

From a game-theoretical point of view, cartels are perhaps the most interesting handicap that hinders the successful working of the invisible hand. Adam Smith is very explicit that such cartels exist, for instance, on the labor market where wages depend on contracts, the parties’ “…interests are by no means the same. The workmen desire to get as much, the masters to give as little as possible.” Given this rather plain observation, Adam Smith (1981 [1776/1777]: 83) concludes, “The former are disposed to combine in order to raise, the latter in order to lower the wages of labour.” And he goes on to reason:


It is not, however, difficult to foresee which of the two parties must, upon all ordinary occasions, have the advantage in the dispute, and force the other into a compliance with their terms. The masters, being fewer in number, can combine much more easily; and the law, besides, authorises, or at least does not prohibit their combinations, while it prohibits those of the workmen. We have no acts of parliament against combining to lower the price of work; but many against combining to raise it. In all such disputes the masters can hold out much longer. A landlord, a farmer, a master manufacturer, or merchant, though they did not employ a single workman, could generally live a year or two upon the stocks which they have already acquired. Many workmen could not subsist a week, few could subsist a month, and scarce any a year without employment. In the long-run the workman may be as necessary to his master as his master is to him; but the necessity is not so immediate (Smith 1981 [1776/1777]: 83f).

But do these combinations really form? It seems that, in general, they are not made public, and Adam Smith had to convince his readership that such combinations exist.

We rarely hear, it has been said, of the combinations of masters; though frequently of those of workmen. But whoever imagines, upon this account, that masters rarely combine, is as ignorant of the world as of the subject. Masters are always and every where in a sort of tacit, but constant and uniform combination, not to raise the wages of labour above their actual rate. To violate this combination is every where a most unpopular action, and a sort of reproach to a master among his neighbours and equals (Smith 1981 [1776/1777]: 84).

Here, we have some interesting observations which we discuss in detail in Chap. 9: Agreements can be tacit and enforced by social and perhaps economic pressure. The decision situation described by Adam Smith seems to imply a Prisoners’ Dilemma with respect to the cooperation of the masters, as an individual master who deviates from the tacit contract could benefit by paying higher wages and thereby attracting better skilled “workmen”—if there were not the threat “of reproach to a master among his neighbours and equals.” The relationship between the masters and the workmen constitutes a multi-person bargaining game which is, however, reduced to a market situation that has one agent, the “combination of masters,” representing demand and many individual workmen on the supply side as, by law, workmen were not allowed to collude. Economists call such a market situation a demand-side monopoly or, in a more sophisticated manner, a monopsony.


In the modern language of game theory, combinations are called coalitions. They describe situations of conflict and coordination and are especially relevant for games with more than two players. In the course of this book, we will learn how they emerge and how the coalition surplus will be shared among their members.

1.4 The Real World and Its Models

From the interpretation of Matrix 1.1, we learned that a two-person Prisoners’ Dilemma game is characterized by two features: (a) The two players have strictly dominant strategies, i.e., each player has a best strategy irrespective of the strategy choice of the other player. (b) The result, determined by the equilibrium in dominant strategies, is socially inefficient with respect to the players inasmuch as both players are better off if they either find a mode of cooperation or if cooperation is forced upon them.

Do such decision situations exist? Probably not in the abstract form summarized by (a) and (b)! However, starting from the toy model described by Matrix 1.1, we can think of two gas stations that are close to each other on the same side of a highway. Their products are hardly differentiated. As a consequence, buyers will steer their cars to the gas station with the lower price if prices differ. Similarly, many customers do not think that there is a quality difference between Coca-Cola and Pepsi Cola and buy the cheaper one if there is a choice at all. Often, the store has already decided for the customer and offers either Coca-Cola or Pepsi Cola, but not both of them. Of course, the reasoning of the store manager is much more complicated because for him, in general, variables other than prices are relevant as well. Although there might be only negligible differences in the taste of the two drinks, the two suppliers can have very different marketing strategies directed at store managers that lead to a degree of monopolization inasmuch as a particular store only offers the brand that seems favorable to the manager.

To get an understanding of such more complex cases, let us describe the decision problem in a way typical for a game-theoretical analysis. Let’s assume we are one of the players and face a strategic decision situation. In order to manage such a situation, we have two basic concerns: (a) to find an adequate description of the situation and (b) to find a solution to our decision problem. There are three steps to help us in this project.

Step One: Identification of a decision situation as a game-theoretical problem. A decision situation is strategic if (a) the outcome is the result of the decisions of more than one decision maker, (b) each decision maker is aware of this interdependency, (c) each decision maker assumes that the other decision makers are aware of this interdependency, and (d) each decision maker takes (a), (b), and (c) into consideration. Of course, this only makes sense if the number of players is small enough that the interdependency can be considered relevant and handled accordingly. However, what is a small number? In a way, this is defined by our behavior in such a decision situation. If we take (a), (b), (c), and (d) into consideration, then we think that the number of agents is small enough—and we see ourselves in a strategic decision situation.

Step Two: Formulation of the adequate game model. A game consists of the following building blocks: (a) Decision makers, agents, etc., called players. (b) Strategy sets: Each player chooses his or her strategy out of a corresponding set of strategies that are given by the resources and defined by the rules of the game. (c) Payoffs: utilities that the players assign to the possible outcomes determined by the corresponding strategy choices. Note that the outcomes (or events) do not show up in the game, but their evaluations in the form of payoffs do. In Matrix 1.1, we assumed that profits are a good proxy for payoffs and did not distinguish between the two concepts, which is the regular procedure in standard microeconomics with respect to firms. But how shall we proceed if the outcomes are apples, pears, and bananas? We evaluate them in accordance with our preferences and assume that the other players will do the same.
Of course, the problem is that, in general, we can only guess the other players’ preferences. Giving our own preferences in the form of numbers can be difficult enough, as modern utility theory, referring to introspection and experiments, tells us. In Chap. 10, we will discuss some extreme cases of “misrepresentation.” With respect to strategies, we should keep in mind that they represent plans, often in the form of a sequence of moves. Moves can be contingent, in the form of “If player A does x, I will choose y; if A does z, I will choose v.” Think about chess, which is a popular illustration of a game, but note that the strategies of this game are extremely numerous. Nobody can formulate a plan that lists the suggested moves from the beginning to the end of the game. Still, the example may help us to understand that the set of strategies depends on the rules of the game. Outside the game arena, such rules are often given by laws and public regulations, but also by behavioral standards. If we violate them, we may be eliminated from the standard games of the society we live in.

Step Three: Selection of the solution concept. Applying a particular solution concept or, in short, a solution to a game is meant to determine the strategies that the players are expected to choose, and thus the outcome and the corresponding payoffs of the players. Often, the selected outcome is not unique, and for some solution concepts and a particular game, an outcome may not even exist. In the Prisoners’ Dilemma game, the solution concept, i.e., the equilibrium in dominant strategies, is defined by the strategies that the players are expected to choose. Alternatively, we may define the set of Pareto efficient outcomes as a solution, which corresponds to the payoff pairs (800, 200), (250, 300), and (1500, 100) in Matrix 1.1. Note that, given one of these payoff pairs, no player can be made better off without making the other worse off. Given this set, of course, we have to discuss how one of its elements can be achieved, given the game situation and self-interested players.

A favorite answer has recourse to altruism. However, if we introduce Adam Smith’s “fellow-feelings,” proposed on the first page of his Theory of Moral Sentiments (Smith 1982 [1759]), into Matrix 1.1, and these fellow-feelings are strong enough so that at least one of the players no longer has a strictly dominant strategy available, then the game is no longer a Prisoners’ Dilemma. Moreover, fellow-feelings among the managers of gas stations are not very likely. If we see that they choose high prices and thus deviate from the equilibrium in dominant strategies, we have to look for another explanation. Chap. 9 offers such an answer to this problem.
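The Pareto efficient payoff pairs of Matrix 1.1 can likewise be verified mechanically. The following Python sketch is our own illustration (the names PAIRS and pareto_efficient are not the book’s); it implements the definition just given: a pair is Pareto efficient if no other pair makes one player better off without making the other worse off.

```python
# Payoff pairs of Matrix 1.1: (A's payoff, B's payoff) for each cell.
PAIRS = [(800, 200), (250, 300), (1500, 100), (500, 150)]

def pareto_efficient(pairs):
    """Keep the pairs that are not Pareto-dominated by any other pair."""
    def dominated(p):
        # q dominates p if q is at least as good for both players
        # and differs from p (i.e., strictly better for at least one).
        return any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in pairs)
    return [p for p in pairs if not dominated(p)]

print(pareto_efficient(PAIRS))  # prints: [(800, 200), (250, 300), (1500, 100)]
```

Only (500, 150), the outcome of the equilibrium in dominant strategies, drops out: it is dominated by (800, 200), which is exactly the conflict between individual and collective rationality described above.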

1.5 Winner-Takes-It-All and the Chicken Game

Now let us apply our just developed scheme to a real-world case, described in terms of its stylized characteristics. Let us take a historical case: the Browser War between Microsoft, on the one side, and Netscape, on the other. Time Magazine of September 16, 1996 (p. 53ff.), reported that a dramatic battle between Microsoft and Netscape had developed. Each of the two suppliers of browser programs wanted to help us find our way on the Internet. The winner of this battle could expect to earn billions of dollars, while the loser would become marginalized on the market and perhaps would even have to close down the business. This looked like a winner-takes-it-all game.3

An important component of any strategy in this battle was the compatibility of a particular browser program. In the beginning, the older program of the two, Netscape’s Navigator, had the advantage of being widely used, and therefore its network effects were larger than those of the newcomer’s program. Netscape could expect that the users of Navigator would be loyal to their browser program. It seems that there was a strong first-mover advantage embedded in the network effects—and the routine of the users. However, this was challenged by the fact that the Internet Explorer of Microsoft was easier to handle for newcomers and was offered for free. Of course, with a zero price, Microsoft could not expect to make profits out of the sale of browser programs. But it was expected that “buyers” of the Microsoft browser would also buy other programs and services supplied by Microsoft, and this is what happened. As a result of the zero-price policy of Microsoft, Netscape’s Navigator vanished from the market.

To describe the set of intertemporal strategies that were available to the players is rather difficult in this case. Moreover, the decisions were driven by expectations, and we do not yet have the instruments to deal with expected values.4 So far we simply do not have the capacity to represent this situation adequately. However, we can look at a toy model of this case that might nevertheless be useful to illustrate the decision problem and to derive some preliminary conclusions. Let us start with Matrix 1.1. We identify Microsoft and Netscape with players A and B, respectively. The entries in the cells of the matrix represent expected profits for A and B. So, if A chooses “high” and B chooses “low,” the payoffs will be 250 for A and 300 for B.
However, Matrix 1.2 assumes the payoffs (−50, −100) for the case that both players choose low prices, while Matrix 1.1 assumed the payoffs (500, 150). Obviously, the underlying decision situations are different and the payoff pair (−50, −100) suggests that an ongoing price war will be hazardous to both suppliers in the long run.

3 From The Winner Takes It All, lyrics by ABBA: “The winner takes it all/The loser’s standing small/Beside the victory/That’s her destiny.”
4 For expected values, see Chap. 10.


Would you choose a high or a low price if you were player A?

Matrix 1.2  The Chicken Game

                          Player B
                      high           low
  Player A   high   (800, 250)    (250, 300)
             low    (1500, 100)   (−50, −100)

A comparison of the games in Matrices 1.1 and 1.2 shows that a perhaps minor change in the payoffs can have tremendous consequences for the decision situation. In Matrix 1.2, neither of the players has a strictly dominant strategy. Therefore, the solution concept of an equilibrium in strictly dominant strategies does not apply. Whether a strategy is a good choice for A depends on which strategy B chooses, and vice versa. If we assume that players choose their strategies simultaneously, so that A does not know the strategy which B selects and B does not know the strategy which A chooses, then we see that the decision problem of the two players is nontrivial. In the course of this book, we will learn several solution concepts that should help players A and B to make rational choices—and help us understand decisions made in game situations.

Without going into detail, we see that the strategy pairs (high, low) and (low, high) are characterized by some stability, as neither player is motivated to revise his or her strategy, given the strategy of the other player. In Chap. 4, we will learn that this property defines a Nash equilibrium. However, the strategy pairs (high, low) and (low, high) cannot be realized at the same time; that is, they are alternatives that exclude each other. Even though it would be beneficial for player A to see the strategy pair (low, high) put into reality, A cannot force B to choose a high price. Note that in a strategic decision situation a player cannot choose an outcome independent of what the other player does—in fact, a player chooses a strategy and not an outcome.

The game in Matrix 1.2 is known as the Chicken Game. Unlike the Prisoners’ Dilemma game in Matrix 1.1, it represents a rather complex decision situation, as we will see in Chap. 4. There, we will hear of James Dean and learn why this game is called a “chicken” and how it can be applied to analyze the lovers’ battle in the Kamasutra.
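The stability property just described can again be checked by brute force. The following Python sketch is our own illustration (the names PAYOFFS, STRATEGIES, and pure_nash are not the book’s); it enumerates all strategy pairs of Matrix 1.2 and keeps those from which no single player gains by deviating alone.

```python
STRATEGIES = ("high", "low")
# Matrix 1.2: (A's strategy, B's strategy) -> (A's payoff, B's payoff).
PAYOFFS = {
    ("high", "high"): (800, 250),
    ("high", "low"):  (250, 300),
    ("low",  "high"): (1500, 100),
    ("low",  "low"):  (-50, -100),
}

def pure_nash(payoffs):
    """Strategy pairs where neither player can gain by deviating alone."""
    equilibria = []
    for (a, b), (pa, pb) in payoffs.items():
        a_best = all(pa >= payoffs[(x, b)][0] for x in STRATEGIES)
        b_best = all(pb >= payoffs[(a, y)][1] for y in STRATEGIES)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

print(pure_nash(PAYOFFS))  # prints: [('high', 'low'), ('low', 'high')]
```

Exactly the two mutually exclusive pairs (high, low) and (low, high) survive, while the Prisoners’ Dilemma cell (low, low) does not: with the payoffs of Matrix 1.2, each player would rather back down than sustain the price war.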


1.6 The Essence of Game Theory, the Brain, and Empathy

The essence of game theory is to form expectations about the behavior of the other agents, when the situation is strategic, and then to choose one’s best reply in accordance with these expectations. “The better these expectations, the better the choices,” one should hope. However, we will come across situations where sophisticated reasoning about the choices of others does not help to improve one’s fate. Game theory will teach us the characteristics of some of these situations.

Of course, putting oneself into the shoes of others necessitates some knowledge about others. Often specific knowledge is not available, but general knowledge about human behavior can be of help, too. In fact, the study of culture and institutions can help to form expectations even when the personality of the other decision makers is quite alien to us. In any case, our capacity for putting ourselves into the shoes of others is limited. In many cases, it involves a rather complex thought process. In his review of V. S. Ramachandran’s “The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human,” McGinn (2011: 32) discusses the author’s thesis that studying the brain is “a good way to understand the mind.” Here, “mind” describes our capacity for thinking and thus for decision making. The hypothesis implies that, in order to understand thinking, we should look at the corresponding parts of the brain and their specialization.
The “mirror neurons,” discovered in the 1990s, are of special interest to strategic thinking as they serve as the mechanism of imitation in our brain and as a source of empathy.5 As a consequence, when you are watching someone performing an action, these neurons “fire” sympathetically and you perform the same action in your brain, but sometimes also physically, “as when your arm swings slightly when you watch someone hit a ball with a bat.” This reaction “runs by means of mirroring neurons an internal simulation of the other’s intended action.” If so, “we need special inhibitory mechanisms in order to keep our mirror neurons under control – or else we would be in danger of doing everything we see and losing our sense of personal identity. We are, in effect, constantly impersonating others at a subconscious level, as our hyperactive mirror neurons issue their sympathetic reactions” (McGinn 2011: 34). An understanding of strategic decision situations may help us to bring this process to the conscious level and support preserving our personal identity.

According to Ramachandran, autism can be understood as a deficiency in the mirror neuron system. “The autistic child cannot adopt the point of view of another person, and fails properly to grasp the self-other distinction … The brain signature of empathy is…absent in autistics” (McGinn 2011: 34). Can we conclude that autistics are unable to think strategically and that game theory is alien to them?

Empathy corresponds to Adam Smith’s notion of sympathy that is responsible for the impartial-spectator concept, the cornerstone of his The Theory of Moral Sentiments, first published in 1759. The impartial spectator is “the man within the breast” (Smith 1982 [1759]: 132)—an illustration of one’s conscience. Sympathy is the ability to transcend oneself and see a situation from another’s point of view. The impartial spectator is derived from sympathy. But it is not the passion of others that puts our sympathy in motion, but our hypothetical experience of being in the other person’s position.

Sympathy…does not arise so much from the view of the passion, as from that of the situation which excites it. We sometimes feel for another, a passion of which he himself seems to be altogether incapable; because, when we put ourselves in his case, that passion arises in our breast from the imagination, though it does not in his from the reality. We blush for the impudence and rudeness of another, though he himself appears to have no sense of the impropriety of his own behaviour; because we cannot help feeling with what confusion we ourselves should be covered, had we behaved in so absurd a manner (Smith 1982 [1759]: 12).

5 “Mirror neurons are a particular class of visuomotor neurons, originally discovered in area F5 of the monkey premotor cortex, that discharge both when the monkey does a particular action and when it observes another individual (monkey or human) doing similar action.” The authors “present evidence that a mirror-neuron system similar to that of the monkey exists in humans” (Rizzolatti and Craighero 2004: 169).

We are back to the mirror neurons and to V. S. Ramachandran and his “neuroscientist’s quest for what makes us human.” Interestingly, Ramachandran also stresses the social dimension of our mind: “Culture consists of massive collections of complex skills and knowledge which are transferred from person to person through two core mediums, language and imitation. We could be nothing without our savant-like ability to imitate others.”6

6 V. S. Ramachandran, quoted by McGinn (2011: 32).

1  Playing for Susan     17

McGinn (2011: 35) concludes that mirror neurons are an interesting discovery but raises the question of whether they really are “the explanation of empathy and imitation.” And he asks further: “What about the ability to analyze an observed action, not merely repeat it?” This question is of great importance when we relate game-theoretical thinking to the physiology of our mind. However, in what follows, we will not go deeper into the neuroscience of decision making. Important as this dimension seems to be, there are enough problems with the conceptual framework of strategic decision making to fill libraries. We will try to select the most important issues.

By its very nature, and building on strategic thinking, game theory points to the “small world” in which agents are assumed to put “oneself into the shoes of others.” However, game-theoretical results are also interpreted for larger worlds in which potential strategic behavior is limited or even inadequate—or restricted to a partial analysis. Some examples have been provided above. However, one should be very careful when leaving the “small world.”7

A “small world” seems to be a prerequisite for the knowledge and information that game theory assumes for the players. In this text, it is assumed that the players know the game, i.e., the set of players, their strategy sets, and their preferences with respect to the possible outcomes. This implies that player 1 knows the preferences of player 2 and player 2 knows player 1’s preferences. If these rather strong assumptions hold, the game is characterized by complete information; if not, we are talking of a game with incomplete information (see Sect. 5.4 for further details). Even stronger assumptions, often implicit in game-theoretical analysis, are common knowledge of rationality (CKR) paired with consistently aligned beliefs (CAB)—proposing “that everybody’s beliefs are consistent with everybody else’s” (Heap and Varoufakis 1995: 25).
More specifically, every player believes that the other players behave rationally, maximizing their own payoffs, and that every player makes this assumption of every other player and decides accordingly. However, below we will discuss game situations in which the forming of such beliefs is impossible (or implausible), even in cases of complete information and assuming CKR and CAB.

7 See Binmore (2017) for a discussion of the problems that result from applying the “small world decision theory” à la Leonard Savage to a “large world.” Rationality, defined for a small world, might not apply in a larger one.


1.7 Strategic Thinking that Failed—Perhaps

It has been said that the German Chancellor Angela Merkel wanted to see Axel Weber as successor of Jean-Claude Trichet, whose term as President of the European Central Bank, i.e., the ECB, ended on November 1, 2011. We assume that, in order to overcome expected resistance to the nomination of Axel Weber, Mrs. Merkel strongly supported the nomination of the Portuguese Vitor Constancio for the position of Vice President, and he was nominated. This choice reduced to zero the chance that Mario Draghi, the Head of the Italian Central Bank, would become Trichet’s successor. Mario Draghi ranked as the major competitor to Axel Weber for the presidency of the ECB. But Euro-Europe was not ready to accept two “southern European citizens” at the head of its central monetary institution. If Angela Merkel had the plan to gain the ECB presidency for Germany, then her strategy had so far been quite successful.

However, on February 11, 2011, rumors spread that Axel Weber had decided neither to serve as President of the ECB nor to be a candidate for a second term as Head of the German Federal Bank. Rumors said that he might desire to become CEO of the Deutsche Bank, which is by far Germany’s largest private bank. However, concerning the second rumor, there was discussion whether such a “trading places” or “switching chairs” would be possible, given that the Head of the German Federal Bank is one of the supervisors of the private banking system of Germany and thus also of the Deutsche Bank. The Frankfurt Stock Exchange was “irritated” and prices dropped, only to recover the very same day. Newspapers said that Chancellor Angela Merkel was also irritated, not to say that she was annoyed. An alternative interpretation says Axel Weber was not willing to be made responsible for an ECB policy which he had not supported as Head of the German Federal Bank.
It is said that he might have been afraid of being the scapegoat if other Euro countries asked for a bailout, or if the Euro ran into further trouble and Angela Merkel needed a sacrificial pawn in order to please the French President Sarkozy. Of course, these are rumors. Niccolò Machiavelli reports in The Prince how Cesare Borgia made use of his Minister Messer Remirro de Orco to gain power and to please the people:

When he [Cesare Borgia] took the Romagna, it had previously been governed by weak rulers, who had rather despoiled their subjects than governed them, and given them more cause for disunion than for union, so that the province was a prey to robbery, assaults, and every kind of disorder. He, therefore,


judged it necessary to give them a good government in order to make them peaceful and obedient to his rule. For this purpose, he appointed Messer Remirro de Orco, a cruel and able man, to whom he gave the fullest authority. This man, in a short time, was highly successful, whereupon the duke, not deeming such excessive authority expedient, lest it should become hateful, appointed a civil court of justice in the centre of the province under an excellent president, to which each city appointed its own advocate. And as he knew that the hardness of the past had engendered some amount of hatred, in order to purge the minds of the people and to win them over completely, he resolved to show that if any cruelty had taken place it was not by his orders, but through the harsh disposition of his minister. And having found the opportunity he had him cut in half and placed one morning in the public square at Cesena with a piece of wood and blood-stained knife by his side. The ferocity of this spectacle caused the people both satisfaction and amazement (Machiavelli 1952 [1532]: 55).

Note that Cesare Borgia used the law and the camouflage of a legal procedure to sacrifice his loyal minister and to gain the applause of the people. His use of cruelty and deceit was a successful solution to the strategic problem: how to bring order to the Romagna, unite it, make peace, and create fealty, without being made responsible for the necessary cruelties. The episode demonstrates that the power of Cesare Borgia depended on his skill in strategic thinking, his willingness to inflict cruelties on people who trusted and worked for him, and, one has to say, on the naivety of his minister. Messer Remirro de Orco could have concluded that the Duke would exploit his capacities and that, in the very end, these capacities included serving as a sacrifice to the people who had suffered cruelties before. Perhaps Messer Remirro de Orco saw himself and the Duke in a different context, and the game that reflected this context did not suggest the trial and his death as an optimal alternative for the Duke. Obviously, the misfortune of Messer Remirro de Orco was that the Duke's game was based on offering an "officer" to console the people. History demonstrated that this choice of strategy was successful.

It seems reasonable here to think in game-theoretic terms. Strategic thinking is a dominant feature in Machiavelli's writings; he could well be considered a pioneer of modern game theory. It does not come as a surprise that game-theoretical language straightforwardly applies to the core of Machiavelli's analysis. There is no evidence that Donald Trump, Boris Johnson, or Angela Merkel ever read The Prince. Quite likely Putin read it. It is said that Napoleon read it and that a copy of this book was found in Hitler's library.


Still, when Napoleon led the Grande Armée to Moscow, he was convinced that his troops dominated in the technology of weapons and in strategic skills. He was a military genius, but he forgot to consider one possible strategy of his enemy: the strategy of scorched earth. Its application caused enormous damage to the Russian population; however, it destroyed Napoleon's troops. Perhaps reading Machiavelli's The Art of War, in addition to studying The Prince, could have helped—perhaps it may even have kept war in check.

Why did Julius Caesar go to the Senate on March 15, despite all the hints and warnings? He knew that many Romans wanted to see him dead and that some were planning to make this happen. He knew his countrymen, but he did not expect that they would decide to coordinate on a collective murder. Obviously, he saw the problem but misinterpreted the decision situation.

When Michelin, the French producer of tires, opened a production plant in the USA to facilitate its access to the American market, the US tire producer Goodyear entered the French market. On April 8, 2002, the FAZ, a leading German newspaper, commented on this result: "If the managers of Michelin had done some game theory, they would have spared their firm considerable problems."

References

Binmore, K. (2017). On the foundations of decision theory. Homo Oeconomicus, 34, 259–273.
Heap, S. P. H., & Varoufakis, Y. (1995). Game theory: A critical introduction. New York: Routledge.
Holler, M. J., & Klose-Ullmann, B. (2010). Art goes America. Journal of Economic Issues, 44, 89–112.
Hume, D. (1985 [1777]). Essays: Moral, political, and literary. Indianapolis: LibertyClassics.
Machiavelli, N. (1952 [1532]). The prince. New York: Mentor Books.
Machiavelli, N. (1882 [1521]). The art of war. In The historical, political, and diplomatic writings of Niccolò Machiavelli (C. E. Detmold, Trans.) (Vol. 4). Boston: James R. Osgood and Co.
McGinn, C. (2011). Can the brain explain your mind? New York Review of Books (March 24), 58, 32–35.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Smith, A. (1981 [1776/77]). An inquiry into the nature and causes of the wealth of nations (R. H. Campbell & A. S. Skinner, Eds.). Indianapolis: Liberty Press.


Smith, A. (1982 [1759]). The theory of moral sentiments (D. D. Raphael & A. L. Macfie, Eds.). Indianapolis: Liberty Press.
Smith, A. (1982 [1758]). History of astronomy. In W. P. D. Wightman & J. C. Bryce (Eds.), Essays on philosophical subjects (pp. 33–105). Indianapolis: LibertyClassics.
Steiner, R., & Weiss, J. (1951). Veblen revised in the light of countersnobbery. Journal of Aesthetics and Art Criticism, 9, 263–268.
Veblen, T. (1979 [1899]). The theory of the leisure class. New York: Penguin Books.

2 No Mathematics

No mathematics is necessary to understand the following text except the basic operations of adding, subtracting, multiplying, and dividing. However, it could be helpful to get familiar with some basic concepts such as sets, functions, and vectors so that the reader can make use of other game-theoretical literature. Moreover, these concepts help to make the message of the text more concise and easier to structure. Readers with a basic training in mathematics, or with some confidence in the associative capacity of their reasoning, can skip this chapter in the first round of their reading and, hopefully, never have to come back to it. But you should read the note in the next section if you are interested in the history of game theory. In fact, we should be interested in it because we can learn a lot from it and it is exciting—not only because a large part of it is a "child of war."

2.1 Historical Note I: The Pioneers

In 1926, John von Neumann, then called Johann, presented the Minimax Theorem in Hilbert's seminar at the University of Göttingen. In 1928, a written version of this presentation appeared in the form of a scientific article titled "Zur Theorie der Gesellschaftsspiele" (published as "On the Theory of Games of Strategy" in 1959). In this article, von Neumann gave a definition of a game of strategy and of the various components that are needed to define a game. He also delivered solution concepts for such games. Most prominently, he proved the Minimax Theorem for two-person zero-sum games.

© Springer Nature Switzerland AG 2020
M. J. Holler and B. Klose-Ullmann, Scissors and Rock, https://doi.org/10.1007/978-3-030-44823-3_2



He also drafted a solution for games with more than two players in which coalition formation matters. In fact, the solution concept he proposed in this paper is very similar to the Core, a concept that is rather popular today. Later, however, in "The Theory of Games and Economic Behavior" (von Neumann and Morgenstern 1944), he subscribed to a different solution concept which is less popular today.

John von Neumann was born Neumann János Lajos in 1903 in Budapest and died in 1957 in Washington, DC. At the age of six, he could tell jokes in Classical Greek, memorize telephone directories, and divide two 8-digit numbers in his head. His first mathematics paper, written jointly with Michael Fekete,1 then an assistant at the University of Budapest who had been tutoring him, was published in 1922. At the age of 25, he had already published ten major papers in mathematics. In 1926, he received his Ph.D. in mathematics from the University of Budapest and a diploma in chemical engineering from the ETH Zurich. He taught as a Privatdozent at the University of Berlin2 from 1926 to 1929 and at the University of Hamburg from 1929 to 1930. In 1930, he became a visiting lecturer at Princeton University, where he was appointed professor in 1931. In 1933, he became professor of mathematics at the newly founded Institute for Advanced Study in Princeton—a position he kept for the remainder of his life.

Morgenstern (1976) reports that, on February 1, 1939, when he gave an after-luncheon talk on business cycles at the Nassau Club, he had a first chance to talk to John von Neumann about games. Over the years, their discussion and friendship progressed. In 1944, their "Theory of Games and Economic Behavior" (TGEB) was published, a volume of more than 600 pages that became the cornerstone and reference point of game-theoretical research from 1945 to 1955.
In his Von Neumann, Morgenstern, and the Creation of Game Theory: Chess to Social Sciences, 1900–1960, Leonard (2010) describes not only the historical background of the TGEB, but also its impact on the strategic thinking of the postwar period during the Cold War. This was highlighted by the research that was

1Fekete succeeded the famous mathematicians Edmund Landau and Abraham Fraenkel in heading the Institute of Mathematics at the Hebrew University of Jerusalem.

2The University of Berlin was founded in 1810. From 1828 to 1946, it was named Friedrich-Wilhelms-Universität in honor of its royal founder. In 1949, situated in East Berlin and thus in the former German Democratic Republic, it was renamed Humboldt-Universität zu Berlin, a name maintained after the reunification of Germany in 1990. Humboldt is a good name when it comes to science.


encapsulated in the activities of the RAND Corporation at Santa Monica, which was and, most likely, still is engaged in research under contract with the United States Air Force. It is said that the founding of this institute in March 1946 was partly initiated by von Neumann. Even if this might not be true, von Neumann was an extremely important and frequent visitor at RAND. Amadae (2003: 10) claims that it "… is no exaggeration to say that virtually all the roads to rational choice theory lead from RAND. This observation draws attention to its role as a quintessential American Cold War institution, and in turn to the Cold War motives that underlay much of the impetus propagating rational choice theory."

During the years of collaboration with Morgenstern and after World War II, von Neumann served as a consultant to the armed forces. In 1940, he became a member of the Scientific Advisory Committee at the Ballistic Research Laboratories and in 1941 a member of the Navy Bureau of Ordnance. He was a consultant to the Los Alamos Scientific Laboratory from 1943 to 1955. In this function, he was a leading contributor to the development of the nuclear bomb and, along with Edward Teller and Stanisław Ulam, of the hydrogen bomb. In 1956, he received the Presidential Medal of Freedom, America's highest civilian award, recognizing exceptional meritorious service. Von Neumann died of bone cancer on February 8, 1957, "after much suffering" (Morgenstern 1976: 814).

Obviously, von Neumann's scientific interest strongly focused on mathematics and its applications. He made substantial contributions to quantum physics, functional analysis, set theory, topology, numerical analysis, cellular automata, and computer science, and his work in economics looks just like another field of application of mathematics.
However, quite surprisingly, Strathern (2001) devoted a full chapter of his "Brief History of Economic Genius" to von Neumann, of nearly the same length as the presentations of Adam Smith, Marx, and Keynes, on the basis of his game-theoretical work, without mentioning his general equilibrium paper (von Neumann 1945 [1937]). The latter paper is, however, unknown to most economists and generally seen as an exercise in mathematics. What should you expect from a paper with the title "Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes"—and with a single reference: a book titled "Topologie"?

Von Neumann's work in game theory consists of the 1928 article and the joint publication of the "Theory of Games and Economic Behavior." This is an outstanding body of work and quintessential for the further development of game theory. Yet, the claim that von Neumann is the "founding father of


game theory" has been challenged by friends and disciples of Émile Borel, a French mathematician and politician. Félix Édouard Justin Émile Borel (1871–1956) was born in Saint-Affrique, Département Aveyron, France. He was not only a first-class mathematician; from 1924 to 1936 he was also a member of the French National Assembly, and in 1925 he became Minister of Marine. During World War II, he joined the French Resistance. His political vision, however, was a united Europe. In mathematics, along with René-Louis Baire and his student Henri Lebesgue, he was among the pioneers of measure theory and its application to probability theory. One of his books on probability introduced the amusing thought experiment that entered popular culture under the name Infinite Monkey Theorem. The last section of this chapter contains an illustration.

Between 1921 and 1927, Borel published a series of papers on game theory and became "perhaps" the first to define the game of strategy "in which the winnings depend simultaneously on chance and the skill of the player" (Borel quoted by Rives 1975: 559). The focus on probability theory also carried over into his pioneering work in game theory. Combining the idea of strategic behavior with probability theory led Borel to suggest the concept of mixed strategies: the potential that players choose their strategies with probabilities smaller than one. In fact, he came very close to the understanding of the mixed-strategy equilibrium and its somewhat paradoxical consequences.3 We will come back to this concept in Chap. 9. To our knowledge, Borel did not, however, prove that, given this potential, an equilibrium in the form of a pair of strategies, possibly mixed, always exists if the interests of the players are strictly antagonistic, as in the zero-sum game. As already pointed out, von Neumann gave a proof of this result, known as the Minimax Theorem, in his 1928 article.
This may move one to ask how one pioneer of game theory coped with the other pioneer's work. It seems that they tried to avoid reading each other's material. We do not know whether Émile Borel read German, but von Neumann had the reputation that he could converse "in all major European languages," including Classical Greek and Latin. In May 1928, von Neumann sent a note to Borel, apparently in French, announcing that he had "proved a theorem two years previously … concerning the existence of a 'best' way to play in the general two-person, zero-sum case" (Leonard 2010: 62). Borel is hardly known for his contribution to game theory, but the concept of a Borel set is named in his honor. Moreover, we find Borel's paradox,

3See Leonard (2010: 60).


the Heine–Borel theorem, and the Borel–Cantelli lemma in the literature as well as concepts like Borel algebra, Borel measure, and Borel space. In addition, one of the many craters on the moon carries his name. There is, however, a von Neumann crater as well on the moon. But let us come back to the idea of sets. It is helpful to understand this concept before going into the basics of game theory.

2.2 The Concept of Sets

When set theory entered the curriculum of elementary schools, it was met with disapproval and rejection by parents. Many parents found it difficult to accept that 1 + 1 may be equal to 1. We do not want to comment on why set theory should be taught at elementary schools, but simply summarize the basics referred to in this volume.

A preliminary, but very useful, definition is to think of a set as a collection of objects that are different from each other. The apples and pears in Paul Cézanne's fruit basket (Fig. 2.1) form a set if we consider each apple different from every other apple and of course different from a pear, and each pear different from every other pear and different from an apple. The members of the US Senate and the directors of Microsoft are sets. The natural numbers 1, 2, 3, 4, … can be described by a set, although this is an infinite set, as there is always a number greater than any number we pick from this set. In fact, there are two conventions for the set of natural numbers: either the set of positive integers {1, 2, 3, …} or the set of non-negative integers {0, 1, 2, …} which includes zero. (What is natural about zero?) Sometimes the natural numbers together with zero are referred to as whole numbers. However, this term is also used for the set of positive and negative integers, including zero.

In general, it does not make much sense to "add up" apples and pears if the difference between them is considered essential—for example, if John is allergic to apples. But to sum them up as fruits could make sense if Jean is on a fruit diet and apples and pears are considered equivalent from the diet's point of view. Whether it makes sense to distinguish objects that may form a set depends on whether it makes practical sense to do so. If we distinguish the objects but relate them to form a set—for instance, put them into a basket—then these objects are called elements of the set.
Thus, the number 3 is an element of the set of natural numbers. The players of the game that is represented by Matrix 1.1 form a set, the set of players, and player A is an element of this set.


Fig. 2.1  Basket with apples and pears à la Paul Cézanne (The original material to this piece was produced by Raphael Braham, Hamburg. We would like to thank him and his parents for letting us have this material)

There is a convention to use capital letters for labeling sets: A, B, C, …, X, Y, Z. If we label the set of natural numbers by A, then 3 is an element of A. Often, we use lower-case letters for elements and curly brackets to lump them into a set: For example, Z = {a, b, c, d} is the set of the elements a, b, c, and d, and T = {1, 2, 3, 4, 5, 6, 7, 8, 9} is the set of single-digit positive numbers. However, as the sequence of elements within a set does not matter, {1, 2, 3, 4, 5, 6, 7, 8, 9} = {9, 2, 1, 3, 4, 5, 6, 7, 8}. That is:

1. Two sets A and B are identical if their elements are identical.
2. A set does not include two or more identical elements.

Set X is a subset of set Y if all elements of X are elements of Y. We write X ⊆ Y. Set X is a proper (or strict) subset of Y if it is a subset of Y and Y contains elements which are not in X. We write X ⊂ Y. The apples in Cézanne's basket, depicted in Fig. 2.1, are a proper subset of the fruits in the basket, i.e., of the set of apples and pears.


Fig. 2.2  Union (of sets) A ∪ B

Fig. 2.3  Intersection (of sets) A ∩ B

If we take the squares of the natural numbers, we can think of them as forming an infinite set. However, this set is a proper subset of the set of natural numbers, despite its infinity. If a set X contains no elements, then it is the empty set and we write X = ∅. If a set X contains only one element, say i, then it is a singleton, and we have X = {i}.

If two sets A and B form a union, then the union set contains all elements of A and B. For the union, we write A ∪ B. However, identical elements in A and B will be "listed" only once. The union of A = {a, b, c, d} and B = {c, d, e, f, g}, thus, is A ∪ B = {a, b, c, d, e, f, g} (see Fig. 2.2). It should be obvious that X ∪ Y = Y if X ⊂ Y. It should also be obvious that A ∪ A = A. Due to a lack of mathematical symbols on the typewriter, or "pure laziness," this expression is sometimes written as A + A = A. Is this how we get 1 + 1 = 1?

Figure 2.3 illustrates the intersection of the two sets A and B, i.e., A ∩ B, where A = {a, b, c, d} and B = {c, d, e, f, g}. We get A ∩ B = {c, d} for the intersection of A and B. Of course, X ⊂ Y implies the intersection X ∩ Y = X. Obviously, A ∩ A = A.


There is another important concept used in game theory, especially when we look at coalition formation: Set Ac is the complement of A if A ∩ Ac = ∅ and A ∪ Ac = Ω. Here, Ω is the set of all elements under consideration. In Fig. 2.1, Ω is represented by all apples and pears in the basket. The set of apples is the complement of the set of pears, and the set of pears is the complement of the set of all apples.
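The set operations discussed above can be tried out directly. The following sketch uses Python's built-in set type; the single-letter elements follow the chapter's examples, while the fruit labels are our own illustrative stand-ins for Cézanne's basket.

```python
# Union, intersection, subset, and complement with Python's built-in sets.
A = {"a", "b", "c", "d"}
B = {"c", "d", "e", "f", "g"}

union = A | B          # A ∪ B: all elements of A and B, duplicates listed once
intersection = A & B   # A ∩ B: elements contained in both sets

print(sorted(union))         # ['a', 'b', 'c', 'd', 'e', 'f', 'g']
print(sorted(intersection))  # ['c', 'd']

# X ⊂ B implies X ∪ B = B and X ∩ B = X
X = {"c", "d"}
print(X < B, X | B == B, X & B == X)  # True True True

# Complement relative to a universal set Ω (here: the fruits in the basket)
omega = {"apple1", "apple2", "pear1", "pear2"}
apples = {"apple1", "apple2"}
pears = omega - apples       # the complement of the set of apples
print(pears == {"pear1", "pear2"})  # True
```

Note that building a set from a list with duplicates "lists" each element only once, which is the set-theoretic sense in which 1 + 1 = 1.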

2.3 Prices and Quantities

The set of real numbers consists of the whole numbers, the rational numbers, and irrational numbers like π and √2. Rational numbers can be written as a ratio of two whole numbers (of course, we are not allowed to divide by zero). In economics, we refer to the set of all non-negative real numbers to describe the set of (possible) prices. Figure 2.4 illustrates this set. Therefore, the set of prices P is the set of all real numbers p, given p ≥ 0, such that any p is a particular price, i.e., an element of P.

Fig. 2.4  The set of prices P

If we assume that goods (commodities or services) can be split up into units so that any (non-negative) quantity is feasible and can be described by a real number, then we get the set of quantities Q. Figure 2.5 illustrates Q. Therefore, if q is an element of Q, then q is a quantity. Of course, we have q ≥ 0.

Fig. 2.5  The set of quantities Q

If we combine the sets of Figs. 2.4 and 2.5, we get a price-quantity diagram (Fig. 2.6). The diagram illustrates the set (P, Q) defined by all pairs (p, q) that result from a combination of P and Q. The pairs (p1, q1), (p2, q2), (p3, q3), (p4, q4), and (p5, q5) are elements of the set (P, Q). The price-quantity pairs have been chosen so that they are consistent with a negatively sloped demand curve. Economists like this type of curve, but it has very little relevance for game theory. Here, it serves to introduce the idea of a function: more precisely, the graph of a function.

Fig. 2.6  A price-quantity diagram

2.4 From Set to Mapping and Function

The representation of prices and quantities by real numbers implies that both are measurable. Consequently, the set (P, Q) is a subset of (two-dimensional) space. If, instead of prices and quantities, the two dimensions represent the profits of agents A and B, respectively, then we get a profit space as illustrated in Fig. 2.7. GA and GB represent the profits of A and B, respectively. Line gg represents the maximal profits of B, given alternative levels of GA. Of course, gg also shows the maximal levels of profits of A, given alternative levels of GB. The line GG captures the maximum of total profit G* such that GA + GB = G* holds on GG. Figure 2.7 demonstrates that G* can be achieved only if G* is shared according to GA* and GB*. Other pairs of (GA, GB) represent sums G° = GA + GB smaller than G*.

The line gg implies that the set of possible profits given in Fig. 2.7 is convex. A convex set Z is characterized by the fact that all elements of the line segment connecting any two "points" that represent elements of Z are themselves elements of Z. Figure 2.8a represents a convex set Y. However, Y is only weakly convex, as there are connecting lines that are subsets of the border of Y. The set X of Fig. 2.8b is non-convex: For instance, the line connecting elements a and b has elements which are not contained in X.

The profit space in Fig. 2.7 assumes that each element ("point") in this space represents a pair of profits (GA, GB). Correspondingly, the pair (800, 200) assigns profits of 800 and 200 to agents A and B, respectively (see Matrix 1.1). We should expect that A would not be very happy if the

sequence of the pair (800, 200) was neglected.

Fig. 2.7  A convex profit space

Fig. 2.8  (a) A weakly convex set Y; (b) a non-convex set X

An element in a space with a well-defined sequence of the components defines a vector. A space presupposes that the components are measurable and thus can be described by real numbers. Examples are the price-quantity pairs in Fig. 2.6 and the pairs of profits in Fig. 2.7. If X is an n-dimensional vector space, then the vector x = (x1, …, xn) is an element of this space.

Figure 2.6 expresses a relationship between the set of prices P and the set of quantities Q. Such a relationship is called a function or mapping. We write:

f: P → Q     (F)

Here, f is the mapping; P is the domain; and Q is the range, i.e., the set of images. P and Q are the sets representing the independent variable p and

the dependent variable q, respectively. A mapping f is a function if each element of P corresponds to exactly one element in Q. Then, we write q = f(p). Note that there can be several p-values that have the identical q-value, and f(p) is still a function. Thus, a function is a set of ordered pairs such that no two pairs have the same first component but different second components. Corresponding to the price-quantity diagram in Fig. 2.6, (p, q) are the pairs which can be related to the function q = f(p), i.e., the demand function. A problem with the graphical representation of this function is due to Alfred Marshall (1842–1924). He introduced the convention to represent the independent variable p on the vertical axis and the dependent variable q on the horizontal axis—which does not concur with the standard graphical representation of a function. If we accept Marshall's convention and assume non-negative prices and quantities, then Fig. 2.9 describes a possible demand function q = f(p).

Fig. 2.9  Demand function q = f(p)
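As a small illustration of the function concept, the following sketch defines a negatively sloped demand function q = f(p). The linear form and its parameters (intercept 15, slope 1) are invented for this example and are not taken from the text; what matters is that each price maps to exactly one quantity.

```python
# Illustrative demand function q = f(p); the linear form and the
# parameters are assumptions made for this sketch.
def demand(p: float) -> float:
    """Map a non-negative price p to exactly one quantity q >= 0."""
    if p < 0:
        raise ValueError("prices are non-negative")
    return max(0.0, 15.0 - p)  # negatively sloped, truncated at q = 0

# Several p-values may share the same q-value (every p >= 15 gives q = 0),
# and demand is still a function: one image per price.
print(demand(5.0))   # 10.0
print(demand(15.0))  # 0.0
print(demand(20.0))  # 0.0
```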

2.5 Utilities, Payoff Functions, and Strategy Vectors

Utilities are of utmost importance in games as they are in game theory. "Players try to win." The utility function u relates an event or result e to a real number u(e) which expresses the evaluation of e by a particular agent, i.e., player. If E represents the set of all possible events ("outcomes") and R the set of real numbers, then u is a function

u: E → R.     (U)


Of course, two elements of E can correspond to the same element in R; that is, they can be evaluated identically, as demonstrated in Fig. 2.10. This is the case if the agent is indifferent between a and b.

Fig. 2.10  Mapping utilities

More generally, the utility function expresses the evaluation of a particular decision maker i. In order to capture this individual evaluation, we give the u function the index i. Then ui implies: If i prefers e* to e°, and e* and e° are elements of E, then the utility function says ui(e*) > ui(e°). The preference relation "e* better than e°" is thus transformed into the space of real numbers. So far, the utility function is only a shorthand representation of the preference relations. Decision makers have preferences. They prefer e* to e°, or they are indifferent. In general, only the analyst assigns numbers to represent preferences. However, when applying game theory, the decision maker assigns numbers to represent his or her preferences as well as the preferences of the other players. The mapping

"ui(e*) > ui(e°)" if "e* better than e°"     (OU)

constrains the numbers expressing ui(e*) > ui(e°) to relative measures only. We are free to specify this relationship by any pair of numbers as long as ui(e*) > ui(e°) is satisfied. Thus, ui(e*) = 10 and ui(e°) = 5 do not imply that e* is twice as good as e°; they only say that e* is better than e°. The numbers only reflect an ordinal ranking. Analogously, the values ui(e) = 4 and uj(e) = 2 do not imply that the preference of i concerning outcome e is twice as strong as the preference of j for the same event. We cannot even state ui(e) > uj(e), comparing a utility value of i to a utility value of j, as interpersonal comparison of utility is excluded so far. In Chap. 10, cardinality of utility will be introduced in the form of a von Neumann-Morgenstern utility function. Such a utility function is called a payoff function. Note that payoffs are utilities, not money.

If money values are the events evaluated in the game, then most game theorists speak of monetary payoffs. (However, not all game theorists submit to this standard.) But this does not exclude that we may take money values as proxies of utility


values. In some cases, money seems to be an adequate proxy—as in the case of business profits.

The pioneering work on game theory of von Neumann (1959 [1928]) and von Neumann and Morgenstern (1944) focuses on zero-sum games, which means that, for each particular event e°, the utilities of players 1 and 2 sum to zero or, equivalently, u1(e°) = −u2(e°) for all e° which are elements of E. This assumes cardinality, but also interpersonal comparison of utility. These are rather strong assumptions; modern game theory, starting with Nash (1950), tries to avoid them. (For a historical interpretation, see Holler (2016).)

The mapping represented by (U) defines the utility function with respect to events or outcomes. But how do we get these events? Where do they come from? An event is the result of the specification and selection of the strategies of the players. A strategy choice is an element (s1, …, sn−1, sn) of the strategy space S. S is the Cartesian product S = S1 × S2 × … × Sn−1 × Sn, where Si is the strategy set of player i. Thus, a strategy choice determines for each player i with strategy set Si a particular strategy si which is an element of Si. The event function (or outcome function) e is a mapping

e: S → E.     (E)

Now we can combine (U) and (E) so that we get the utility function in the form of a direct relationship between the strategy space S and the set of real numbers R:

u: S → R.     (U*)

As Si contains mixed strategies, i.e., probability distributions over pure strategies, u has to deal with expected values. This necessitates a cardinal interpretation of u (which will be introduced in Chap. 10).
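The chain of mappings just described can be sketched in code. The two-player strategy sets, the identity event function, and the numeric utilities below are invented for illustration; only the structure, the Cartesian product S = S1 × S2 and the composition ui(e(s)), follows the text.

```python
# Sketch of the strategy space S = S1 × S2, an event function e: S → E,
# and utility functions u_i: E → R. All concrete values are illustrative.
from itertools import product

S1 = ["up", "down"]        # strategy set of player 1 (assumed)
S2 = ["left", "right"]     # strategy set of player 2 (assumed)
S = list(product(S1, S2))  # strategy space: all strategy vectors (s1, s2)

def event(s):
    """Event function e: S → E; here every strategy vector is its own outcome."""
    return s

# Ordinal utilities: only the ranking of the numbers matters, not their size.
u1 = {("up", "left"): 3, ("up", "right"): 0,
      ("down", "left"): 4, ("down", "right"): 1}
u2 = {("up", "left"): 2, ("up", "right"): 4,
      ("down", "left"): 1, ("down", "right"): 3}

# The composed mapping u: S → R of (U*): each strategy vector gets a payoff.
for s in S:
    print(s, "->", u1[event(s)], u2[event(s)])
```

With mixed strategies, Si would instead contain probability distributions over these pure strategies, and u would return expected values, which is why cardinality is needed.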

2.6 Monkeys Write Shakespeare, but Where Is Hamlet?4

Probabilities can be a challenge, and not only in game theory. Here is an illustration based on Émile Borel's Infinite Monkey Theorem. It also serves to demonstrate the power and weakness of infinity when it comes to real-world decision making. Obviously, there are capacity constraints. Let a monkey

4We wrote this section two years ago and cannot recover the references.

36     M. J. Holler and B. Klose-Ullmann

use a typewriter and let us assume that he randomly presses the keys for an infinite time. Then, there is a probability of 1 that he will write a piece that is identical with Shakespeare’s Hamlet. As he has typed many other pieces, many ill-defined and not identifiable, the problem will be how to find Hamlet in this infinitely large garbage pile of letters and words. The problem will become lucid if we study the structure of the argument of the proof. Probability theory tells us that we have to multiply the probabilities of two events if the two events are independent and we want to know that both events occur. If the probability of rain is 0.5 and the probability that you do not take your umbrella with you is 0.2, then the probability that you get wet is 0.5 times 0.2 which is 0.1. However, one might argue that the two events are not independent of each other. There are people who claim that it never rains when they take their umbrella with them. In general, these people bring forward this claim when it is raining and they left their umbrella at home. So, let us believe in independency, and let us assume that a monkey strikes each key of a typewriter with independent probabilities. If there are fifty keys, then there is a probability of 1/50 that a monkey will strike the key for “H” in his first attempt and, because of independency, again there is a probability of 1/50 that he will strike the key for “A” in his second attempt. Thus, there is a probability of 1/50 times 1/50 equal to 1/2500 to find “HA” on the paper. This probability can be written as 1/502. In other words, the probability that we do not find “HA” written is 1 − 1/502 which is quite large but its difference from 1 is still within the domain of our visual capacity and daily experience. But what is the probability that we see “HAMLET” typed by Mr. Monkey? It is 1/506 and thus the probability that we do not see “HAMLET” typed is 1 − 1/506. For all practical purposes, this number is close to 1. 
The conclusion is that we will not see "HAMLET" unless the monkey tries again and again. Let him type another six strokes. Again, the probability that we find "HAMLET" on the paper will be 1/50⁶. Therefore, the probability that we do not see "HAMLET" is again 1 − 1/50⁶. We will not see "HAMLET" with the probability p = (1 − 1/50⁶)(1 − 1/50⁶) = (1 − 1/50⁶)² in two trials of six letters, but obviously the probability of failure has decreased, as (1 − 1/50⁶)² is smaller than 1 − 1/50⁶. (Take any positive number x smaller than 1 and multiply it by itself; you will see that x² is smaller than x.) Now let us assume that the monkey takes, instead of 2, n sequences of six letters, one after another; then the probability of not typing "HAMLET" will be p = (1 − 1/50⁶)ⁿ. The bad message is that if n equals a million, the probability p is still above 0.999. However, for an n that equals 10 billion, p will be about 0.53, and for n = 100 billion, the probability will be about p = 0.002. Obviously, for a very large n, the probability p approaches 0, and the monkey will have typed "HAMLET" with a probability close to 1.

Of course, the monkey will accomplish this job much faster if we allow for having, for example, "HAML" in sequence n − 1 and "ET" in sequence n. Still, it is not very likely that any monkey will live long enough to write "HAMLET" under this condition, not to speak of the time a monkey needs to write the full play: A monkey is a poor substitute for William Shakespeare. But if the number of monkeys approaches infinity, then they can even produce the text of the Hamlet play plus the text of Homer's Iliad. You see, if each of 100 billion monkeys punches a sequence of six letters, then there is a probability of 1 − 0.002 = 0.998 that at least one of them has typed "HAMLET." Of course, one might wonder where we can recruit 100 billion monkeys from and how we can feed them. Even to arrange for 100 billion typewriters might be a problem, as the experiment does not promise to be profitable. The next problem is how to find "HAMLET" if each of the 100 billion monkeys has produced a sequence of six letters and there is a probability of 0.998 that one of the monkeys typed "HAMLET." There will be mountains of paper that need to be checked for "HAMLET."

However, perhaps this issue is not relevant at all. In 2003, it was reported that, in an experiment arranged by the University of Plymouth in Devon, England, a keyboard was put into a cage with six macaques at Paignton Zoo for one month. Some members of the macaque family live on the Rock of Gibraltar and enjoy the Mediterranean view, but the six specimens in the cage were expected to write.
During this time, the monkeys produced five pages of writing that mainly consisted of the letter "S"—which is unfortunate if one expected the monkeys to produce "HAMLET." Moreover, the monkeys attacked the keyboard with stones and urinated on it. In other words, the probability model that we assumed above obviously did not apply. On the other hand, if we took human beings instead of monkeys, then we could hope that most of them would be able, given the proper incentives, to write the word "HAMLET" in six strokes only; the independence of probabilities no longer holds. But there are not many people who can write a Shakespeare play without copying it, irrespective of whether it is Hamlet, Romeo and Juliet, or As You Like It. Instead of challenging this hypothesis, we now start our adventure into game theory.
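The failure probabilities discussed above can be checked numerically with a short script of our own; the printed decimals are rounded.

```python
# Numerical check of the monkey-typing failure probabilities.
# q = 1/50**6 is the chance that one six-stroke sequence spells "HAMLET"
# on a fifty-key typewriter with independent, uniform keystrokes.
q = 1 / 50**6           # = 6.4e-11

def p_fail(n):
    """Probability that none of n independent six-stroke tries is 'HAMLET'."""
    return (1 - q) ** n

print(round(p_fail(10**6), 5))    # a million tries barely help
print(round(p_fail(10**10), 2))   # ten billion tries: roughly a coin flip
print(round(p_fail(10**11), 4))   # a hundred billion tries: near-certain success
```

Since q is tiny, (1 − q)ⁿ ≈ e^(−nq), so n must be of the order 1/q ≈ 1.6 × 10¹⁰ before success becomes likely.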


References

Amadae, S. M. (2003). Rationalizing capitalist democracy: The Cold War origins of rational choice liberalism. Chicago: The University of Chicago Press.

Holler, M. J. (2016). John von Neumann (1903–1957). In G. Faccarello & H. D. Kurz (Eds.), Handbook on the history of economic analysis, Volume I: Great economists since Petty and Boisguilbert (pp. 581–586). Cheltenham and Northampton: Edward Elgar.

Leonard, R. (2010). Von Neumann, Morgenstern, and the creation of game theory: From chess to social science, 1900–1960. Cambridge: Cambridge University Press.

Morgenstern, O. (1976). The collaboration between Oskar Morgenstern and John von Neumann on the theory of games. Journal of Economic Literature, 14, 805–816.

Nash, J. F. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36, 48–49.

Rives, N. W., Jr. (1975). On the history of the mathematical theory of games. History of Political Economy, 7, 549–565.

Strathern, P. (2001). Dr. Strangelove's game: A brief history of economic genius. London: Hamish Hamilton.

Von Neumann, J. (1959 [1928]). On the theory of games of strategy. In A. W. Tucker & R. D. Luce (Eds.), Contributions to the theory of games, Volume 4. Translation of "Zur Theorie der Gesellschaftsspiele," Mathematische Annalen (Vol. 100, pp. 295–320).

Von Neumann, J. (1945 [1937]). A model of general economic equilibrium. Review of Economic Studies, 13, 1–9. Translation of "Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes." In K. Menger (Ed.), Ergebnisse eines Mathematischen Seminars (Vol. 8). Vienna.

Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton: Princeton University Press.

3 The Prisoners’ Dilemma, but Who Are the Players?

Two suspects are put in solitary confinement. The prosecutor is convinced that they committed a serious crime, but does not have sufficient proof to convict them in a court trial. He points out to each of them separately that they have two possibilities: to confess or not to confess. In case both do not confess, he will prosecute both of them for a few minor delinquencies, such as illegal ownership of weapons, and each will be given a sentence of two years. If both confess, they will be prosecuted together and the prosecutor will request a penalty of ten years. If only one of them confesses, the confessor will serve only one year while the other will have to reckon with the maximum penalty of twenty years.

3.1 From Game Form to Payoff Matrix

In their now classical volume "Games and Decisions," Luce and Raiffa (1957: 95) were among the first to tell the above story of the Prisoners' Dilemma, probably the most popular tale in game theory writings.¹ The tale describes a problem of whistle-blowing for those who want to save their skin when facing a court trial. The strategic problems of potential state witnesses can be illustrated by the event matrix in Matrix 3.1.

¹In Luce and Raiffa (1957: 95) we still read "Prisoner's Dilemma": The game situation is a dilemma for every single player who is involved. Today "Prisoners' Dilemma" is preferred—perhaps because it needs at least two agents to be in such a dilemma.

© Springer Nature Switzerland AG 2020 M. J. Holler and B. Klose-Ullmann, Scissors and Rock, https://doi.org/10.1007/978-3-030-44823-3_3



Matrix 3.1  Event matrix of a Prisoners' Dilemma

                               Player 2
                        not confess           confess
Player 1  not confess   (2 years, 2 years)    (20 years, 1 year)
          confess       (1 year, 20 years)    (10 years, 10 years)

The suspects are called players 1 and 2. Their set of strategies is described by the elements (i.e., the pure strategies) "confess" and "not confess." The potential prison terms of both delinquents, expressed in years, constitute the events that correspond to the strategy choices of players 1 and 2: If player 1 chooses the strategy "confess" and player 2 chooses "not confess," player 1 receives a penalty of 1 year and player 2 receives a penalty of 20 years. The specification of the players, the sets of strategies, and the events defines the game form of a game, i.e., the event matrix in the case of a matrix game. In the course of this text, we will get to know alternative ways of describing a game form, e.g., the game tree. In this chapter, we are looking at games and game forms in matrix form only. This representation is also called the normal form or strategic form of a game.

In order to capture the decision problem that a player is facing, we have to enrich the game form by the players' evaluation of the outcomes. This is done as follows: Each player assigns utility values to the various outcomes. He does so not only from his own point of view but also with regard to how he or she thinks his or her fellow player would evaluate the events. It seems appropriate to assume that the suspects assign the higher utility value to a smaller punishment. If we want to introduce a formal concept in order to rank these utility values then, as already proposed in Sect. 2.5, we can define a function ui(.) such that

"ui(e*) larger than ui(e°)" follows from "e* better than e°."  (OU)

However, all we really need is the ordinal ordering expressed by "better than," on the one hand, and "larger than," on the other. We may thus insert the values shown in Matrix 3.2. Note that if we multiply these numbers by 3 or divide them by 2.75, or any other positive real number, we get an equivalent representation. Only the ordinal relationships "larger than" and "smaller than" matter in this case. As long as larger numbers are larger and smaller numbers are smaller, we have the same game: Any monotonic transformation of the utility values will give the same strategic decision problem.

Matrix 3.2 represents the game matrix or payoff matrix of the Prisoners' Dilemma game, in short, the Prisoners' Dilemma. As already said in Chap. 2, payoffs are utilities—not money—although they are sometimes treated like money. In more traditional game theory books, in fact, payoff is identified with money. Today, if we mean money proper, we speak of money or monetary payoffs. In this book, we will speak of money when we mean money and of payoffs when we mean the evaluation of an event, i.e., utilities. For the analysis of the game in Matrix 3.2, we just need to know that the players prefer the results with larger numbers to the results with smaller numbers: The higher the value of the numbers, the better!

Matrix 3.2  Payoff matrix of the Prisoners' Dilemma

                               Player 2
                        not confess   confess
Player 1  not confess   (80, 20)      (25, 30)
          confess       (150, 10)     (50, 15)

When looking at Matrix 3.2, should we confess or not confess if we were player 1? What would you recommend to the players if you were their lawyer? If we multiply all numbers in Matrix 3.2 by 10, we come to Matrix 1.1 in Chap. 1, which shows a competitive trap: cooperation among the duopolists becomes impossible; the described decision situation is a Prisoners' Dilemma. This seems bad, but how would the demand side interpret this decision situation and its results? If we consider the buyers as additional players, the outcome is no longer inefficient. Then, Matrix 1.1 tells only part of the story, which demonstrates that not all Prisoners' Dilemma situations are bad. The evaluation depends on which players are taken into account. It is important to clarify: "Who are the players?"

Matrices 3.2 and 1.1 represent a game of the same type: They describe equivalent strategic decision situations. The payoffs (or utilities) in both matrices are chosen rather arbitrarily, of course, respecting the intuitive restriction given by condition (OU) which relates preferences and utilities. Therefore, we can substitute for the game in Matrix 3.2 a much more lucid version of the game as shown in Matrix 3.3. The matrices are absolutely equivalent when it comes to game theory. Moreover, the symmetry in the payoffs of Matrix 3.3 has no meaning whatsoever. From property (OU), it follows that player 1's utility may not be compared to the utility of player 2.

Matrix 3.3  The Prisoners' Dilemma

                               Player 2
                        not confess   confess
Player 1  not confess   (2, 2)        (0, 3)
          confess       (3, 0)        (1, 1)
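The ordinal equivalence of Matrices 3.2 and 3.3 can be illustrated with a short script of our own: only the "larger than" relations matter, and both matrices induce the same best-response pattern. The function name and encoding are ours, not the book's.

```python
# Both payoff matrices of the Prisoners' Dilemma from the text,
# indexed as m[row][col] = (payoff of player 1, payoff of player 2);
# row = player 1's strategy, col = player 2's; order: not confess, confess.
M32 = [[(80, 20), (25, 30)], [(150, 10), (50, 15)]]   # Matrix 3.2
M33 = [[(2, 2), (0, 3)], [(3, 0), (1, 1)]]            # Matrix 3.3

def best_responses(m):
    """Ordinal structure of a 2x2 game: for each choice of the opponent,
    which strategy each player prefers. Only 'larger than' matters."""
    p1 = tuple(max((0, 1), key=lambda r: m[r][c][0]) for c in (0, 1))
    p2 = tuple(max((0, 1), key=lambda c: m[r][c][1]) for r in (0, 1))
    return p1, p2

# The two matrices induce identical best-response patterns:
print(best_responses(M32) == best_responses(M33))   # True
```

Here both players' best response is always strategy 1 ("confess"), in either matrix, which is exactly what makes the two representations the same game.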

In the civil law systems of Continental Europe, the state (or crown) witness regulation is somewhat out of place and infrequently applied. In Germany, it was introduced to fight the terrorist group RAF (Rote-Armee-Fraktion, also called Baader-Meinhof-Gruppe), without much success. However, it is an integral part of the US legal system, which follows common law principles. In the investigation launched by the US Department of Justice in 2002 to look into alleged price-fixing in the DRAM computer chip market, the executives from Micron received immunity from criminal charges because they agreed to testify as state witnesses against the other parties in the price cartel. The investigation resulted in fines to the chip manufacturers Samsung, Infineon, Hynix, and Elpida Memory totaling $731 million for illegal price-fixing and violating antitrust law.² However, in 2006, the willingness of Micron executives to act as state witnesses did not give them a carte blanche in the antitrust lawsuit against seven computer chip-makers, among them Micron Technology Inc., filed by 34 state prosecutors in the US District Court for the Northern District of California, charging them with conspiring to inflate prices. US law provides for a fine as high as triple the damages.

²Süddeutsche Zeitung, 15/16 July 2006, p. 26. Also "Discovery Lessons" by Bartko and Bunzel in The Recorder, Autumn (2008), www.callaw.com, and "Micron Settles Class-Action Lawsuit Alleging Price Fixing," The Associated Press, published 9:36 PM ET Wed, 10 Jan 2007, updated 4:13 PM ET Thu, 5 Aug 2010.

In April 2002, Sotheby's chairman, Alfred Taubman, and its chief executive, Diana Brooks, were found guilty of conspiring with Christie's to fix commissions. Mr. Taubman served ten months of a one-year prison sentence; Ms. Brooks was given six months' house arrest, a $350,000 fine, and 1000 hours of community service. During the time of house arrest, she was allowed to leave her 12-room, $5 million apartment for two hours each Friday to go grocery shopping at any store selling food or products related to food preparation, said James T. Blackford, a probation supervisor who oversaw Ms. Brooks' case for the Federal District Court. Ms. Brooks had pleaded guilty to price-fixing and then testified against her boss Alfred Taubman. No one was charged at Christie's, which had blown the whistle on the commission-fixing.

To understand the different types of behavior, it might be useful to take into consideration the differences in the ownership structure of the two auction houses. In a special report on the art market, published in the Economist (November 26, 2009), we read that "Sotheby's is a quoted company whereas Christie's, once listed, was taken private in 1999 by its current owner, Mr. François Pinault. Christie's business has since hugely expanded, partly thanks to Mr. Pinault's pivotal position in the art world." As a consequence, "the company can pick and choose what information it wants to reveal." Perhaps it comes as a surprise that "it has in fact become more open over the past ten years." As for the price-fixing scandal, it may seem that Mr. Pinault wanted to cripple Sotheby's. However, as pointed out to us by Professor Isidoro Mazza (University of Catania), another interpretation is that Mr. Bernard Arnault, at the time owner of the auction house Phillips de Pury & Company, was the main beneficiary. He could have reinforced his position against the two rivals by buying Sotheby's shares at a 50% discount. As it was impossible to take over Christie's, Sotheby's was the right target for Mr. Arnault. However, he did not buy Sotheby's and sold Phillips when he lost the momentum to close the gap with the other two auction houses.

In the described case, there are at least two intertwined whistle-blowers: Ms. Diana Brooks and Christie's. This makes the analysis of the decision making, even when looking backward, rather difficult. Moreover, whistle-blowing is not always related to a Prisoners' Dilemma game. In the case of the Micron executives and Ms. Brooks, whistle-blowing was perhaps the dominant strategy, while the players on the other side had no such option. If we introduce the law enforcers as an additional player, this whistle-blowing case is of course no longer a Prisoners' Dilemma—the outcome is no longer inefficient. Whether Christie's whistle-blowing was a dominant strategy is not that obvious. We will look into the case of Christie's and Sotheby's again.

3.2 Equilibrium in Dominant Strategies

Matrix 3.3 shows that both suspects profit from the strategy "confess," independent of the other player's strategy. "Confess" is a strictly dominant strategy for each player, as it is the best strategy for each player, irrespective of what strategy the other player chooses. The solution of the game is an equilibrium in strictly dominant strategies. However, the outcome, which corresponds to the utility pair (1, 1), is inefficient because both players could improve their results if they chose the strategy "not confess" instead of "confess." Then they would enjoy the utility pair (2, 2) instead of (1, 1). But the individual player has no incentive to deviate from the strictly dominant strategy "confess," as this is the best strategy one can choose, whatever strategy the other player selects, i.e., even when the other player chooses "not confess." Of course, we assumed that a player is strictly interested in maximizing his or her own utility and not the sum of the utilities of both players.

To sum up, the characteristics of a Prisoners' Dilemma are: (a) Each player has a strictly dominant strategy. (b) The equilibrium in strictly dominant strategies leads to an inefficient outcome.

As to property (a), the following question suggests itself: Is it important that both suspects are put in different cells and cannot communicate with each other? The answer is straightforward: If the players can assume that the general attorney, i.e., the prosecutor, told both suspects the same story, the answer is "no." Player 1 would choose the strictly dominant strategy "confess" even if he knew that player 2 would learn his (player 1's) strategy decision before making his own decision. Player 2 reasons and chooses accordingly. From a player's point of view, a strictly dominant strategy is always the best decision, and knowing the other player's strategy therefore has no impact on one's decision.
The first sentence of the Prisoners' Dilemma story ("Two suspects are put in solitary confinement.") at the beginning of this chapter has led to many misinterpretations. Readers often forget that rational players will choose strictly dominant strategies, and therefore it does not make any difference for the (game-theoretical) outcome whether the players are kept segregated or whether they can communicate with each other. However, it is important that the players cannot make any binding agreements and that third parties (e.g., the Mafia) do not intervene.

Another misinterpretation of the Prisoners' Dilemma results from the tacit assumption that player 1 can decide not only for himself but also for player 2 to choose "not confess." However, it is the nature and essence of a game that players are in a position to control their own strategies, but not the strategies of the other players.
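The two defining properties (a) and (b) can be checked mechanically for a 2×2 payoff matrix such as Matrix 3.3. The following sketch uses our own function names and the row/column encoding introduced for the matrices above; inefficiency is read, as in the text, as both players being able to improve simultaneously.

```python
# Check properties (a) and (b) of a Prisoners' Dilemma for a 2x2 game
# given as m[row][col] = (u1, u2); strategy index 1 is "confess" here.
def strictly_dominant(m, player):
    """Return the strictly dominant strategy of a player, or None."""
    if player == 1:
        if all(m[1][c][0] > m[0][c][0] for c in (0, 1)):
            return 1
        if all(m[0][c][0] > m[1][c][0] for c in (0, 1)):
            return 0
    else:
        if all(m[r][1][1] > m[r][0][1] for r in (0, 1)):
            return 1
        if all(m[r][0][1] > m[r][1][1] for r in (0, 1)):
            return 0
    return None

def is_prisoners_dilemma(m):
    """(a) both players have strictly dominant strategies, and
       (b) the resulting outcome is inefficient: some other outcome
       makes both players strictly better off."""
    d1, d2 = strictly_dominant(m, 1), strictly_dominant(m, 2)
    if d1 is None or d2 is None:
        return False
    eq = m[d1][d2]
    return any(
        m[r][c][0] > eq[0] and m[r][c][1] > eq[1]
        for r in (0, 1) for c in (0, 1)
    )

M33 = [[(2, 2), (0, 3)], [(3, 0), (1, 1)]]   # Matrix 3.3
print(is_prisoners_dilemma(M33))             # True
```

Note that the check never looks at who moves first or what the players know about each other, mirroring the point made above: strict dominance makes communication irrelevant.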

3.3 Catch-22 and Other Social Traps

"From now on I'm thinking only of me." Major Danby replied indulgently with a superior smile: "But, Yossarian, suppose everyone felt that way." "Then," said Yossarian, "I'd certainly be a damned fool to feel any other way, wouldn't I?"

If we look at the logical structure of this conversation taken from Heller's (1961) famous novel "Catch-22," we cannot but agree with Yossarian. Acting for one's own benefit is a dominant strategy in the situation discussed between Major Danby and Yossarian and describes individually rational behavior. If the result of this behavior is socially not desirable, as we can conclude from the argument of Major Danby, then we are facing a social trap that, in general, constitutes a Prisoners' Dilemma.

The Prisoners' Dilemma has often been interpreted as a description of a decision situation that implies a fundamental conflict between individually rational behavior and socially desirable, i.e., collectively rational, behavior. In general, the latter presupposes behavior that leads to an efficient outcome such that no individual member of the society under consideration can be made better off without reducing the benefits of another member.³ However, cooperative behavior leading to an efficient outcome is not always achieved when it is in conflict with individual rationality. We can achieve such a generalization of the Prisoners' Dilemma in terms of cooperative and non-cooperative behavior by renaming the strategies. A comparison of Matrices 3.3 and 3.4 illustrates this generalization. The strategy "not confess" is interpreted as "cooperate," and "confess" is replaced by "not cooperate." Obviously, Matrix 3.4 represents a Prisoners' Dilemma if a > b > c > d applies.

³This is called Pareto optimality or Pareto efficiency, honoring the contributions of Vilfredo Pareto (1848–1923) to modern economics.

Matrix 3.4  The generalized Prisoners' Dilemma

                                 Player 2
                         cooperate   not cooperate
Player 1  cooperate      (b, b)      (d, a)
          not cooperate  (a, d)      (c, c)

Matrix 3.4 represents a social dilemma, which "is defined as a situation of strategic interdependence in which the decisions of individually rational actors lead to an inferior outcome for all or some parties than the decisions of 'collectively rational' actors" (Diekmann and Przepiorka 2016: 1311). Whether we are dealing with cartel agreements as in the game in Matrix 1.1 or with financing public or collective goods, the socially desirable outcome of (cooperate, cooperate) is not reached because it contrasts with individually rational behavior. This is the essence of the dilemma.

The strategy "not cooperate" is often identified as free-riding. A free-rider uses a collective good without giving something in return—like traveling on a public bus without paying for the ticket. The free-riding analogy also applies to the polluter who dumps his old tires in the woods, to the power station that channels its cooling water into the Danube river, or to fishing boats using drag nets. It applies just as well to so-called tax evaders, to people obtaining subsidies under false pretenses, and, more generally, to users of public goods who either do not contribute properly to their provision and maintenance or exploit those goods to a disproportionately larger extent than they are entitled to from a legal or moral point of view. These individually rational actors have in common that their own utility from exploiting the collective good "environment" or "state" is larger than the costs to be allotted to them as members of the society which they are "exploiting." In case everybody acts like this, the environment will be destroyed and the state is reduced to a mere power game without any resources to provide public goods. From a social point of view, the outcome is inefficient and individually undesirable, and yet no individual can improve his or her outcome by choosing a "socially desirable" behavior. This is also true for the game in Matrix 3.4: the collective good "cooperation" is not provided although each individual player would be better off if it were.
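The claim that any ordering a > b > c > d yields a Prisoners' Dilemma can be illustrated numerically. The sketch below is our own; the sample parameter values are arbitrary and only required to respect the ordering.

```python
# The generalized game of Matrix 3.4, built from parameters a > b > c > d;
# rows/cols: 0 = cooperate, 1 = not cooperate.
def matrix_34(a, b, c, d):
    return [[(b, b), (d, a)], [(a, d), (c, c)]]

def defection_dominates_and_is_inefficient(m):
    """'not cooperate' strictly dominates for both players, yet
    (cooperate, cooperate) makes both strictly better off."""
    dom1 = all(m[1][c][0] > m[0][c][0] for c in (0, 1))
    dom2 = all(m[r][1][1] > m[r][0][1] for r in (0, 1))
    ineff = m[0][0][0] > m[1][1][0] and m[0][0][1] > m[1][1][1]
    return dom1 and dom2 and ineff

# A few sample orderings with a > b > c > d (illustrative values):
for a, b, c, d in [(3, 2, 1, 0), (10, 7, 2, -5), (4.5, 4.4, 0.1, 0.0)]:
    print(defection_dominates_and_is_inefficient(matrix_34(a, b, c, d)))
```

All three cases print True: the ordering alone, not the particular numbers, produces the social trap.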

3.4 Ways Out of the Dilemma

The possibility of free-riding prevents collective goods from being provided voluntarily. Therefore, many authors⁴ used the Prisoners' Dilemma as an illustration of the Hobbesian Jungle in which, as Thomas Hobbes (1588–1679) stated, life is "solitary, poor, nasty, brutish, and short." As a solution, he postulated an unrestricted state authority, establishing itself as Leviathan, to defeat anarchy (see Hobbes 1996 [1651]).

Of course, the social traps which can be described by the Prisoners' Dilemma have inspired many attempts to find efficient solutions. We can, for instance, show that cooperative behavior on a voluntary basis is possible if the Prisoners' Dilemma situation is repeated infinitely, or rather with an unforeseeable end, and the players do not discount their utility too much. Such a setting is called a repeated Prisoners' Dilemma game or Iterated Prisoners' Dilemma. (We will further discuss it in Chap. 9.) Many real-world decision situations have features that can be approximated by such a setting. The standard objection to this solution is: If the game is played repeatedly with an unforeseeable end, then, in fact, it is no longer a Prisoners' Dilemma. To avoid confusion, the original game may be specified as a one-shot Prisoners' Dilemma.

Similar objections hold when the bilateral relation reigning in the original Prisoners' Dilemma is embedded in a social network, and information about cooperative and non-cooperative behavior is spread through this net. A one-time deviation from the "path of virtue" may lead to lasting ostracism. Social networks are often based on structures that minimize the degree of violating cooperation. In general, if violations are not sanctioned, then the networks are highly unstable and prone to dissolve. It is the threat of punishment, whether in social networks or in repeated social interaction, that may bring about cooperation if the one-shot decision situation constitutes a Prisoners' Dilemma.

⁴See, for example, Taylor (1976) in "Anarchy and Cooperation," and Heap and Varoufakis (1995: 148). However, the Hobbes scholar Tuck (1989: 65ff) argues that the source of the Hobbesian Jungle is not the result of conflicting interests but the fact that people have no language to communicate on moral values and thus to coordinate their behavior. Given this Hobbesian uncertainty, it is not easy to justify that we assign payoffs to a decision maker that make "not cooperate" a strictly dominant strategy, as in Matrix 3.4. If we follow Tuck, the Stag Hunt Game with a low degree of social trust seems to deliver a more adequate description of the Hobbesian Jungle (see Sect. 8.4).

In the case of the chip manufacturers mentioned in Sect. 3.1 above, who were sentenced to fines of $731 million because of price agreements and who are now threatened with class action charges,⁵ the time dimension mixes in a very complex way with the network dimension. The notion of embeddedness in a network is not the same for all companies concerned. Apparently, Micron's managers interpreted the first round of the proceedings as a one-shot Prisoners' Dilemma, and therefore, in 2002, they acted as state witnesses. It would be interesting to see for how many months and years these managers continued to run the company.

Cooperative behavior on a voluntary basis is possible if both players expect that the fellow player will cooperate if he himself chooses the cooperative strategy—and that the fellow player chooses the non-cooperative strategy if he himself acts non-cooperatively. Under this hypothesis, the players follow conditional strategies that will motivate them—on the basis of individually rational behavior—to choose cooperative strategies. The outcome is an expectation equilibrium. However, in a (one-shot) Prisoners' Dilemma the strategies are not conditional. Why should player 1 choose "cooperate" if this strategy is played by player 2? Because of mirror neurons? (See Sect. 1.6.)

Regarding the Prisoners' Dilemma in Matrix 3.4, cooperative behavior on a voluntary basis might be the outcome if both players act altruistically and evaluate the other player's utility positively, revising their own utility in the game accordingly. However, as long as a > b > c > d holds, Matrix 3.4 describes a Prisoners' Dilemma, irrespective of any altruism, envy, or other type of fellow-feeling. But if a > b > c > d does not apply because of too much altruism or envy, then the game in Matrix 3.4 is no longer a Prisoners' Dilemma.

A similar argument holds for the Mafia solution of the Prisoners' Dilemma game. Here, the game is not modified by an adequate framing of the players' utilities but by "shaping" the events. If the Mafia punishes "confess" by killing the traitor, Matrix 3.5 represents a corresponding game form.
But if a > b > c > d does not apply because of too much altruism or envy, then the game in Matrix 3.4 is not any longer a Prisoners’ Dilemma. A similar argument holds for the Mafia solution of the Prisoners’ Dilemma game. Here, the game has not been modified by an adequate framing of the players’ utilities but by “shaping” the events. If the Mafia punishes “confess” with killing the traitor, Matrix 3.5 represents a corresponding game form. 5A

class action is a form of lawsuit in which a large group of people collectively brings a claim to court or in which a “class” of defendants is being sued. This form of collective lawsuit is rather common in the USA but also gets increasingly popular in the European legal systems.


Matrix 3.5  Event matrix of a Mafia solution

                               Player 2
                        not confess           confess
Player 1  not confess   (2 years, 2 years)    (20 years, death)
          confess       (death, 20 years)     (death, death)
Of course, the players evaluate the event "death" as the worst, which is ensured by the Mafia's intervention, and "confess" is certainly not a dominant strategy. On the contrary, the game that corresponds to the game form in Matrix 3.5 is likely to have "not confess" as the dominant strategy if we assign plausible payoffs to the players. Mafia members, learning from history, experience, and introspection, can discover this solution without deeper knowledge of game theory.
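One plausible assignment of payoffs makes this concrete. The utility numbers below are our own choice (u(2 years) = 2, u(20 years) = 1, u(death) = 0); any assignment ranking "death" worst gives the same dominance result.

```python
# Matrix 3.5 with illustrative utilities of our own:
# u(2 years) = 2, u(20 years) = 1, u(death) = 0.
# rows/cols: 0 = not confess, 1 = confess.
mafia = [[(2, 2), (1, 0)], [(0, 1), (0, 0)]]

# "not confess" (index 0) is now strictly dominant for both players:
p1_dom = all(mafia[0][c][0] > mafia[1][c][0] for c in (0, 1))
p2_dom = all(mafia[r][0][1] > mafia[r][1][1] for r in (0, 1))
print(p1_dom and p2_dom)   # True
```

The Mafia has not changed the players' preferences over prison terms; it has changed the event function, and the dominance pattern flips with it.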

3.5 Who Are the Players? Introducing the Mafia to the Prisoners’ Dilemma game with the ensuing changes in the utilities of the two suspects leads us to the question of who are the players in this game. Apparently, the two suspects would commit a serious mistake in their decisions if they neglected the Mafia. This is drastically shown when comparing the game forms in Matrices 3.1 and 3.5. The conclusion is: A careful analysis of the decision situation with game-theory tools must always start with the question: Who are the players? In many cases, the answer to this question is already an essential element of analyzing the decision situation. To formulate the decision model adequately is an indispensable prerequisite of a “good” decision. In many cases, the question “Who are the players?” is highly relevant: It gives a new dimension even to the story of the two suspects at the beginning of this chapter. Now it is no longer the story of the two suspects but it becomes the story of the prosecutor as well. The prosecutor needs a confession in order to convict the two suspects as criminals. We can only guess the motives for this perhaps a chance of being promoted or his personal involvement for law and


order is his driving force; we do not know. Yet we understand that he definitely wants a confession. We might even suspect that he would prefer that both suspects confess. This is to be concluded from the design of the decision situation, described in Matrix 3.1, with which the two suspects are confronted. Yet this decision situation is only part of the problem. The prosecutor, as well, has to make a decision: Which event matrix should he present to the suspects? Thus, he should be seen as a player in a “larger game”; Matrix 3.1 represents only a subgame of it. Later in this text, we will learn more about subgames: We will see that it is strongly recommended to take subgames into account, especially when the sequential form of a game matters.

Matrix 3.6  The Prisoners’ Dilemma as subgame

                              Player 2
                              not confess     confess
  Player 1   not confess      (2,2,0)         (0,3,1)
             confess          (3,0,1)         (1,1,2)

Looking at the likely preferences of the prosecutor, we obtain for each of the events in Matrix 3.1 a triplet of payoffs. These triplets are written as entries of Matrix 3.6 in the form of three-dimensional vectors: The first and second digits are the values of the suspects (players 1 and 2) while the third digit is the value of the prosecutor. Vector (3, 0, 1) means that the event (confess, not confess) is the best outcome for player 1 and the worst for player 2, whereas the prosecutor evaluates this outcome better than (not confess, not confess) but worse than (confess, confess). The structural properties of the Prisoners’ Dilemma—strictly dominant strategies and inefficient outcome—are valid for the game in Matrix 3.6 but with a few restrictions. The two suspects still have strictly dominant strategies and the corresponding outcome (confess, confess) constitutes an equilibrium in dominant strategies. But this outcome is no longer inefficient since, starting from the choice (confess, confess), not all players can improve their situation. We see that for player 3, the prosecutor, every other outcome is worse than (confess, confess). Thus, (confess, confess) constitutes a Pareto efficient result: Nobody can be made better off without somebody being made worse off.
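The two claims about Matrix 3.6, namely that both suspects still have “confess” as a strictly dominant strategy and that (confess, confess) is Pareto efficient once the prosecutor is counted, can be verified mechanically. A minimal Python sketch, our own illustration with our own function names:

```python
# Sketch: Matrix 3.6 as a three-player payoff table (suspects 1 and 2 choose;
# the prosecutor, player 3, only receives a payoff). We verify that "confess"
# is strictly dominant for both suspects and that (confess, confess) is
# Pareto efficient once the prosecutor's payoff is counted.

NC, C = "not confess", "confess"
payoffs = {
    (NC, NC): (2, 2, 0),
    (NC, C):  (0, 3, 1),
    (C,  NC): (3, 0, 1),
    (C,  C):  (1, 1, 2),
}

def dominant(player):  # player 0 picks the row, player 1 the column
    for cand in (NC, C):
        other = C if cand == NC else NC
        if all(
            payoffs[(cand, opp) if player == 0 else (opp, cand)][player]
            > payoffs[(other, opp) if player == 0 else (opp, other)][player]
            for opp in (NC, C)
        ):
            return cand
    return None

def pareto_efficient(profile):
    """No other profile helps some player without hurting another."""
    base = payoffs[profile]
    for alt, vals in payoffs.items():
        if alt != profile and all(v >= b for v, b in zip(vals, base)) \
           and any(v > b for v, b in zip(vals, base)):
            return False
    return True

print(dominant(0), dominant(1))   # confess confess
print(pareto_efficient((C, C)))   # True
```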


By including the prosecutor, we can solve the conflict between individually rational and collectively rational behavior. Under the assumption that the preferences of the prosecutor reflect the preferences of society, and that the suspects—by their criminal behavior—position themselves outside of society, there is no social dilemma anymore. In fact, if we think that the suspects are no longer members of society, the outcome (confess, confess) would be the only efficient one. It gives maximal utility to the prosecutor and thus to the society which he represents. Our discussion about the relevant set of players leads us to the somewhat paradoxical but not really surprising result: The Prisoners’ Dilemma is no longer a Prisoners’ Dilemma if we include the prosecutor. By discussing the set of players N, we often find features which are essential for interpreting the decision situation. The definition of N is an important instrument of game theory, although often not specifically addressed. Even if the decision situation of a price-setting duopoly were a Prisoners’ Dilemma, as assumed in Matrix 1.1, is it really a social trap? Or do consumers profit from more competition on this market due to lower prices, better quality, or both? Who are the relevant players? Who is the society? From a social point of view, not every Prisoners’ Dilemma is bad! To a large extent, the interpretation depends on which society is relevant, or on who the people are who define the society by whose standards the result is evaluated. The Prisoners’ Dilemma story itself is an illustration that the relevant society and its value system matter. But it is not always that obvious what the value system of the society could be. In general, however, game theory deals with individuals and their decisions, and the individuals’ preferences motivate their choices. If players represent social values, like the prosecutor, social preferences become relevant. These social preferences are held by individuals.
Another type of social preference becomes relevant if the player is a social entity, e.g., a family, a firm, a government, a state, etc. Then there is the problem of aggregating the preferences of the decision makers within the entity in order to get the payoff function of the collective player. A related problem is to unravel the collective set of strategies, i.e., to determine what the collective entity can do, which plans are feasible and which plans are out of reach, etc. to capture the power of the entity. The aggregation of preferences and the unraveling of the strategy sets are likely to be games in themselves, played inside the collective decision body. Thus, “…intrapersonal strategic conflicts are transformed into interpersonal ones to which game theory can and should be applied” (Güth 1991: 405).


3.6 Then Strike

The widely quoted Chinese military strategist Sun Tzu wrote: “To a surrounded enemy you must leave a way of escape.”6 Sun Tzu’s ninth-century commentator Tu Mu adds: “Show him there is a road to safety, and so create in his mind the idea that there is an alternative to death. Then strike” (Sun Tzu 1963: 110).7 The recommendation “Then strike” is sometimes omitted in the discussion, but it is a rational consequence of “leaving a way of escape.” We will learn that backward induction should allow the surrounded party to deduce that this is their likely fate and therefore inspire them to “fight to death.” However, commanders do not always trust that this argument is understood. Again, Sun Tzu’s commentator Tu Mu is an inspiring source: “It is a military doctrine that an encircling force must leave a gap to show the surrounded troops there is a way out, so they will not be determined to fight to death. Then, taking advantage of this, strike. Now, if I am in encircled ground, and the enemy opens a road in order to tempt my troops to take it, I close this means of escape so that my officers and men will have to fight to death” (Sun Tzu 1963: 132f). “Fighting to death” is a dominant strategy if the opening of a road of escape is just a trap laid out by the enemy to gain easy victory.

Sun Tzu’s text first came to Europe when Father J. J. M. Amiot, a Jesuit missionary to Peking, published his translation in Paris in 1772.8 We should keep this date in mind when we read the following quote taken from Niccolò Machiavelli’s Discourses (Book III, Chapter 12), published in 1531.

The ancient commanders of armies, who well knew the powerful influence of necessity, and how it inspired the soldiers with the most desperate courage, neglected nothing to subject their men to such a pressure, whilst, on the other hand, they employed every device that ingenuity could suggest to relieve the enemy’s troops from the necessity of fighting. Thus they often opened the way for the enemy to retreat, which they might easily have barred; and closed it to their own soldiers for whom they could with ease have kept it open. (Machiavelli 1882 [1531]: 361)

6 Sun Tzu (1963: 109). This quote corresponds with Dixit and Nalebuff (1991: 136), who cite Sun Tzu as “When you surround an enemy, leave an outlet free.” We do not know when Sun Tzu lived and wrote his “The Art of War.” Most likely (if at all) he lived in the period of the Warring States (453–221 B.C.). See Samuel Griffith’s extensive introductory comments that accompany his translation of the Sun Tzu text.
7 For a sophisticated game-theoretic interpretation of Sun Tzu’s “The Art of War,” see Niou and Ordeshook (1994).
8 See Griffith in his preface to Sun Tzu’s text (Sun Tzu 1963: ix).

This sounds very familiar after reading Sun Tzu and later interpretations. But Machiavelli also considers the other side of the fighting and its rationale.

A skillful general, then, who has to besiege a city, can judge of the difficulties of its capture by knowing and considering to what extent the inhabitants are under the necessity of defending themselves. If he finds that to be very urgent, then he may deem his task in proportion difficult; but if the motive for resistance is feeble, then he may count upon an easy victory. (Machiavelli 1882 [1531]: 361)

Here “necessity” is the concept that explains why people will fight, attack, or surrender. Machiavelli, again following the ancient Roman historian Titus Livius, makes extensive use of it to explain the essence of successful military strategies when it comes to capturing a city.

…a captain who besieges a city should strive by every means in his power to relieve the besieged of the pressure of necessity, and thus diminish the obstinacy of their defence. He should promise them a full pardon if they fear punishment, and if they are apprehensive for their liberties he should assure them that he is not the enemy of the public good, but only of a few ambitious persons in the city who oppose it. Such a course will often facilitate the siege and capture of cities. (Machiavelli 1882 [1531]: 362)

Of course, often there are wise people among the besieged who understand that those who besiege a city are not necessarily friends of its inhabitants. It would be a stretch for them to believe that the attackers are only a threat to “a few ambitious persons in the city who oppose” the takeover. Note that the strength of the above promise drives a wedge between the besieged: Those who warn and object fall into the basket of the “few ambitious persons” who oppose the takeover. This implies that those who do not oppose expect a brighter future than those who do. Then preaching resistance is not a strictly dominant strategy. Of course, “[A]rtifices of this kind are quickly appreciated by the wise, but the people are generally deceived by them. Blinded by their eager desire for present peace, they do not see the snares that are concealed under these liberal promises, and thus many cities have fallen into servitude” (Machiavelli 1882 [1531]: 362).


Machiavelli gives a number of cases which support this observation. Here is a historical example which, however, shows that such a promise still might have an effect, despite its strategic “inconsistency” pointed out by game-theoretical thinking and Machiavelli’s reasoning.

This was the case with Florence in our immediate times, and in ancient times with Crassus and his army. Crassus well knew that the promises of the Parthians were not to be trusted, and that they were made merely for the purpose of removing from the minds of the Roman soldiers the impression of the necessity of defending themselves. Yet so blinded were these by the offers of peace that had been made by the enemy, that Crassus could not induce them to make a vigorous resistance. (Machiavelli 1882 [1531]: 362)

However, the opening of a road of escape can be a credible strategy of the attacker when resistance is getting too strong and the siege is too costly. Of course, this strategy can only be successful if the besieged party is quite sure that the opening is not a trick and that no deadly strike will follow. If the attackers are strong enough for a siege, but too weak for a battle, then this expectation seems to be justified. However, this is an unlikely case. Still:

C. Manilius had led his army against the Veientes, and, a part of the troops of the latter having forced a passage into his intrenchments, Manilius rushed with a detachment to the support of his men, and closed up all the issues of his camp, so that the Veientes could not escape. Finding themselves thus shut in, they began to combat with such desperate fury that they killed Manilius, and would have destroyed the rest of the Roman army if one of the Tribunes had not the sagacity to open a way for them to escape. This shows that the Veientes, when constrained by necessity, fought with the most desperate valor; but when they saw the way open for their escape, they thought more of saving themselves than of fighting. (Machiavelli 1882 [1531]: 363)

Now that we are familiar with the strategy of opening a road of escape and the possible counter-actions, we find ample examples in politics and in the economy to which this structure applies. Often it turns out that both parties have a strictly dominant strategy and the outcome will be inefficient. The above examples demonstrate that often there is a third agent, a general or a tribune, who arranges the situation such that a strictly dominant strategy of “fighting to the end” prevails, just like the district attorney designed the decision situation of the two suspects such that “confess” became the dominant strategy.


In the news of April 22, 2017, an officer of the Iraqi army said that their troops attacked the city of Mosul from three sides, giving the IS fighters the possibility to take flight or to surrender.

3.7 Tosca’s Dominant Strategy

The comparison of the quotes by Sun Tzu and by Niccolò Machiavelli shows an amazing similarity. If the history of publication and translation holds, then Machiavelli could not have known Sun Tzu’s text. And yet, is the similarity really that amazing? Are the recipes given and the conclusions presented by the two authors not straightforward, once we assume rational behavior for the various parties that are involved in war and war-like situations? If you have a strictly dominant strategy, then choose it, irrespective of what you expect the other agents to do. The conclusion also extends to situations where only one of two players has a dominant strategy. However, a higher degree of rationality is required: the second player has to identify the strictly dominant strategy of the first player and then choose a best reply to it. The second player has to assume that the first player is rational and can identify his or her strictly dominant strategy. Moreover, the second player has to pick his best reply, given the dominant strategy of the other player. Matrix 3.7 illustrates a corresponding situation. Note that the decision situation does not represent a Prisoners’ Dilemma as only one of the two players has an unconditional strictly dominant strategy.

Matrix 3.7  Defend or not defend?

                              Player 2
                              defend    not defend
  Player 1   allow escape     (2,2)     (3,0)
             do not allow     (1,1)     (0,3)

Please check whether it is obvious that player 2 will choose strategy “defend.” Given rationality, and given that player 2 assumes that player 1


is rational, then this result should be obvious. Of course, we may doubt whether the payoffs of player 2 make sense. However, this is a question that cannot be answered by game theory. Numerous classroom tests which the first author carried out at the Universities of Aarhus, Catania, Hamburg, Helsinki, and Paris (Sorbonne) showed that almost all students who participated chose their dominant strategy when they had to decide from the perspective of the corresponding player, similar to player 1 choosing “allow escape” in Matrix 3.7. Yet, many students failed to assume that their counterpart would act accordingly when the latter had a strictly dominant strategy—but they themselves did not have one. They chose “not defend,” even after ten hours of introduction to game theory. Note that player 2 receives the smallest payoff for “not defend” when player 1 chooses the dominant strategy “allow escape.” A player 2 deciding in favor of “not defend” shows that he does not think strategically. Yet, if he put himself in the place of player 1, he would be aware that player 1 has the dominant strategy “allow escape.” Consequently, he would expect player 1 to choose this strategy. Following such thinking, he would select the strategy “defend” as his best reply, instead of “not defend.” Many participants in these tests and, alas, also in game theory exams seem to be unable to put themselves in the place of their fellow players. Thus, they do not live up to what the application of game theory is all about. This is partly because they do not recognize the strategic decision problem represented by the payoff matrix. Of course, the game model, i.e., the game-theoretical mapping of reality, does not fully represent the decision situation as we find it in the real world. Every model is necessarily an abstraction: It represents reality only imperfectly—otherwise it would not be a model.
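The two-step reasoning, first spotting player 1’s strictly dominant strategy and then computing player 2’s best reply to it, can be written out explicitly. The following Python sketch of Matrix 3.7 is our own illustration:

```python
# Sketch of the two-step reasoning in Matrix 3.7: player 1 has a strictly
# dominant strategy; a rational player 2 anticipates it and best-replies.
ALLOW, DENY = "allow escape", "do not allow"
DEF, NOT = "defend", "not defend"

payoffs = {
    (ALLOW, DEF): (2, 2), (ALLOW, NOT): (3, 0),
    (DENY,  DEF): (1, 1), (DENY,  NOT): (0, 3),
}

# Step 1: find player 1's strictly dominant row, if any.
def dominant_row():
    for cand in (ALLOW, DENY):
        other = DENY if cand == ALLOW else ALLOW
        if all(payoffs[(cand, c)][0] > payoffs[(other, c)][0] for c in (DEF, NOT)):
            return cand
    return None

# Step 2: player 2's best reply to that row.
def best_reply(row):
    return max((DEF, NOT), key=lambda c: payoffs[(row, c)][1])

row = dominant_row()
print(row, "->", best_reply(row))   # allow escape -> defend
```

A student choosing “not defend” ignores step 2: against “allow escape,” that strategy yields payoff 0 instead of 2.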
Often, people who are not familiar with applying game theory do not leave aside real-world information and prejudices that support their intuition. Consequently, they misinterpret the game model and draw conclusions that are supported neither by the model nor by game theory. Heap and Varoufakis (1995: 147) describe a Prisoners’ Dilemma as depicted in Puccini’s opera Tosca. In the opera, the police chief, Scarpia, lusts after Tosca. He has an opportunity to pursue his lust because Tosca’s lover has been arrested and condemned to death. This enables Scarpia to offer to fake the execution of Tosca’s lover if she agrees to submit to his advances. Tosca agrees and Scarpia orders blanks to be substituted for the bullets of the firing squad. However, as they embrace, Tosca stabs and kills Scarpia. Unfortunately, Scarpia has also reneged on the arrangement: some bullets were real.


What could be socially beneficial—collectively rational—in this case? Tosca not stabbing Scarpia and Tosca’s lover surviving? Is stabbing Scarpia a dominant strategy? If so, then Scarpia should have taken care that Tosca had no knife with her when he embraced her. It seems more obvious that killing Tosca’s lover is a dominant strategy, and a feasible one. Tosca should have understood this and rejected the agreement—and should have stabbed the police chief at the first possible instance. However, perhaps there was no such instance.

References

Bartko, J. E., & Bunzel, R. (2008). Discovery lessons. The Recorder (Autumn). www.callaw.com.
Diekmann, A., & Przepiorka, W. (2016). “Take one for the team!” Individual heterogeneity and the emergence of latent norms in a volunteer’s dilemma. Social Forces, 94, 1309–1333.
Dixit, A. K., & Nalebuff, B. J. (1991). Thinking strategically: The competitive edge in business, politics, and everyday life. New York and London: Norton.
Economist. (2009, November 28). Suspended animation: A special report on the art market. The Economist, pp. 1–16.
Güth, W. (1991). Game theory’s basic question—Who is the player? Examples, concepts and their behavioral relevance. Journal of Theoretical Politics, 3, 403–435.
Heap, S. P. H., & Varoufakis, Y. (1995). Game theory: A critical introduction. New York: Routledge.
Heller, J. (1961). Catch-22. London: Corgi Books.
Hobbes, T. (1996 [1651]). Leviathan (R. Tuck, Ed., Revised Student ed.). Cambridge: Cambridge University Press.
Luce, R. D., & Raiffa, H. (1957). Games and decisions. New York: Wiley.
Machiavelli, N. (1882 [1531]). Discourses on the first ten books of Titus Livius. In The historical, political, and diplomatic writings of Niccolò Machiavelli (C. E. Detmold, Trans., 4 Vols.). Boston: James R. Osgood and Co.
Niou, E., & Ordeshook, P. C. (1994). A game-theoretic interpretation of Sun Tzu’s The Art of War. Journal of Peace Research, 31, 161–174.
Sun Tzu. (1963). The Art of War (S. B. Griffith, Trans.). Oxford: Oxford University Press.
Taylor, M. (1976). Anarchy and cooperation. London: Wiley.
Tuck, R. (1989). Hobbes: A very short introduction. Oxford: Oxford University Press.

4 The Nash Equilibrium

... is the general solution concept for non-cooperative games. By general we mean the following: Many game theorists postulate that, for a non-cooperative game, only a Nash equilibrium is justified to define the outcome. This is because in a Nash equilibrium no player has an incentive to behave differently, i.e., to opt for another strategy than the one provided for in the equilibrium strategy vector. One should add: given the strategies of the other players. However, is this criterion sufficient to prefer the Nash equilibrium to other solution concepts (e.g., the Maximin solution)? We will return to this question. Why are we confronted with the question of the appropriate solution concept in the first place? Not every game has an equilibrium in dominant strategies; the outcome is not always as convincing as in the Prisoners’ Dilemma. In the Chicken Game, for instance (cf. Sect. 4.3), neither of the two players has a strictly dominant strategy, not even a weakly dominant one. In order to be able to evaluate the outcome of such a game and to help the players choose appropriate strategies, we need another solution concept. The Nash equilibrium offers this.

4.1 On the Definition of the Nash Equilibrium

In his Ph.D. thesis in mathematics, first published in 1950 as a one-page note, John Nash proved that every game with a limited number of players, in which players have a limited number of pure strategies, has at least one equilibrium.1

1 Nash (1950a, 1951) contains two versions of the proof.

© Springer Nature Switzerland AG 2020 M. J. Holler and B. Klose-Ullmann, Scissors and Rock, https://doi.org/10.1007/978-3-030-44823-3_4



For his proof to hold, the number of players and strategies may be very large, but not infinite. More specifically, the proof applies to finite games only, i.e., to games in which each player has only a finite number of pure strategies. For instance, a game which defines the winner by calling the largest natural number is not finite, as the set of natural numbers is not bounded. Note, however, that an infinite game can have an equilibrium, too. The equilibrium concept for which Nash gave an existence proof is now called the Nash equilibrium. It describes a strategy vector such that no player is motivated to choose an alternative strategy, given the (equilibrium) strategies of the other players. Note that such a strategy vector may also contain mixed strategies.2

The Nash equilibrium can be defined in a more formalized manner: A Nash equilibrium is a strategy vector s* = (s1*, …, si*, …, sn*) such that no player i has an incentive to choose an alternative strategy si, differing from si*, given the equilibrium strategies of the other n – 1 players. In the case of two players, the strategy pair (s1*, s2*) is a Nash equilibrium if neither player 1 wants to choose a strategy s1 that is different from s1*, given s2*, nor does player 2 want to choose a strategy s2 that is different from s2*, given s1*.

Taking into account that we can express positive, negative, better, or worse incentives by utilities, the definition of the Nash equilibrium can be formalized by means of utility functions ui(·). In the case of two players, the equilibrium conditions read:

u1(s1*, s2*) ≥ u1(s1, s2*) for all s1 in S1,
u2(s1*, s2*) ≥ u2(s1*, s2) for all s2 in S2.

S1 and S2 describe the strategy sets of players 1 and 2, respectively. Irrespective of the definition you choose, you come to the following conclusions: (i) An equilibrium strategy si* is the “right choice” with regard to the given strategies of the other players. The latter restriction is very important although not always taken into consideration when applying the Nash equilibrium concept.

2 Chapter 10 describes at length the concept of mixed strategies. As we will see, a mixed strategy means that a player selects a pure strategy with a probability smaller than 1.


Matrix 4.1  A game with two Nash equilibria

                          Player 2
                          U(p)     D(own)
  Player 1   L(eft)       (1,1)    (0,0)
             R(ight)      (0,0)    (0,0)

The game in Matrix 4.1 has two Nash equilibria (in pure strategies), the strategy pairs (L,U) and (R,D). The second equilibrium does not look very convincing as a description of an outcome; however, R is a “best reply” of player 1 if player 2 chooses D, and D is a “best reply” of player 2 if player 1 chooses R. The trouble is that U is also a best reply of player 2 if player 1 chooses R. Later in this book we will learn how game theory helps us to get rid of the less convincing equilibrium (R,D). Talking about “best reply,” we deviate somewhat from common usage, which provides for one best answer only. In Matrix 4.1, both strategies U and D are best answers to R. But the equilibrium (R,D) represents a pair of mutually best answers. R is no best answer to U; thus (R,U) is no Nash equilibrium.

(ii) An equilibrium strategy si* is a best reply to the given strategies of the other players. Thus, we can define the Nash equilibrium as a strategy vector containing mutually best replies for all players.

(iii) The game in Matrix 4.1 shows that a best answer does not necessarily give higher utility than another strategy, but it may not place the respective player in a worse situation. The conclusion is: A player may have more than one best reply to the strategies of the other players. However, all of them must give him the same utility, given the strategies of the others.

(iv) From conclusion (ii) above it follows: Every equilibrium in dominant strategies is a Nash equilibrium. The reverse does not hold, as the equilibrium (R,D) shows. (However, L and U are weakly dominant strategies only.)

(v) The Nash equilibrium satisfies the assumption of common knowledge of rationality (CKR) and of consistent-aligned beliefs (CAB) as suggested in Sect. 4.6.
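The equilibria of Matrix 4.1 can also be found by brute force, checking the mutual-best-reply condition for each of the four strategy pairs. A small Python sketch (our own illustration):

```python
# Sketch: enumerate pure-strategy Nash equilibria of Matrix 4.1 by checking
# mutual best replies. Player 1 picks L/R, player 2 picks U/D.
payoffs = {
    ("L", "U"): (1, 1), ("L", "D"): (0, 0),
    ("R", "U"): (0, 0), ("R", "D"): (0, 0),
}

def is_nash(r, c):
    u1, u2 = payoffs[(r, c)]
    no_row_deviation = all(payoffs[(r2, c)][0] <= u1 for r2 in ("L", "R"))
    no_col_deviation = all(payoffs[(r, c2)][1] <= u2 for c2 in ("U", "D"))
    return no_row_deviation and no_col_deviation

equilibria = [(r, c) for r in ("L", "R") for c in ("U", "D") if is_nash(r, c)]
print(equilibria)   # [('L', 'U'), ('R', 'D')]
```

Note the weak inequalities: (R,D) survives the check precisely because a deviation would leave the deviating player no better off, which is the point made in conclusion (iii).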


4.2 Historical Note II: Nash and the Nash Equilibrium3

In addition to proving the existence of an equilibrium for finite games, Nash (1950b, 1953) gave a definition of the bargaining problem and suggested a solution to it; the solution is rather popular among economists. It is an essential contribution to cooperative game theory, i.e., to situations where the rules of the game allow players to make binding agreements. Chap. 12 contains an introduction to it. Nash (1953) discussed the potential of non-cooperative games—more specifically, the Nash equilibrium—to determine the result proposed by cooperative games. Today this approach is known as the Nash program (see Sect. 12.4 for details). One of its applications is mechanism design: to create a set of rules (i.e., an institution, convention, etc.)—a game form?—such that self-interested rational behavior results in an outcome proposed by a cooperative game solution. In general, the latter implies the objective of social efficiency. The 2007 Nobel Prize for Economics was awarded for outstanding contributions to mechanism design theory. The winners were Leonid Hurwicz of the University of Minnesota, Eric Maskin of the Institute for Advanced Study at Princeton, and Roger Myerson of the University of Chicago. The application of the Nash equilibrium was pivotal for their work.

The Nash equilibrium is certainly the most significant solution concept of non-cooperative game theory. By now, it is part of the standard toolset of economics and is even used in political science, philosophy, and sociology, when game theory is applied. As to economics, this concept has become so common that some people have forgotten that its name is derived from a person. John Nash, born in 1928, had been suffering from recurrent schizophrenia from 1959 onwards; however, in the early 1990s he recovered. His important research work on game theory was published in the early 1950s. Four decades are a long stretch of time; Nash as a person was hardly noticed anymore. Then, in 1994, he received the Nobel Prize for economics, together with John C. Harsanyi and Reinhard Selten, for his outstanding and very important contributions to the theory of non-cooperative games. Nash’s life was interpreted in the movie “A Beautiful Mind,” based on a biography of the same name by Sylvia Nasar (published in 1998); he became a cult figure. Sadly, on May 23rd, 2015, Nash and his wife Alicia were killed in a car accident on their way home to West Windsor Township, New Jersey.

3 For Historical Note I, see Sect. 2.1.


This happened on the ride from the airport after a visit to Norway, where Nash had received the prestigious Abel Prize for his work in mathematics. Their taxi driver lost control of the vehicle and struck a guardrail. Nash and his wife were ejected from the car; it appeared neither passenger had been wearing a seatbelt.

In modern textbooks on microeconomics, references to the Nash equilibrium are ubiquitous. Readers with basic knowledge of economic theory understand at once that the classic Cournot solution of an oligopoly can be interpreted as a Nash equilibrium. There is even the term Cournot-Nash equilibrium. The Bertrand solution, Heinrich von Stackelberg’s asymmetry solution for a homogeneous duopoly, and Hotelling’s result for price competition in a spatial duopoly all represent Nash equilibria. It is less common to describe the Walras equilibrium or the monopoly solution as Nash equilibria, as these market situations are not characterized by a strategic decision situation in a game-theoretical sense. And yet, these solutions are Nash equilibria as defined above: none of the economic actors has an incentive to change his behavior given the assumed or observed behavior of the others.

4.3 Nash Equilibria and Chicken Game

In order to illustrate the Nash equilibrium concept and its definitions, we discuss the Chicken Game as illustrated by Matrix 4.2. Here “def” (defensive) stands for the strategy “getting out of the way” and “agg” (aggressive) for “not getting out of the way.”

Matrix 4.2  The chicken game

                      Player B
                      def      agg
  Player A   def      (2,2)    (1,3)
             agg      (3,1)    (0,0)


The story of the Chicken Game derives from the movie “Rebel Without a Cause” starring James Dean. The film takes place in the 1950s showing the rivalry for leadership in a gang of teenagers. Two candidates face a test of courage.4 They climb into their big Buicks and drive toward each other at high speed. The one getting out of the other’s way will lose. He is the chicken, while the other guy will be the hero and future gang leader. If neither one makes way, the test ends with a crash and a possibly tragic result. If both get out of the way, there is neither a chicken nor a hero—and the group has no leader. Would you make way if you were player A or would you try to become a hero? The game in Matrix 4.2 illustrates many market situations. In general, they are characterized by the fact that at least one of two market sides has a small number of actors and thus represents a strategic decision situation. This holds for oligopolies or, rather, oligopolistic market forms. In Chap. 1, we mentioned the dramatic battle raging between Microsoft and Netscape in 1996. Each of the two suppliers of browser programs wanted to provide us with the entry into the Internet. The perspectives in this fight were: The winner would most likely earn billions of dollars while the loser would probably disappear from the market. However, if both want to win “at any price,” this might lead to a struggle with large losses for both suppliers. Looking at the (price) strategies chosen, such an outcome was possible. The Internet-Explorer-program was offered by Microsoft free of charge while Netscape was charging an amount of $79 for its more common program Navigator. This is far less than the toner for our laser printer cost in 1996. However, it seemed that the “battle for the internet” was not decided by prices but by the capacity of continuously investing, bringing ever more clever products to the market. 
Thus, the previous versions of the products, the own ones as well as the ones from the competitor, became quickly outdated. It goes without saying that the size of the development potential is determined by the prices which were paid for products at earlier stages. Let us come back to Matrix 4.2. The players have no dominant strategy. Applying the definition of the Nash equilibrium to this game, we see that both (agg,def ) and (def,agg) are Nash equilibria. In (agg, def ), player A chooses the strategy “agg” and player B chooses “def.” The corresponding payoffs are 3 (for A) and 1 (for B). This is the highest payoff which A can

4In the movie, the braggart Buzz provokes his rival Jim (James Dean) to a test of courage: The two drive at high speed toward a cliff. The one who gets out first is a chicken. Buzz crashes down and dies. Jim survives. We find driving toward each other more compelling as a duel.

4  The Nash Equilibrium     65

receive in this game. Thus, for A, it is out of the question that he or she could be better off by an alternative strategy choice. Regarding B, this question seems justified, as the payoff 1 does not look very high. However, if B chooses “agg” instead of “def,” given player A’s strategy “agg,” B will receive a payoff of 0 only, i.e., B’s payoff shrinks. We conclude that the strategy pair (agg, def ) is a Nash equilibrium. Starting from (agg, def ), player B could reach a payoff higher than 1, namely a payoff of 2, only if player A chose “def ” instead of “agg.” When checking whether (def, def ) is a Nash equilibrium, we ask the following question: Does a player have an incentive to change his or her strategy, given the strategy of the other? Given (def, def ), we see that both players have an incentive to choose “agg,” given the strategy of the other player. The strategy pair (def, def ) therefore does not constitute a Nash equilibrium. If, on the other hand, we test the strategy pair (def, agg), we see that it represents a Nash equilibrium: Neither of the players has an incentive to change his behavior, given the other player’s decision. It is not surprising that each of the two players has an incentive to change his behavior if (agg, agg) is the starting point, given the other player’s strategy. If Microsoft were sure that Netscape was aiming at succeeding with its browser “at any price,” Microsoft would certainly refrain from investing large amounts of money into its own browser; such sums would only be justified if there were a chance to gain market dominance. This result holds if the strategies and payoffs in Matrix 4.2 adequately reflect the strategic decision situation in which Microsoft and Netscape see themselves. Of course, it is advantageous for Netscape to convince Microsoft that it (Netscape) would accept whatever losses in order to gain market dominance.
Microsoft, however, being aware of this, would not believe a mere announcement by Netscape that it will choose strategy “agg.” However, if Netscape succeeds in committing itself to the strategy “agg,” i.e., reducing its set of strategies to this one, the decision situation changes considerably—in favor of Netscape. So, the reduction of strategies can be profitable. This is like burning one’s own ships or destroying bridges to block the possibility of withdrawal or even escape. We shall come back later to mechanisms of self-commitment or of binding oneself. If both players, starting from the strategy pair (agg, agg), change their behavior, they arrive at the strategy pair (def, def ), and the payoff pair (2,2) results, a considerable improvement compared to the payoffs (0,0) received for the strategy pair (agg, agg). But the strategy pair (def, def ) is no Nash equilibrium, as we have already argued.
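The best-reply check described above can be mechanized. The following sketch is our own illustration (the function name pure_nash_equilibria is our invention); the payoff numbers are those of the Chicken Game in Matrix 4.2:

```python
from itertools import product

# Payoffs of the Chicken Game in Matrix 4.2:
# (A's strategy, B's strategy) -> (A's payoff, B's payoff)
strategies = ["agg", "def"]
payoff = {
    ("agg", "agg"): (0, 0),
    ("agg", "def"): (3, 1),
    ("def", "agg"): (1, 3),
    ("def", "def"): (2, 2),
}

def pure_nash_equilibria(payoff, strategies):
    """Return all strategy pairs where neither player gains by a unilateral deviation."""
    equilibria = []
    for a, b in product(strategies, repeat=2):
        ua, ub = payoff[(a, b)]
        a_ok = all(payoff[(a2, b)][0] <= ua for a2 in strategies)  # A cannot improve
        b_ok = all(payoff[(a, b2)][1] <= ub for b2 in strategies)  # B cannot improve
        if a_ok and b_ok:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(payoff, strategies))  # [('agg', 'def'), ('def', 'agg')]
```

The check confirms the text: (agg, def ) and (def, agg) survive, while (def, def ) and (agg, agg) are eliminated because some player can profitably deviate.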

66     M. J. Holler and B. Klose-Ullmann

Let us summarize: The Chicken Game in Matrix 4.2 has two Nash equilibria, the strategy pairs (agg, def ) and (def, agg). We should add “in pure strategies” because for this game there also exists, as we shall see in Chap. 10, an equilibrium in mixed strategies. This holds for all games that have the strategic features of Matrix 4.2 and are therefore called Chicken Games.

The equilibria we discussed result from a thought experiment imputed to the players. The Chicken Game does not provide for the players to revise their strategies when they find out that they could be better off by choosing another strategy. At best, they could be angry about the missed chance to choose a “best reply” to the strategy of the other player, but this is not taken into consideration on the abstract level. (But there are decision models that refer to minimizing regret.) A Nash equilibrium typically shows an outcome where none of the players, given the other players’ strategies, has to regret his strategy choice. This also means that the expectations formed with regard to the decisions of the other players are confirmed by the Nash equilibrium. One’s own strategy choice is also confirmed by the Nash equilibrium as it is the best response to the given expectations described by the equilibrium. This implies consistent expectations.

However, these consistent-aligned beliefs might not satisfy Lord Henry. In Oscar Wilde’s The Picture of Dorian Gray, Lord Henry observes: “Faithfulness is to the emotional life what consistency is to the life of the intellect—simply a failure” (Wilde 1997[1890]: 37). This sounds rather hypocritical, but for the player who only receives a payoff of 1, it is hardly comforting that mutually best replies are chosen in the corresponding equilibrium. Besides, players cannot be sure that the resulting outcome concurs with an equilibrium and that the choices are best replies to each other.
In general, when there is more than one equilibrium, the coordination problem of strategy selection in simultaneous choices cannot be solved convincingly with reference to the Nash equilibria. Because of the coordination problem, and in order to avoid (agg, agg) and the corresponding 0-payoffs, we might expect that both players choose “def.” This concurs with the Maximin Solution of the game, which is, however, not a Nash equilibrium and does not satisfy common knowledge of rationality (CKR) and consistent-aligned beliefs (CAB). However, in Sect. 5.9 we will learn that the Theory of Moves supports this result. On the other hand, if, e.g., a rational player A believes that player B will choose “def,” then A should choose “agg.” Again, this demonstrates that beliefs “rule the game.” Note that the outcome (def, def ) and the two Nash equilibria in pure strategies are Pareto efficient. In this case, the efficiency criterion is no suitable device to solve the coordination problem that has its root in the


autonomous decision making of individuals. The game-theoretical analysis of the Chicken Game demonstrates that this game describes a rather complex decision situation. We cannot suggest any particular outcome for this game. It is one of the merits of game theory that it accurately shows this complexity if the real decision situation is burdened with complexity. It teaches us either to get familiar with contradictions or to create a less complex game situation, i.e., to redesign the game.
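The Maximin Solution mentioned above can also be computed mechanically: each player picks the strategy whose worst-case payoff is largest. A minimal sketch of our own with the payoffs of Matrix 4.2 (the dictionaries and the function name maximin are ours):

```python
# Each player's payoff, keyed first by the player's own strategy,
# then by the opponent's strategy (Chicken Game of Matrix 4.2; symmetric).
A = {"agg": {"agg": 0, "def": 3}, "def": {"agg": 1, "def": 2}}  # player A
B = {"agg": {"agg": 0, "def": 3}, "def": {"agg": 1, "def": 2}}  # player B

def maximin(payoffs):
    """Pick the strategy whose worst-case payoff is largest."""
    return max(payoffs, key=lambda s: min(payoffs[s].values()))

print(maximin(A), maximin(B))  # def def
```

Both players' worst case under “agg” is 0, under “def” it is 1, so the cautious choice is (def, def ), which, as argued above, is not a Nash equilibrium.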

4.4 Inefficient Equilibria in the QWERTY-DSK Game

Standardization is a means of solving coordination problems. Standardization is ubiquitous; there are formal and informal standards. Driving on the right side of the road is such a standard, supported by the law; this is a formal standard. Letting elderly people pass first is an informal standard, established by convention. Similarly, there were times when men kept doors open for women—by convention—and women accepted this. There is a standardization industry involving national and international institutions such as DIN, AFNOR, BSI, ANSI, CEN, CENELEC, and ISO. Their output comprises numerous explicit standards. Most of them are optional. Their implementation is the result of the design of the social, economic, or political environment and of individual decisions.
In order to illustrate the strategic problem of standardization, and to demonstrate the power and problems of the Nash equilibrium as a solution concept for strategic decision making, we will exploit a rather dated story which once stirred up a substantial discussion. Looking at your computer keyboard, you will notice that the line below the numbers starts with the letters Q, W, E, R, and T. On the German keyboard, the next letter is Z; on the English one, it is Y. In the latter case, this reads QWERTY, and QWERTY has become the grammalogue for the standard of how the keys bearing so-called Latin letters are arranged on almost 99 percent of all keyboards of typewriters and computers. For some time, QWERTY was used as a symbol for inefficient standards. Again and again it was quoted in daily newspapers as well as in scientific publications in order to demonstrate that the market can fail to coordinate supply and demand efficiently. In his seminal paper “Clio and the Economics of QWERTY,” Paul David (1985) argued that there are much better standards than QWERTY, e.g., the Dvorak Simplified Keyboard (DSK), which would allow us to type much faster. Using such an alternative could be advantageous for everybody involved in typing, e.g., the typists and their employers.


David quoted from a study made by the US Navy which proved that only ten days of working with DSK were sufficient to amortize all training and retraining costs. This holds under the assumption that the typist is always working at full capacity. On the basis of such figures, Liebowitz and Margolis (1990) came up with an estimated yield of 2200 percent for an investment in DSK. Why was DSK not offered as the standard type? Why was this book written on a QWERTZ keyboard? The established answer is: We are trapped by the inefficient QWERTY standard. How should we learn to write on DSK when the keyboards are tied to QWERTY? Will a keyboard producer succeed by offering DSK, given that more or less all users are adapted to or were even trained on QWERTY? We know that especially semi-professional writers, like most professors of economics, who keep looking at the keys when typing, easily suffer from the slightest modification of the keyboard. In any case, it is more than doubtful whether a DSK producer would be successful. Most likely, the QWERTY-trained typist would resort to his or her familiar writing material. It seems that only with massive support from outside, e.g., by schools, government, and industry associations, would a change from QWERTY to DSK be feasible. Do you agree?
The strategic decision problem which user and producer are facing, without assistance from outside, can be illustrated by the game in Matrix 4.3. Player A is the “user” and player B the “producer.” The payoffs given in the matrix (expressing the respective utilities) are defined only relative to each other for each player. The absolute values have no meaning, and it does not make any sense to compare the values of one player to the values of the other.

Matrix 4.3  The QWERTY-DSK game

                 Player B
                 QWERTY     DSK
A   QWERTY       (1,1)      (0,0)
    DSK          (0,0)      (2,2)


Why don’t you learn DSK? Why did you choose a QWERTY keyboard for your personal computer? Do you regret this decision? (Today there are computer programs that in fact allow one to choose DSK.) Of course, it is a substantial “abstraction” to reduce the large number of suppliers and users to one producer and one user. But we assume that the suppliers do not make joint arrangements and that the users communicate very little with each other. This is always the case if, at a certain moment, the majority of users do not consider the problem of the keyboard of utmost importance. Thus, Matrix 4.3, which shows a single decision maker on either side of the decision problem, is not too far-fetched an illustration of the situation.
The game in Matrix 4.3 has two Nash equilibria: the strategy pairs (QWERTY, QWERTY) and (DSK, DSK), the respective payoff pairs being (1,1) and (2,2). Apparently, the QWERTY equilibrium is inefficient and the DSK equilibrium is efficient. It seems that the underlying strategy problem can be reduced to a question of coordination. In contrast to the Chicken Game, the two players have no conflict of interest regarding the two equilibria. Both parties prefer the DSK equilibrium to the QWERTY equilibrium. If both parties aim for efficiency, then this issue should be easy to solve. However, there is still the question of how the problem can be solved, given that the status quo is dominated by QWERTY. With so many suppliers and users—with no (reliable) communication between them—it seems unlikely that there will be any serious joint agreements among the agents of one “market side” or between the two sides. Nor will there be any cooperation by repeated interaction, as in an Iterated Prisoners’ Dilemma (IPD).5 The players are caught in the QWERTY trap. If one player tries to deviate from the status quo without being able to coordinate with the other market side, he or she is worse off.
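The QWERTY trap can be made concrete with the same best-reply test as in the Chicken Game. In this sketch of ours (payoffs from Matrix 4.3; the helper name is_nash is our own), both symmetric outcomes pass the test, while a unilateral deviation from the QWERTY status quo drops the deviator's payoff from 1 to 0:

```python
from itertools import product

# Matrix 4.3: a pure coordination game. Matching choices pay off;
# mismatched choices pay nothing; only (DSK, DSK) is efficient.
S = ["QWERTY", "DSK"]
payoff = {(a, b): (1, 1) if a == b == "QWERTY"
          else (2, 2) if a == b == "DSK"
          else (0, 0)
          for a, b in product(S, repeat=2)}

def is_nash(a, b):
    """True if neither player can gain by deviating unilaterally from (a, b)."""
    ua, ub = payoff[(a, b)]
    return (all(payoff[(a2, b)][0] <= ua for a2 in S)
            and all(payoff[(a, b2)][1] <= ub for b2 in S))

for a, b in product(S, repeat=2):
    print(a, b, is_nash(a, b))
# QWERTY QWERTY True / QWERTY DSK False / DSK QWERTY False / DSK DSK True
```

Both (QWERTY, QWERTY) and (DSK, DSK) are self-enforcing; the inefficient status quo is an equilibrium precisely because no single market side can profitably switch alone.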
What does it help to learn the efficient DSK system when the keyboard is still designed in accordance with the QWERTY standard? Nothing; on the contrary! The training is wasted and the user keeps bungling along on QWERTY. In their paper “The Fable of the Keys,” Liebowitz and Margolis (1990) convincingly argue that a coordinator would be found if the advantages of DSK were as important as assumed in the US Navy study quoted in David (1985). An entrepreneur would be motivated to finance the training costs as well as the costs of switching production to DSK if he could, either totally or partially, claim the profit from the efficiency gains that result from a full

5See Chap. 9 below.


switch to DSK as his remuneration. There is historical evidence that supports this hypothesis. In the “early times of typewriters,” producers used to pay for the training of the users: Some systems were too complicated for the users to use on their own. On similar grounds, initial rebates, guarantees, or leasing agreements could help the producers install DSK keyboards as the superior system. Our everyday experience would support such expectations. Yet, the fact that there is no such entrepreneur should make us hesitant. Either economic theory is wrong or the advantages of DSK, as compared to QWERTY, are not as considerable as assumed.

4.5 Who Are the Players in the QWERTY-DSK Game?

Liebowitz and Margolis (1990) had a closer look at DSK’s record. In 1936, August Dvorak, who later became professor of pedagogy at the University of Washington, had DSK patented. His claim was that DSK considerably reduced the necessary finger movements, resulting in less strain for the typist, easier learning, as well as faster typing speeds. Dvorak’s claim was verified in a number of tests; however, the tests cannot be found and, it is said, Dvorak’s claims may be specious. The most convincing tests seem to have been the US Navy series conducted in 1944. The literature stressing the efficiency of the DSK system typically refers to these tests. But even these experiments are to be questioned. A major objection is that Lieutenant Commander August Dvorak himself directed these tests. During World War II, he was the Navy’s top researcher for studies on time and movement. Not only did he hold the patent for DSK, he also received US $130,000 from the Carnegie Commission for Education to finance the studies. This illustrates the level of interest in connection with DSK. Also, a carefully designed study by Earle Strong, conducted in 1956 for the General Service Administration, concluded that retraining from QWERTY to DSK has no substantial advantages. As with the Prisoners’ Dilemma, we need to ask: Who are the players? If we introduce August Dvorak as a third player, the game can take the form described in Matrix 4.4. The third component of the payoff vector shows the utility values of August Dvorak that concur with the interpretation of


Liebowitz and Margolis. In accordance with this interpretation, we assume in Matrix 4.4 that the players A and B, the user and the producer of typewriters, are indifferent between DSK and QWERTY.

Matrix 4.4  The QWERTY-DSK game with player August Dvorak

                 Player B
                 QWERTY      DSK
A   QWERTY       (2,2,0)     (0,0,1)
    DSK          (0,0,1)     (2,2,2)

The game in Matrix 4.4 has a QWERTY and a DSK equilibrium. For August Dvorak, the third player, DSK is a strictly dominant strategy; however, A and B can establish the QWERTY equilibrium without Dvorak’s support. Thus, the strategy triple (QWERTY, QWERTY, DSK) is consistent with the QWERTY equilibrium. Because of Dvorak’s preferences, the QWERTY equilibrium is (“weakly”) inefficient compared to the DSK equilibrium. A and B have identical utility values in both equilibria, but Dvorak prefers the DSK equilibrium (DSK, DSK, DSK). Efficiency is a social target value, i.e., it is connected to society. Therefore, the efficiency of the DSK equilibrium should not be overrated: There are many millions of QWERTY users and several thousand QWERTY producers, but only one August Dvorak. If some of the QWERTY users find their system even slightly better than DSK, the efficiency criterion as such no longer speaks for DSK. From a game-theoretical point of view, this does not matter in the first place, as we are dealing with the decisions of the individual actors concerning their strategies, and not with outcomes or social conditions—although the outcomes are the result of the individual choices.


4.6 Nash Equilibria in Kamasutra Games

In his analysis of Vatsyayana’s Kamasutra,6 Kumar (2011) presents the Lovers’ Quarrel in the form of a Chicken Game. Both man and woman have two pure strategies: “Victory” and “Defeat.” Payoffs are given such that the decision problem presents a Chicken Game.7

Matrix 4.5  The Lovers’ Quarrel given a > b > c > d

                  Player Woman
                  Defeat    Victory
Man   Defeat      (b,b)     (c,a)
      Victory     (a,c)     (d,d)

More specifically, both “partners prefer sustaining the relationship after claiming a nominal victory versus continuing after accepting defeat. In case of the partner who has transgressed (been wronged), accepting one’s mistake (forgiving the transgressor without a fuss) is defeat and not accepting one’s mistake (forgiving only when the transgressor repents) is victory. If both players try to claim victory, the relationship breaks” (Kumar 2011: 493). If they both choose “defeat,” the situation is unstable; the quarrel will not end here. Perhaps it is too optimistic to assign payoffs to the (defeat, defeat) outcome that are larger than c; more likely, the decision situation concurs with a Battle of the Sexes as analyzed in Sect. 5.6 below. Kumar (2011: 494) argues: “Only when they make different choices—(defeat, victory) or (victory, defeat) …the relationship continues,” i.e., only when an equilibrium in

6Vatsyayana’s Kamasutra is a Sanskrit text, written down in the third or fourth century CE. The text is well known for its frank treatment of erotic love and sexual practices also explaining the “art of love making.” The translation referred to by Vikas Kumar is Doniger and Kakar (2002). As Kumar (2011: 482) noticed, neither of the translators is a game theorist. 7Matrix 4.5 derives from Fig. 4.2 in Kumar (2011), but has been adjusted to the present text.


pure strategies has been achieved can the quarrel end in harmony. But there are two equilibria in pure strategies. Which one to select? “…the actual outcome depends on conventions, say, (a) the partner whose moves (say, other relationships) lead to a temporary rupture accepts defeat and allows the other partner to claim victory or (b) the partner who claimed victory in the last quarrel accepts defeat this time” (Kumar 2011: 494).8 The second alternative assumes that the game is not one-shot, “but will repeat itself.” But then this is a different game. Also, we can think of the Lovers’ Quarrel as “played” sequentially: “The hurt partner creates a scene and depending on the gravity of the situation the offender chooses to submit or downplay his/her offence” (Kumar 2011: 494). The Kissing-game Quarrel “…is similar to Lovers’ Quarrel except that the woman is offended because she has lost a pre-coital game (say, “who first grasps the other’s lips”) and is unwilling to proceed unless she wins a replay” (Kumar 2011: 493, Fn 7). In the next chapter, we will discuss an adequate representation of the sequential structure of a game, the game tree, and apply it to the Chicken Game. Of course, the arguments also hold for the Lovers’ Quarrel if played sequentially. In the context of the Kamasutra, the question of who the players are is of course highly relevant. In principle, extramarital love—one of the pivotal issues in the Kamasutra—presupposes at least three participants. It could well be that one of them is not part of the game—e.g., because two of them have a secret. Of course, secrecy can be a major component in the design of a game.9
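That (defeat, victory) and (victory, defeat) are exactly the pure-strategy equilibria of Matrix 4.5 follows from the ordering a > b > c > d alone. A small sketch of ours, instantiating the ordering with the arbitrary numbers 3, 2, 1, 0 (only the ranking matters):

```python
from itertools import product

# Lovers' Quarrel (Matrix 4.5) with an illustrative ordering a > b > c > d.
a, b, c, d = 3, 2, 1, 0
S = ["defeat", "victory"]
payoff = {  # (man's strategy, woman's strategy) -> (man's payoff, woman's payoff)
    ("defeat", "defeat"): (b, b), ("defeat", "victory"): (c, a),
    ("victory", "defeat"): (a, c), ("victory", "victory"): (d, d),
}

# Keep the pairs where neither partner gains by a unilateral deviation.
nash = [(m, w) for m, w in product(S, repeat=2)
        if all(payoff[(m2, w)][0] <= payoff[(m, w)][0] for m2 in S)
        and all(payoff[(m, w2)][1] <= payoff[(m, w)][1] for w2 in S)]
print(nash)  # [('defeat', 'victory'), ('victory', 'defeat')]
```

Since a > b, a player facing “defeat” claims victory, and since c > d, a player facing “victory” backs down; only the asymmetric outcomes are stable.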

References

David, P. A. (1985). Clio and the economics of QWERTY. American Economic Review (Papers and Proceedings), 75, 332–337.
Doniger, W., & Kakar, S. (2002). Kamasutra. Oxford: Oxford University Press.
Holler, M. J. (2018). The economics of the good, the bad, and the ugly: Secrets, desires, and second-mover advantages. London and New York: Routledge.
Kumar, V. (2011). Scheming lovers and their secrets: Strategy-making in the Kamasutra. Homo Oeconomicus, 28, 479–498.

8Kumar (2011: 494) pointed out that “Vatsyayana thinks that if the man accepts his mistake wholeheartedly (for instance, ‘by falling at her feet,’ …), then the lady accepts him back because ‘even a bashful or very angry woman cannot resist a man falling at her feet; this is a universal rule’”. 9For an extended analysis of secrets and secrecy in strategic situations, see Holler (2018: 121–229).


Liebowitz, S. J., & Margolis, S. E. (1990). The fable of the keys. Journal of Law and Economics, 33, 1–25.
Nash, J. F. (1950a). Equilibrium points in N-person games. Proceedings of the National Academy of Sciences, 36, 48–49.
Nash, J. F. (1950b). The bargaining problem. Econometrica, 18, 155–162.
Nash, J. F. (1951). Non-cooperative games. Annals of Mathematics, 54, 286–295.
Nash, J. F. (1953). Two-person cooperative games. Econometrica, 21, 128–140.
Wilde, O. (1997[1890]). Collected works of Oscar Wilde. Ware: Wordsworth Editions.

5 Sequence of Moves and the Extensive Form

The sequence of moves determines the outcome of the game. The player making the first move determines how the game develops. He decides which direction the game takes. Whoever enters the market first gets the most profit.—These statements are correct in many cases, but they do not always hold. They do show that time, i.e., time in the sense of the sequence of decisions, can play a decisive role in a game situation. There are both first-mover and second-mover advantages.1 Who will make the first move? Do the other players see which move was chosen?

5.1 The Shrinking of the Event Matrix

Let us assume player 1 makes the first move and he knows that player 2 learns about his move before making her own decision. In this case, it is possible for player 1 to limit the set of strategies from which player 2 will reasonably choose. By choosing a move x instead of y, player 1 passes information to player 2 as to which events she should reckon with and which she should not. Some outcomes may even be excluded by the choice of x instead of y. We suppose move x implies strategy s11 and y implies strategy s12. If player 2 observes move x and knows the event Matrix 5.1, she recognizes that the events C and D cannot be realized because x was chosen. Therefore, she focuses on the events A and B.

1Holler (2018) contains a larger chapter on second-mover advantages—including theoretical analysis and a bundle of historical cases.

© Springer Nature Switzerland AG 2020 M. J. Holler and B. Klose-Ullmann, Scissors and Rock, https://doi.org/10.1007/978-3-030-44823-3_5



Matrix 5.1  Event matrix and move structure

                     Player 2
                     s21      s22
1   x resp. s11      A        B
    y resp. s12      C        D

Of course, in game-theoretical terms, Matrix 5.1 represents a game form; the choice of a particular sequence can be interpreted in terms of mechanism design. From player 2’s point of view, Matrix 5.1 and the information that player 1 chose s11 reduce player 2’s decision to a choice between the events A and B. In this interpretation of Matrix 5.1, time is characterized by defining a sequential structure of the decisions, thus determining a certain information structure: Whoever makes the first move is transmitting her strategy decision to the other player by that move. The sequential interpretation of Matrix 5.1 defines player 1 as a sender of information and player 2 as a receiver. Note that it is unimportant how much time elapses between sending and receiving the message. It is the sequence that matters:

“player 1 chooses x” → “player 2 gets to know this choice” → “player 2 decides between s21 and s22” → “the outcome”

This sequence represents strategic time, not real time as measured by the watch or made public by the bells of the town hall tower. Strategic time has an impact on the choice of strategies as it “creates” strategic information, i.e., information that is relevant for the decisions of the players. This information derives from the past, as observed by the players. Note: Strategic time does not exist without decisions.

5.2 Sequential Structure and Chicken Game

A sequential analysis of a Chicken Game may help to illustrate the importance of strategic time or, respectively, the effect of the sequential structure of the moves and the relevant information. Of course, this is no longer


the Chicken Game of simultaneous moves that we discussed above and that is repeated in Matrix 5.2. We now assume that player 1 has the first move and, for instance, chooses strategy s12. When player 2 gets to know this information, she cannot but opt for strategy s21, unless she is happy with a payoff of 0. But if she is happy with outcome D instead of C (to borrow from Matrix 5.1), then this preference should be reflected in the payoffs; and player 2’s payoff should not be 0 for outcome D if a 1 is assigned to outcome C for her.

Matrix 5.2  The Chicken Game

                Player 2
                s21      s22
1   s11         (2,2)    (1,3)
    s12         (3,1)    (0,0)

Apparently, it is advantageous for player 1 to have the first move and thus to be the first to decide on her strategy. But to have an impact and to benefit from this advantage, player 1 has to make sure that player 2 is informed of player 1’s choice before making her own strategy decision. If this prerequisite is not fulfilled, the Chicken Game remains in its original form. Again, “everything can happen”; the pertaining game is adequately represented by Matrix 5.2. If player 2, with regard to the case that “everything can happen,” considers all events as equally probable, she may well prefer the state of uncertainty, corresponding to an expected utility of 1.5, to the sure outcome resulting from (s12, s21), which gives her only her second worst value, namely 1.2 She could pour wax into her ears as did Ulysses with his fellow travelers when they were expecting the sirens with their bewitching but unfortunately deadly singing. In that case, player 2 could not hear which move player 1 had chosen, and player 1 could not limit player 2 to strategy s21, which is advantageous for player 1, but obviously not for player 2. Player 2 could put a shade over her eyes, or remove the paper from the fax machine, or go on a trip without leaving an address behind, or she could put her head in the sand or just become crazy.3 Then, player 1 could not constrain player 2 to choosing s21, as the necessary information could not be transferred to player 2. Pouring wax in one’s ears and cutting the phone connection are ways to undermine information, i.e., to devaluate the strategic advantage of a first mover. Think about it if you ever get blackmailed.

2The expected value of 1.5 is reached as follows: Considering each of the four events corresponding to the various strategy combinations as equally probable, we multiply the utility values 0, 1, 2, and 3 by the probability ¼ = 0.25 and add the results. Here, we apply the Laplace principle or the “Principle of Insufficient Reason.” The theoretical concept behind this procedure is the expected utility hypothesis which we will study more closely in Chap. 8.
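The Laplace calculation of footnote 2 is a one-liner: with all four outcomes treated as equally probable, player 2's expected utility is the plain average of her payoffs 0, 1, 2, and 3. A quick sketch:

```python
# Laplace principle ("Principle of Insufficient Reason"): weight each of the
# four outcomes of the simultaneous Chicken Game with probability 1/4.
payoffs_player2 = [0, 1, 2, 3]
expected = sum(p * 0.25 for p in payoffs_player2)
print(expected)  # 1.5
```

The expected value 1.5 exceeds the sure payoff of 1 from (s12, s21), which is why player 2 may prefer to stay uninformed.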

5.3 Extensive Form and Game Tree

Often it is cumbersome to sort out the chronological structure of the decisions and the information relations among the players and their actions in a footnote to the pertaining game matrix. It is not always easy for readers to understand how the time structure of the decisions and the game matrix are linked. But there is a more adequate way to show the sequential structure: the game tree. Let us have a look at Fig. 5.1, which represents the game tree of the Chicken Game in Matrix 5.2 if played sequentially. We assume that player 1 decides on her strategy first and acts accordingly, i.e., chooses the corresponding move. Player 2 observes the move and can derive player 1’s strategy decision from her observation before choosing her own strategy.
The empty circles are called (decision) nodes. This is where players 1 and 2 decide on their next move consistent with their strategies. The number at the pertaining node shows which player makes a decision at this very point. Player 2 chooses at decision node A, if A was reached at all. If, instead, decision node B was reached, player 2 has to decide at B. The choices available to the player at a particular node are expressed by the (decision) branches leading off this node. Sometimes branches are called edges or even twigs. The node to which no branches lead (but only lead off) is called the origin, root, or initial node. It shows which player has the first move, and which move he or she can choose. The full circles are the end (or terminal) nodes of the game. No branches go off them. There are no choices to be made; thus, there is no number representing a player. Utility vectors are assigned to these nodes, showing how

3By choosing craziness, Hamlet (in the first version of the play) escaped the life-threatening persecution of his father’s murderer.


[Game tree: player 1 moves first; s12 leads to node A, s11 to node B. At node A, player 2’s choice s21 yields (3,1) and s22 yields (0,0); at node B, s21 yields (2,2) and s22 yields (1,3).]

Fig. 5.1  An extensive form of the Chicken Game

the players evaluate the events following from the strategy decisions of the players. The events are—as in the pertaining game matrix—only implicit and not described. From Fig. 5.1, we learn that player 1 is the first to make a move. As each strategy consists of one move only, the two concepts coincide.4 Player 1 chooses the strategy s12 and player 2 the strategy s21, the subsequent event being evaluated by player 1 with 3, her highest value in this game, and by player 2 with 1, her third-best outcome, which is not very good for her. Obviously, player 2 cannot achieve a better outcome if player 1 chooses strategy s12.
Often, game trees are not depicted vertically, as trees should be, but horizontally as in Fig. 5.1. This corresponds to the structure and experience of how we look at figures: Most of us start from the left and read to the right. This is just the way we read game trees: Following the left-right structure, we reproduce the sequential structure of the decisions captured by the game tree. One might also argue that drawing from left to right is easier than drawing from top to bottom. Or, are trees drawn from bottom to top?
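Reading the game tree from the terminal nodes back to the root (first player 2's choices at nodes A and B, then player 1's choice) is what game theorists call backward induction. A sketch of our own for the tree of Fig. 5.1; the nested-tuple encoding and the function name solve are our inventions:

```python
# Sequential Chicken Game of Fig. 5.1.
# Internal node: (player_index, {move: subtree}); leaf: a payoff pair.
tree = (0, {  # player 1 moves first
    "s12": (1, {"s21": (3, 1), "s22": (0, 0)}),  # node A
    "s11": (1, {"s21": (2, 2), "s22": (1, 3)}),  # node B
})

def solve(node):
    """Return (payoffs, move path) of the backward-induction outcome."""
    if not isinstance(node[1], dict):  # leaf: a payoff pair
        return node, []
    player, moves = node
    best = None
    for move, child in moves.items():
        payoffs, path = solve(child)
        # The player deciding here keeps the continuation that pays her most.
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [move] + path)
    return best

print(solve(tree))  # ((3, 1), ['s12', 's21'])
```

At node A, player 2 picks s21 (payoff 1 beats 0); at node B, she picks s22 (3 beats 2); anticipating this, player 1 picks s12 and secures her best value 3, reproducing the (s12, s21) outcome discussed in the text.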

5.4 Information: Perfect, Imperfect, Complete, and Incomplete

A game tree shows the extensive form of the game and represents the strategic information players have. The game tree in Fig. 5.1 shows that both players have perfect information; that is, each player knows all preceding decisions when it is his or her turn to decide on a move. In the game tree of the

4In fact, a move is what we see—an action; a strategy is the plan.


[Game tree as in Fig. 5.1, but with nodes A and B enclosed by an oval line: they belong to the same information set of player 2.]

Fig. 5.2  Chicken Game with simultaneous decisions, version I

Chicken Game, such classification is trivial when a sequential structure like that in Fig. 5.1 is assumed. The player making the first move knows that there was nobody before her making a move. The player who moves second observes the move of the player who had the first move and can derive from it the strategy choice of this player. There is no other information that could be uncovered. Players have perfect information. Not all game trees are that simple. Often players have strategies that consist of more than one move. Perfect information implies that the player knows all preceding decisions, her own as well as the decisions of her fellow players. In other words, whenever she makes a decision, she knows exactly at which decision node she is.

Now, let us assume the players choose their strategies simultaneously, as in the Chicken Game without sequential structure, i.e., without knowing the strategy decision and the pertaining move of the other player. Then, starting with Fig. 5.1, player 2 does not know whether she is in the upper decision node (A) or the lower one (B) of the decision tree. This fact is captured in Fig. 5.2 by the oval line that encircles nodes A and B, indicating that A and B are in the same information set and that player 2 has no previous observation that distinguishes these nodes. Often the information in Fig. 5.2 is expressed by a dotted line connecting the nodes of an information set. Therefore, the following Fig. 5.3 contains the same information as Fig. 5.2: Player 2 does not know whether she is in node A or B when it is her turn. Information is not perfect in this game. As an alternative to the representations in Figs. 5.2 and 5.3, an information set can also be expressed by a quadrangular border around the pertaining nodes or by a colored background. However, Figs. 5.2 and 5.3 show the


[Fig. 5.3 shows the same game tree once more; here the information set containing nodes A and B is indicated by a dotted line connecting the two nodes.]

Fig. 5.3  Chicken Game with simultaneous decisions, version II

most popular versions of how to represent non-singleton information sets, i.e., information sets that contain more than one node.

A Chicken Game, simultaneously played, can be represented by alternative game trees, depending on the choice of origin. In Figs. 5.2 and 5.3, the decision node of player 1 is the origin. The alternative is to consider the decision node of player 2 as the origin. In this case, the information set of player 1 contains two nodes. It goes without saying that the two alternatives are equivalent to each other, and that they are equivalent to the payoff matrix in Matrix 5.1.

On the basis of the concept of information sets, we may define perfect information as follows: A player has perfect information if every information set attributed to him contains one decision node only. Consequently, imperfect information means that at least one information set contains more than one node. Of course, the possible moves following the nodes of an information set must be the same. If different moves were possible from two nodes that are supposed to be in the same information set, the player could recognize from his decision options in which node he is; but then the two nodes would not be in the same information set.

In this introductory text, we by and large assume that the players know the game, be it depicted by matrix or by game tree, i.e., the set of players, their strategy sets, and their preferences with respect to the possible outcomes. This implies that player 1 knows the preferences of player 2 and player 2 knows player 1's preferences. If these rather strong assumptions hold, the game is characterized by complete information; if not, we are talking of a game with incomplete information. Please make sure that you can specify the difference between complete and perfect as well as incomplete and imperfect information. This difference is essential for later sections.
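The definition of perfect information just given is easy to make operational: a game has perfect information exactly when every information set is a singleton. The following sketch uses our own encoding of the information sets of Figs. 5.1 and 5.2/5.3; the dictionary layout is an assumption for illustration only.

```python
# Information sets: each entry maps (player, set label) to the list of
# decision nodes that player cannot tell apart at that point.
# Perfect information = every information set contains exactly one node.
def has_perfect_information(information_sets):
    return all(len(nodes) == 1 for nodes in information_sets.values())

# Fig. 5.1 (sequential Chicken): player 2 can distinguish nodes A and B.
sequential = {("P1", "I0"): ["root"], ("P2", "I1"): ["A"], ("P2", "I2"): ["B"]}
# Figs. 5.2/5.3 (simultaneous play): A and B lie in one information set.
simultaneous = {("P1", "I0"): ["root"], ("P2", "I1"): ["A", "B"]}

print(has_perfect_information(sequential))    # True
print(has_perfect_information(simultaneous))  # False
```

The check mirrors the text: the sequential Chicken Game passes, the simultaneous version fails because player 2's information set {A, B} is not a singleton.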

[Fig. 5.4 shows a game tree: player 1 chooses H or N at the origin; player 2 observes this and chooses u or d; then player 1 chooses L or R at node A (after H, u), B (after H, d), C (after N, u), or D (after N, d). Nodes A and C form one information set of player 1, nodes B and D another. The terminal payoffs are (5,3) after H,u,L; (8,5) after H,u,R; (7,8) after H,d,L; (3,2) after H,d,R; (4,6) after N,u,L; (2,4) after N,u,R; (6,1) after N,d,L; and (1,7) after N,d,R.]

Fig. 5.4  Game tree with a forgetful player

5.5 Perfect Recall Missing

Up to now, we assumed that a player might not observe all moves of his counter-player before he makes his own decision and thus may face imperfect information. But it may also happen that a player does not know at which decision node he is because he forgot a previous move. Figure 5.4 represents the following case: Having to decide between the products Left (L) and Right (R), e.g., producing cars for left-hand or right-hand driving, in the second round, player 1 does not remember the decision he made in the first round when, e.g., his production capacity and capital expenditure, High (H) or Low (N), were at stake. Therefore, the nodes A and C, as well as B and D, are in the same information set. Perfect recall is missing.

Consequently, on the second decision level, player 1 has two information sets and each contains two nodes, given that he can observe what player 2 decided before, i.e., whether player 2 chose Up (u) or Down (d). If he observes d, then it is advantageous to choose L, irrespective of whether he decided for H or N in the origin. However, if he observes u, then the success of his choice between L and R depends on whether he is in node A or C. But he does not have this information, as he does not recall whether he chose H or N in the origin of the game. Imperfect information can be decisive for the outcome of the game. In order to learn more about the properties of this game, we can write the sequential game in Fig. 5.4 in the form of a game matrix. Try this and check which information is lost by representing the sequential game in the form of a matrix. Matrix 5.3 (below) helps to answer this question.
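The exercise of writing the game of Fig. 5.4 as a matrix can be automated. The sketch below uses our own encoding of the tree: it enumerates player 1's four two-move strategies and player 2's two strategies, walks the tree, and collects the payoffs. The result is exactly Matrix 5.3.

```python
from itertools import product

# Terminal payoffs of the game tree in Fig. 5.4, indexed by the path
# (player 1's first move, player 2's move, player 1's second move).
terminal = {
    ("H", "u", "L"): (5, 3), ("H", "u", "R"): (8, 5),
    ("H", "d", "L"): (7, 8), ("H", "d", "R"): (3, 2),
    ("N", "u", "L"): (4, 6), ("N", "u", "R"): (2, 4),
    ("N", "d", "L"): (6, 1), ("N", "d", "R"): (1, 7),
}

# A strategy of player 1 fixes both moves (e.g., "HL"); player 2 has one move.
matrix = {}
for (first, second), move2 in product(product("HN", "LR"), "ud"):
    matrix[(first + second, move2)] = terminal[(first, move2, second)]

print(matrix[("HL", "d")])  # (7, 8)
print(matrix[("HR", "u")])  # (8, 5)
```

Note what the conversion silently discards: the matrix keeps all payoffs, but the information structure, who observes what and what player 1 forgets, is no longer represented.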


The game in Fig. 5.4 could illustrate a duopoly market. At first, Firm 1 decides on its production capacity (H or N). Thereafter, Firm 2 offers its product (u or d) on the market. Firm 1 reacts by supplying R or L, in the given case, however, without taking its production capacity into account, as this information is no longer available. Obviously, it depends on the decision between H and N whether its product choice is successful, given that Firm 2 opted for alternative u. For instance, given that Firm 1's choice was H, it is better off choosing R on the second level if Firm 2 chose u. In this case, the outcome is (8,5). If, on the other hand, Firm 1 chooses L, given H and u, the outcome is (5,3).

Of course, you may invent or know a much more fascinating tale, perhaps a fairy tale, where player 1 has to choose between two alternatives but then forgets what he has chosen. Later, he has to choose again, knowing that the effect of his decision depends on his earlier one and on how the other player reacted to it. Perhaps, if the story is to have a happy ending, our hero can reconstruct the earlier choice from the reaction of the other player.

The game tree in Fig. 5.4 can be used to illustrate the difference between strategy and move. Each strategy of player 1 has two moves. This is why we talk about decisions at the first and second levels. A strategy choice means that player 1 decides on the two moves already in the origin of the game tree, which includes a "when I forget, what will I do?" plan. The strategies of player 2 have only one move, or level; therefore, it is not necessary to distinguish between move and strategy for this player. Perhaps more fundamental is the fact that a strategy is a plan while a move is an action. The latter can be observed, in principle, while information about the plans of the other players can only be inferred from their actions and the possible evaluations of the results. This information remains hypothetical throughout the game, although it is the core of forming game-theoretical expectations. We can conclude that a strategy is a plan of actions that player i "hypothesizes" for player j in order to form expectations about the possible choices of player j.

For a further analysis of the issue of perfect information, including perfect recall, let us now start again from the sequentially played Chicken Game in Fig. 5.1. It features perfect information of the players as no information set contains more than one decision node. If we choose the corresponding representation by a matrix (as, for instance, in Matrix 5.2), some information about the game situation is lost. The matrix does not differentiate between a sequential game where player 1 has the first move and a game where player 2 starts. Information is no longer perfect.


Matrix 5.3  Game matrix with a forgetful player

                        Player 2
                        u         d
  Player 1   HL        (5,3)     (7,8)
             HR        (8,5)     (3,2)
             NL        (4,6)     (6,1)
             NR        (2,4)     (1,7)

The consequence of the information loss when switching from game-tree to matrix representation is demonstrated by Matrix 5.3; it derives from the game tree of Fig. 5.4 and is somewhat cumbersome to read. In this matrix, the Nash equilibrium (HR, u) is a plausible candidate for the outcome. However, considering the sequential structure illustrated by the game tree in Fig. 5.4, only the Nash equilibrium (HL, d) seems to make sense as a description of the outcome. Player 2, after having seen that his competitor chose H, will choose d, as he is aware that player 1 can observe his choice. For player 1, the best answer to d is L, independent of having chosen H or N at the beginning of the game. The payoff pair (7,8) results.

However, player 1 forgets what he has chosen in the origin. If he knows that he will forget, a somewhat paradoxical assumption, he could also have chosen N. Then, player 2 will react with u, expecting that player 1 will choose L. The payoff pair (4,6) results. Looking at Matrix 5.3, player 1 could infer that player 2 chooses d only if he (i.e., player 1) has selected H in the origin. This way, player 1 can reconstruct the information about his choice that he cannot recall: He knows that player 2 observed his choice in the origin, and he expects that player 2 makes a rational choice, given this information. So if player 2 selects u, player 1 concludes that he himself has chosen N. If, on the other hand, player 2 selects d, player 1 can conclude that he himself has chosen H.

But why should player 1 ever choose N in the first place? Matrix 5.3 shows that strategy NL is strictly dominated by HL. Strategy NR is even strictly dominated by all other strategies of player 1. Thus, each strategy with the move N is dominated by a strategy with the move H. Therefore, at the first level, player 1 will choose H and then forget whether he chose H or N. Interestingly, player 1 can afford to forget the choice of his first move if he follows the "path of rationality." Player 2 can observe H and will choose d. This brings about no other outcome than the Nash equilibrium (HL, d). If this works, then rationality substitutes for memory. In fact, this is how we find our wallet, our glasses, and our keys: We do not remember where we put them but start looking for them where it is "rational" for us to have put them. However, this presupposes that we made a "rational decision" in the first place but forgot what we decided. A "rational decision" can coincide with routine.

Because of the missing recall, illustrated in Fig. 5.4, the game becomes quasi-simultaneous. Consequently, the matrix analysis in Matrix 5.3 is adequate. The situation of recall, perfect or not, is quite different from dealing cards when we do not know what the players get. Still, we might be able to reconstruct what hands the various players have from the way they play. But to derive this, we cannot assume that we dealt the cards rationally, cheating excluded. The rationality argument could also help to reconstruct the decision of a third player when we cannot observe that decision, but another player does. From the reaction of this other player, and assuming that the third decision maker is rational, we may be able to successfully speculate on the latter's choice.

When we represent the sequential game in Fig. 5.4 by means of Matrix 5.3, we lose the information that player 1 knows player 2's decision for d. However, if player 1 does not have this information, the Nash equilibrium (HR, u) also describes a plausible outcome of this game. Moreover, the payoff matrix does not take into consideration that player 1 does not recall his choice between H and N in the origin. Concerning this aspect, the payoff matrix assumes more information for player 1 than the game tree expresses. However, we have seen that player 1 can deduce the information about his choice between H and N from the game tree if he is rational and assumes that player 2 is rational, and this assumption is correct. This corresponds to common knowledge of rationality (CKR) and consistently aligned beliefs (CAB). In Chap. 8, we will meet more sophisticated game-theoretical instruments (e.g., subgame perfectness and backward induction) that help us to solve games in sequential form and to reconstruct missing information as in the game of Fig. 5.4. If these instruments help you find your passport, it might be worthwhile to study them.
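The dominance and equilibrium claims above are easy to verify mechanically. The following is our own sketch over the payoffs of Matrix 5.3, not part of the book's exposition.

```python
P1 = ["HL", "HR", "NL", "NR"]
P2 = ["u", "d"]
pay = {("HL","u"): (5,3), ("HL","d"): (7,8),
       ("HR","u"): (8,5), ("HR","d"): (3,2),
       ("NL","u"): (4,6), ("NL","d"): (6,1),
       ("NR","u"): (2,4), ("NR","d"): (1,7)}

def strictly_dominates(a, b):
    # Strategy a strictly dominates b for player 1 if it yields him
    # a strictly higher payoff against every column of player 2.
    return all(pay[(a, c)][0] > pay[(b, c)][0] for c in P2)

def nash_equilibria():
    eq = []
    for r in P1:
        for c in P2:
            best_row = all(pay[(r, c)][0] >= pay[(q, c)][0] for q in P1)
            best_col = all(pay[(r, c)][1] >= pay[(r, s)][1] for s in P2)
            if best_row and best_col:
                eq.append((r, c))
    return eq

print(strictly_dominates("HL", "NL"))                  # True
print([a for a in P1 if strictly_dominates(a, "NR")])  # ['HL', 'HR', 'NL']
print(nash_equilibria())                               # [('HL', 'd'), ('HR', 'u')]
```

The output confirms the text: HL strictly dominates NL, NR is strictly dominated by all other strategies of player 1, and (HL, d) and (HR, u) are the two pure-strategy Nash equilibria.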


There is a special case of imperfect recall in which player 1 forgets the nature (property, identity) of the other players, i.e., player 1 does not remember the moves of player 2 when deciding between L and R. As a result, the sequential game becomes quasi-simultaneous; this can lead straight to disaster. In The Outsider, Camus (1982 [1942]: 78) recounts the story of a man who had left his… …Czech village to go and make his fortune. Twenty-five years later he'd come back rich, with a wife and child. His mother and his sister were running a hotel in his native village. In order to surprise them, he'd left his wife and child at another hotel and gone to see his mother, who hadn't recognized him when he'd walked in. Just for fun, he'd decided to book a room. He'd shown them his money. During the night his mother and sister had clubbed him to death with a hammer to steal his money, and then thrown his body into the river. The next morning, the wife had come along and without realizing revealed the traveller's identity. The mother had hanged herself. The sister had thrown herself down a well.

Meursault, i.e., Camus' Outsider, found this story on a scrap of newspaper while in prison awaiting his trial for murder. He concluded that "the traveller had deserved it really and that you should never play around" (Camus 1982 [1942]: 78).

5.6 The Battle of the Sexes

The game in Matrix 5.3 resembles the Chicken Game if player 1 avoids the strictly dominated strategies NL and NR. Just like the Chicken Game, it has two Nash equilibria in pure strategies that the two players evaluate differently. However, there is a difference to the Chicken Game: In Matrix 5.3, both players evaluate the events that do not correspond to one of the two Nash equilibria worse than the equilibrium events. That is, the non-equilibrium outcomes are payoff-dominated by the equilibrium outcomes: Both players prefer the equilibrium outcomes to the non-equilibrium outcomes. This property and the payoff asymmetry in the equilibria characterize a well-known game called the Battle of the Sexes. Matrix 5.4 shows a version of this game that derives from the story that gave the game its somewhat curious name.5

Oskar and Tina meet by chance in a café. Falling in love with each other at first glance, they carry on a lively conversation. It turns out that in the evening, Tina, an enthusiastic football fan, definitely wants to see the cup game of her club, whereas Oskar is not at all interested in football. He is an ardent moviegoer and wants to convince Tina to see the latest Woody Allen movie that is shown at the local movie theater; its last show is this evening. Somehow it becomes clear that Tina does not really enjoy going to the movies. While still in discussion, Oskar suddenly remembers that he has to go to a very important interview. Because of his rapture, he would almost have forgotten this date. Telling Tina, "You are fantastic, we have to see each other tonight," he gives her a quick parting kiss and leaves precipitously. Tina is just as enthralled. When they both notice that they forgot to fix a meeting point or exchange addresses, it is too late. Where should they go to see each other again: to the football stadium or to the movie theater? Both know that Tina prefers the stadium and Oskar the movie theater. If, however, they do not reconnect with each other, they will have no pleasure in either the cup game or the movie.

Matrix 5.4  Battle of the Sexes

                            Tina
                            Stadium    Movie theater
  Oskar  Stadium            (1,2)      (0,0)
         Movie theater      (0,0)      (2,1)

5 There are many versions of this story. Our version even entered the world of (German-language) literature in the form of a quote. See the fiction "Der Egoist" by Helmut Eisendle.

We do not want to discuss the literary quality of this excerpt, nor do we want to discuss whether the role distribution, which deliberately deviates from the usual male-female cliché, is helpful in a textbook. Experience shows that such deviations can create confusion even in "less traditional societies." Instead, we want to see how Tina and Oskar can escape from this dilemma. The game-theoretical analysis reveals that the game has two Nash equilibria in pure strategies, namely (Stadium, Stadium) and (Movie theater, Movie theater). The corresponding outcomes, which follow straightforwardly from the chosen strategies, are valued differently by Oskar and Tina: Neither equilibrium is better for both players than the other, and both seem equally probable, just as probable as each of the two strategy pairs that imply that "the two people in love" will not meet.

Luckily, Tina took a course in game theory. She comes to the conclusion that it is quite unlikely that they will find each other in the stadium, since almost 50,000 spectators are expected to come, even if both of them go there. Therefore, she decides to see Woody Allen and, hopefully, Oskar. Unfortunately, Oskar also took a course in game theory. He knows the coordination dilemma inherent in the Battle of the Sexes. He calls the cinema where the Woody Allen movie is shown, leaving the message that he will wait for Tina "behind the goal." Not being interested in football, he does not realize that it is impossible for Tina to get from the cinema to the stadium before the game begins, and he does not consider that a soccer field has two goals. (Unfortunately, game theorists often misjudge the decision situation.) When Tina finally reaches the stadium, she cannot buy a ticket: The tickets are sold out. The stadium is packed with people. "No entrance!" Thus, neither Tina nor Oskar enjoyed the football game.

So far, Tina's and Oskar's attempts to find a solution for the simultaneous decision problem have failed. However, imagine that Oskar succeeded in getting Tina's telephone number and left a message on her answering machine. What would he say? "I am looking forward to seeing you at the Woody Allen movie." In his opinion, this message would be best if Tina agrees. In fact, Tina might be delighted to hear Oskar's voice, but she might consider not meeting him. However, the latter reaction is not optimal for her, as we conclude from the game tree in Fig. 5.5. What should we expect if Tina answered the phone instead of the answering machine? Would she suggest to Oskar that they meet at the football stadium? Perhaps she would not insist on her first preference but suggest to Oskar that they see the movie. Of course, it is not clear whether they would reach an agreement. The decision situation is complex even if they can talk to each other.

Game theory shows that in such cases, we cannot rely on thinking only. We must act in a way that reduces the complexity of the decision situation, thus assuring that we can solve the problem satisfactorily. This means: exchange addresses or telephone numbers and make a clear arrangement for a date: when, where, and perhaps even with whom. This advice seems perfect, but it is not always followed in reality. If one of the basic features of "when, where, and with whom" is ill-defined, even game theory cannot help. Therefore, it seems that we should follow this recipe,

[Fig. 5.5 shows a game tree with German labels (Kino = movie theater, Stadion = stadium): Oskar moves first, choosing Kino or Stadion; Tina observes his choice and then chooses Kino or Stadion herself. The payoffs are (2,1) after (Kino, Kino), (0,0) after (Kino, Stadion), (0,0) after (Stadion, Kino), and (1,2) after (Stadion, Stadion).]

Fig. 5.5  Battle of the Sexes with sequential structure and labeling in German

regardless of whether it pertains to private life or business. However, we could think of situations where it is more profitable, at least for one player, to leave at least one of the dimensions to "randomness," possibly to avoid major conflicts. Oskar might be hesitant to enter a bargaining process that, in the end, would have him accept going to Tina's preferred football stadium. Leaving the "where" open, he has a chance to see both the Woody Allen movie and Tina, and not just one of them. We do not go into chances here, but note that, in addition to the two equilibria in pure strategies, the Battle of the Sexes comprises a mixed-strategy equilibrium as well. Therefore, as we will see in Chap. 10, probabilities are indeed relevant in this game.
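For readers who want a preview, the mixed-strategy equilibrium of Matrix 5.4 can be computed from the usual indifference conditions. The derivation below is our own sketch, not taken from the book: each player randomizes so that the other player is indifferent between Stadium and Movie theater.

```python
from fractions import Fraction as F

# Matrix 5.4: rows = Oskar, columns = Tina; entries are (Oskar, Tina).
A = {("S","S"): (1,2), ("S","M"): (0,0), ("M","S"): (0,0), ("M","M"): (2,1)}

# Oskar plays Stadium with probability p so that Tina is indifferent:
#   Tina(S) = 2p,  Tina(M) = 1 - p   =>   2p = 1 - p   =>   p = 1/3
p = F(1, 3)
# Tina plays Stadium with probability q so that Oskar is indifferent:
#   Oskar(S) = q,  Oskar(M) = 2(1 - q)   =>   q = 2 - 2q   =>   q = 2/3
q = F(2, 3)

def expected(p, q):
    """Expected payoffs (Oskar, Tina) under the mixed strategies."""
    eo = sum(pr * pc * A[(r, c)][0]
             for r, pr in [("S", p), ("M", 1 - p)]
             for c, pc in [("S", q), ("M", 1 - q)])
    et = sum(pr * pc * A[(r, c)][1]
             for r, pr in [("S", p), ("M", 1 - p)]
             for c, pc in [("S", q), ("M", 1 - q)])
    return eo, et

print(expected(p, q))  # (Fraction(2, 3), Fraction(2, 3))
```

In the mixed equilibrium, both players expect only 2/3, less than the payoff of 1 that each gets in his or her worse pure-strategy equilibrium, which is one way of seeing how costly miscoordination is in this game.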

5.7 What Is a Strategy?

The strategies reflected in matrix games can be considered highly abstracted series of moves. Sometimes the series are short enough to be depicted by one move only; strategy and move are then presented as being identical. Yet, even in rather simple parlor games, every player usually has several moves (although not in a one-shot Rock-Paper-Scissors game). A strategy means planning a series of moves, even for situations which might not occur in the course of the game. If, contrary to all expectations, the player gets into such an unexpected situation, he should have a plan about which moves to choose in order not to be surprised. A strategy is a series of planned moves. A strategy is a plan. Moves are actions.


In Fig. 5.4, a strategy of player 1 consists of two moves. The first move is chosen from the set {H,N} while the second is an element of {L,R}. We specified the game such that HL is player 1's equilibrium strategy. However, strictly speaking, the strategy of player 1 also has to consider the move that player 1 will choose if he finds himself in node C or D, although by choosing H, player 1 should never reach these nodes. In the game of Fig. 5.4, the possibilities C and D are of some relevance because player 1 cannot recall whether he chose move H or N as his first move. Similarly, we could assume that player 1 has perfect recall, but his hand may tremble, as assumed in Chap. 7.

In principle, a rational chess player should prepare such a well-defined strategy before making the first move. However, then he would never make the first move. Let us assume he had the white pieces and would start. Even if he only considered Black's immediate reply, he would have to think about 400 positions that are possible after his own move and the following move of his opponent. As Ernst Strouhal notes in "Schach. Kunst des Schachspiels" (2000: 13f) ("Chess: The Art of Playing Chess"), after the second move 71,825 positions are already possible and after the third move 91 million. Moreover, even if we limit a game to 40 moves and assume that per move only 30 possible positions have to be scrutinized, we still obtain 25 × 10^116 different positions, which is an astronomical number. Let us assume a very fast computer could calculate one billion positions per second; let us furthermore assume that the problem could be scaled such that one million computers work simultaneously. Calculating all variants of the game up to move 40 would then take around 10^93 years. In other words, this would take approximately 10^80 times as long as the universe has existed.6 When chess players talk of strategies, they obviously refer to a different concept than game theorists do.

Luckily, the series of moves in economic game situations is usually shorter and limited to what human beings can plan. (Is this part of the economics of economic games?) However, the decision situations are usually more complex: In most economic games, there is more than one equilibrium. They are not just a matter of winning, losing, or ending in a tie. Often, all participants could win but, in the end, they all have to pay the bill if they do not succeed in changing the game, i.e., its rules or the set of players. Again, in order to reach this "stage of designing," you must understand the game and its rules, and the set of possible players and their strategies and preferences.

6 Translated and paraphrased by the authors.


5.8 Sharing a Cake

The information which is produced in a sequential game can be used to create a fair outcome in the problem of "dividing a cake." Here, following standard microeconomics, fair means that the outcome is efficient and envy-free. In the two-player case, a division of a cake is envy-free if player i does not prefer the share of player j to his own. Player i may well want to have the share of j in addition to his own share, but this is greed, not an expression of envy. If a division is envy-free, then it is also called equitable. If the players prefer more of the cake to less, then the division is efficient if every crumb is distributed.

Now there is a cake waiting to be divided between Adam and Eve. Both like cake, and more is better than less to them. The cake is homogeneous inasmuch as Adam and Eve consider each slice of cake of the same size to be of equal value. Given this, we can tell Adam and Eve that a half-half division is fair, and give them a long list of philosophers whose theories of justice support this outcome. Or we make them read Chap. 12 of this book, which discusses the Nash solution, and let them apply this concept to their division problem. Alternatively, we can give Adam a knife to cut the cake into two pieces and give Eve the authority to choose one of the two pieces. If we let Adam know that Eve will choose, what result do we expect? Adam will cut the cake into two pieces of equal size; we get the half-half division, i.e., a fair outcome. Obviously, if Adam's hand trembles and he cuts the cake such that there is a larger and a smaller piece, Eve will choose the larger piece and Adam will envy her.

This procedure is well known under the label divide-and-choose.7 It is an example of mechanism design: A fair solution is implemented in the form of a Nash equilibrium. It matches the prescription of a social norm, i.e., a collective objective, with individually rational behavior. If the two sides of the matching correspond to each other, i.e., lead to equal divisions of the cake, then we say that the social norm is implemented via the Nash equilibrium (of a non-cooperative game). It seems that the divide-and-choose mechanism is widely accepted. However, how robust is this mechanism? Does it always produce a fair outcome? Let us assume that Adam prefers cream and Eve prefers chocolate.

7 For a formal analysis of the divide-and-choose mechanism, see van Damme (1987: 130ff).


It so happens that one half of the cake has cream on top and the other half is topped by chocolate. How will Adam cut the cake? It depends! If Eve cannot grab a piece of cake and then (re-)negotiate with Adam, Adam will take all of the cream section of the cake plus an additional share of the chocolate section, cut such that Eve is almost indifferent between the pure chocolate piece and the cream-plus-chocolate piece but still prefers the pure chocolate. The stronger her preference for chocolate, the smaller the piece of pure chocolate will be. In the end, Eve is left with a piece of pure chocolate that is only marginally larger than half of the total chocolate section of the cake.

Obviously, the just-described strategy of Adam presupposes that he is well informed about the preferences of Eve. The better his information, the more he can capture of the chocolate section preferred by Eve. (It is often observed that in such situations, the chooser will try to hide the intensity of his or her preferences behind a veil of indifference.) In general, there is a first-mover advantage to the divider. If this is common knowledge, we may expect that Adam and Eve will toss a coin to decide who cuts the cake.

The strategic situation is quite different if Eve can negotiate after grabbing her piece. If Adam follows the pattern just described, then Eve will take the cream-chocolate piece, and Adam is left with a share of the pure chocolate piece, which, of course, he appreciates less than the cream-chocolate piece and even less than a cream piece of the same size. This does not look like a good start for Adam going into negotiations with Eve.
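Adam's cut in the cream-and-chocolate case can be made precise under an explicit parameterization, which is our own and not the book's: suppose Eve values a unit of chocolate at c > 1 and a unit of cream at 1, the cake consists of a cream half and a chocolate half of size 1 each, and Adam offers Eve a pure chocolate piece of size 1 − x while keeping all the cream plus x of the chocolate. Eve takes the chocolate piece as long as c(1 − x) ≥ 1 + c·x, i.e., x ≤ (c − 1)/(2c).

```python
# Adam keeps all the cream plus the largest chocolate share x such that
# Eve still (weakly) prefers the pure chocolate piece of size 1 - x.
def adams_chocolate_share(c):
    # Eve's values: chocolate piece = c*(1 - x); Adam's piece = 1 + c*x.
    # Indifference: c*(1 - x) = 1 + c*x  =>  x = (c - 1) / (2c).
    return (c - 1) / (2 * c)

for c in (2, 10, 1000):
    x = adams_chocolate_share(c)
    print(c, round(x, 4), round(1 - x, 4))

# As c grows, Eve's piece 1 - x shrinks toward 1/2: only marginally
# more than half of the chocolate section, as described in the text.
```

The sketch reproduces the limit claim: the stronger Eve's preference for chocolate, the closer her piece comes to exactly half of the chocolate section.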

5.9 Theory of Moves

A particular structure of moves can stabilize a strategy pair which is not a Nash equilibrium. Perhaps this is not surprising, as the Nash equilibrium considers only one potential move away from the equilibrium for each player. If this one move is not profitable, then there is no particular motivation for this player to change his behavior. If this applies to all players, then we have a Nash equilibrium. In other words, the Nash equilibrium does not consider that player i could react to player j deviating from the Nash equilibrium. Consequently, it also does not take into account any counter-reaction of j to the reaction of i, and so on. If players take reactions and counter-reactions into account, however, we might get stable results which are not supported by a Nash equilibrium, as the following example illustrates.

Let us look at the Chicken Game in Matrix 5.2 in Sect. 5.2 and assume that, in the status quo, the players chose s11 and s21. Of course, the strategy pair


(s11,s21) is not a Nash equilibrium: Both players can improve, given that the other player does not “move” by choosing a different strategy. It seems that (s11,s21) does not represent a pair of stable strategies and the payoff pair (2,2) not a likely result. However, if player 1 considers choosing s12 instead of s11, in order to increase his payoff, he might assume that player 2 reacts by choosing s22. This forces player 1 to choose s11, unless he accepts the worst outcome 0 as a result of the strategy pair (s12,s22). However, if player 1 revises his choice s12 and selects strategy s11, player 2 is not motivated to change his strategy s22 and the payoff pair (1,3) prevails. This result corresponds to a Nash equilibrium, but implies a lower payoff for player 1 than what he earns in the initial assignment of strategies, i.e., (s11,s21). As similar reasoning applies to player 2, the pair (s11,s21) seems to be stable as neither player 1 nor player 2 is motivated to move, given the chain of reaction and counter-reaction just outlined, if the players have the capacity to think ahead. According to Brams and Wittman (1981), the strategy pair (s11,s21) is a nonmyopic equilibrium—however, it is not a Nash equilibrium. Thinking ahead stabilizes. Of course, the underlying move structure is of some importance. If (s12,s22) is reached, it is up to player 1 to do the next move. Otherwise, player 1 could hope that player 2 changes to strategy s21 and, given s12, the payoff pair (3,1) prevails. However, given the assumed alteration of moves, this is excluded. Yet, in real life it should matter who suffers most of the losses related to (0,0)—or, who is expected to suffer least. The answer should determine the sequence of moves. Moreover, the starting point (i.e., the status quo) matters. 
Look at Matrix 5.2 and assume that (s12,s21) describes the “point of departure.” Should player 1 be afraid that player 2 chooses s22, forcing player 1 to choose s11 so that the equilibrium (s11,s22) with payoffs (1,3) is reached? By the reasoning above, the answer is “yes.” Given this, player 1 is well advised not to rely on the stability of the Nash equilibrium (s12,s21) and to choose s11 instead of s12. Then the pair (s11,s21) results, which was identified above as a nonmyopic equilibrium, given the assumed move structure and the backward induction of the players. However, who assigns the first move to player 1? Is there a first-mover advantage? What happens if the players, doing backward induction, choose simultaneously, so that players 1 and 2 choose s11 and s22, respectively, with the resulting payoff pair (1,3)? Who has the next move? Note: There can be simultaneous moves in thought experiments.
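The look-ahead reasoning just described can be made concrete in a few lines of code. The following is a minimal sketch of our own, not Brams’s full definition of a nonmyopic equilibrium: players alternate single strategy switches, a player moves only if the anticipated final state pays him more than staying put, and a chain that revisits an earlier state stops there.

```python
# Chicken payoffs from Matrix 5.2: rows are player 1's strategies
# (s11, s12), columns player 2's (s21, s22).
payoff = {('s11', 's21'): (2, 2), ('s11', 's22'): (1, 3),
          ('s12', 's21'): (3, 1), ('s12', 's22'): (0, 0)}

def flip(state, i):
    """The state after player i switches to his other strategy."""
    other = {'s11': 's12', 's12': 's11', 's21': 's22', 's22': 's21'}
    return tuple(other[s] if j == i else s for j, s in enumerate(state))

def outcome(state, mover, visited=frozenset()):
    """Final state if `mover` may move now and the players alternate.
    A player moves only if the anticipated final state pays him more
    than the current one; a chain that revisits a state stops there."""
    if state in visited:
        return state
    anticipated = outcome(flip(state, mover), 1 - mover, visited | {state})
    if payoff[anticipated][mover] > payoff[state][mover]:
        return anticipated
    return state

# From the status quo (s11, s21), neither player wants to move first:
print(outcome(('s11', 's21'), 0))   # ('s11', 's21') -- payoffs (2, 2)
print(outcome(('s11', 's21'), 1))   # ('s11', 's21')
# From (s12, s21), letting player 2 move first ends in (s11, s22):
print(outcome(('s12', 's21'), 1))   # ('s11', 's22') -- payoffs (1, 3)
```

Despite its simplicity, the sketch reproduces the chain of reactions and counter-reactions above: (s11,s21) is stable under look-ahead although it is not a Nash equilibrium, while a first move by player 2 out of the Nash equilibrium (s12,s21) drives play to the payoff pair (1,3).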

94     M. J. Holler and B. Klose-Ullmann

The reasoning, here illustrated by the Chicken Game, is generalized in the Theory of Moves by Steven Brams (1994) and the pioneering work in Brams and Wittman (1981) and Zagare (1984).8 However, this theory postulates additional constraints: “A player will not move from an initial state if this move (i) leads to a less preferred final state (i.e., outcome); or (ii) returns to the initial state (i.e., makes the initial state the outcome)” (Brams 1994: 27). Taking (i) literally, given an initial state (s12,s21) with the corresponding payoff pair (3,1), player 1 cannot move first. Instead, he is helplessly exposed to player 2 choosing s22; player 1 can react to this choice of player 2 only by choosing s11, generating the outcome (1,3). This result suggests that (i) should be revised: A player should compare possible outcomes, like (2,2) and (1,3), if the initial state is not supported by the other player(s) as outcome, when deciding whether to move or not. Of course, player 2 will compare these results as well and rush to choose s22. This could result in the simultaneous choices s11 and s22, producing the payoff pair (1,3). Whether this will be the (final) outcome depends on who will have the opportunity of a next move. If player 2 is the lucky one, then he will abstain from moving and (1,3) will be the outcome. The design of the move structure, although decisive for the outcome, is somewhat ambiguous. There is a wide range of applications, and some variations arise if we assume that players look more than one step ahead. For instance, Brams (1994: 34ff; 2011: 57ff) applied the Theory of Moves to the story of Samson and Delilah from the Old Testament: After Samson, blinded by love, revealed to Delilah that his strength was embedded in his long hair, she had his hair cut while he slept and delivered him to the Philistines, who were his enemies. Was this the end of the story? There seems to be a close relationship between the Theory of Moves and the concept of a second-mover advantage.
However, while applying the Theory of Moves often ends in stabilizing a result that is not a Nash equilibrium, numerous examples in Holler (2019) demonstrate that the existence of a second-mover advantage suggests that nothing will happen at all—as there will be no first mover. To some extent, this difference is driven by alternative decision situations. What is “nothing happens” in a game? Doesn’t this describe the status quo—e.g., supported by the strategies (s11,s21)?

8 There is a recent discussion in Frahm (2019: 314–326).


References

Brams, S. J. (1994). Theory of moves. Cambridge: Cambridge University Press.
Brams, S. J. (2011). Game theory and the humanities: Bridging two worlds. Cambridge, Mass., and London: MIT Press.
Brams, S. J., & Wittman, D. (1981). Nonmyopic equilibria in 2×2 games. Conflict Management and Peace Science, 6, 39–62.
Camus, A. (1982 [1942]). The outsider. Harmondsworth: Penguin.
Frahm, G. (2019). Rational choice and strategic conflict: The subjectivistic approach to game and decision theory. Berlin and Boston: De Gruyter.
Holler, M. J. (2018). The economics of the good, the bad, and the ugly: Secrets, desires, and second-mover advantages. London and New York: Routledge.
Strouhal, E. (2000). Schach. Die Kunst des Schachspiels. Hamburg: Nikol Verlagsgesellschaft. An unauthorized reprint of E. Strouhal (1996), 8×8. Die Kunst des Schachspiels, Wien: Springer.
van Damme, E. (1987). Stability and perfection of Nash equilibrium. Heidelberg: Springer.
Zagare, F. (1984). Limited move equilibria in 2×2 games. Theory and Decision, 16, 1–19.

6 Chaos, Too Many and Too Few

…are closely related problems with regard to strategic decisions. Chaos is one of the most popular terms when the destiny of self-organizing systems is at stake and small causes produce large effects. Think of the notorious butterfly flapping its wings in the Amazon forest that triggered a tornado in Texas. The flapping wings represent a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena such as hail storms and hurricanes, droughts, and floods. There is no proof of cause and effect in these cases. But a causal relationship between “small causes and large effects,” the wing beat and the hail storm, cannot be excluded altogether. This is what Chaos Theory is all about. It was Edward Lorenz’s paper delivered in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled “Predictability: Does the Flap of a Butterfly’s Wings in Brazil set off a Tornado in Texas?,” that made the Butterfly Effect popular. But was the concept surprising? Was it new? In his Essai philosophique sur le gouvernement, où l’on traite de la nécessité, de l’origine, des droits, des bornes et des differentes formes de souveraineté, selon les principes de feu M. François de Salignac de la Mothe-Fénelon, Archbishop and Duke of Cambrai, published in 1721, some years after the War of the Spanish Succession, the Chevalier Andrew Ramsay, a Scottish Catholic who spent most of his time in France, refers to a mosquito that bites the hand of a young prince, who dies because of the inflammation of his hand. His death triggers a dispute and, in the end, war between the major powers in Europe erupts. The following quote demonstrates that the author was quite aware of the Butterfly Effect. “Le moindre mouvement d’un atome peut causer des révolutions innombrables dans le monde.


Un petit insecte venimeux, voltigeant dans l’air, pique la main d’un jeune prince; elle s’enflamme: l’inflammation augmente, l’enfant royal meurt: il s’élève des disputes sur la succession; l’Europe entière s’y intéresse; les guerres commencent partout, tout les empires sont renversés, et le premier mobile de tout ces révolutions a été l’action d’un animal invisible” (Ramsay 1980 [1721]: 28).1 However unlikely this event seems, there is no randomness involved, as this eighteenth-century author stresses. This perfectly fits the framework of twentieth-century Chaos Theory, which added the mathematical description of chaotic phenomena. But the standard nonlinear equation systems hardly ever apply when kings decide on making war (see above) or when Brian Arthur decides whether to visit the El Farol (see below).
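What such a “mathematical description” of sensitivity to initial conditions looks like can be illustrated with the logistic map x' = r·x·(1−x), a standard textbook example of deterministic chaos. The parameter r = 4 and the starting values below are illustrative choices of ours, not taken from the chapter:

```python
def orbit(x, r=4.0, steps=30):
    """Iterate the logistic map x' = r*x*(1-x) `steps` times from x."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two starting points differing by a "butterfly-sized" perturbation:
a, b = orbit(0.2), orbit(0.2 + 1e-7)
print(abs(a - b))   # after 30 iterations the trajectories have diverged
```

In the chaotic regime the tiny initial difference is roughly doubled at every step, so after a few dozen iterations the two trajectories bear no resemblance to each other: the wing beat and the hail storm in one line of arithmetic.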

6.1 The El Farol Problem or “Too Many People at the Same Spot”

On January 23, 1997, an article appeared in the German newspaper Süddeutsche Zeitung headlined “Tonight at the Bar?” It was about a music bar called El Farol situated in the old Indian town of Santa Fe, New Mexico. Every Thursday this bar featured a band playing Irish music with an often jam-packed audience. The article was written by W. Brian Arthur,2 who was of Irish descent and an ardent fan of Irish music. The article said that he hated nothing more than to be in a room filled up with brawling drunkards. Every Thursday, as the article goes on, he had to make the decision: To go or not to go to El Farol. Fortunately, Brian Arthur was working at the Santa Fe Institute conducting research on complex systems. This research deals with forecasting the behavior of individuals who want to be at the same spot at the same time although this spot should not be overcrowded. Brian Arthur’s job turned out to be a blessing in disguise for him. It was his professional interest to deal with problems that often occur because there are “too many people at the same place.” Other

1 As we could not locate an authorized translation of the text into English, although such a translation was said to have been published already in 1722, the following is our own translation: “The smallest movement of an atom can cause chaos in the world. A little poisonous insect, flying around, stings the young prince in the hand; the wound gets inflamed: the inflammation becomes worse and the young prince dies, his death leading to quarrels over the succession among all major European powers; everywhere, there are wars of succession. The starting point of this more than complex situation was the action of an almost invisible little animal.”
2 In this article, he is introduced as doing research in chaos theory and economics. Most likely, he is identical to the Brian Arthur who contributed to the literature on the QWERTY typewriter keyboard (see Chap. 4 above).


potential guests of El Farol had to cope with this phenomenon without any professional expertise or the possibility to publish an article. However, the article does not tell us whether Brian Arthur was happy about this advantage, or how he solved the problem of standing in a congested bar, suffering from not being able to enjoy the Irish music, a quite frequent experience. Yet, we do read that the problem was very complex and that the number of guests varied enormously, this being confirmed by computer simulations that were run at the Institute. Here is the reason for such complexity: If, after thorough considerations, Brian Arthur comes to the conclusion that on this very Thursday there will be only very few guests in El Farol and therefore he will be able to enjoy the music, others will come to the same conclusion on the basis of “parallel considerations.” The result is a music bar overcrowded with brawling drunkards (see above). If, however, Brian Arthur comes to the conclusion that tonight El Farol will be packed and therefore prefers visiting his Institute to simulate chaos, other potential visitors may refrain from listening to Irish music on the basis of the same reasoning. As a consequence, El Farol will stay empty. In both cases, Brian Arthur will regret his decision. The complexity of the decision problem faced by Brian Arthur and other fans of Irish music in Santa Fe with the same prejudice against congested bars and brawling drunkards can be illustrated by Matrix 6.1. All other guests, who according to Brian Arthur and other friends of Irish music would make up the large crowd in El Farol, are represented by Patrick: Thus, if Brian and Patrick visit El Farol on the same Thursday, neither of them can enjoy the music. Each would prefer to stay at home or visit the Institute. But each of them would prefer being at El Farol if the other one is not there.

Matrix 6.1  The El Farol problem

                                Patrick
                       to El Farol    not to El Farol
Brian
   not to El Farol        (1,2)            (0,0)
   to El Farol            (0,0)            (2,1)


Let us assume that Brian and Patrick are indifferent between both of them being at El Farol and neither of them being there. Then Matrix 6.1 shows that the El Farol problem has a decision structure very similar to that of the Battle of the Sexes, which we discussed in Sect. 5.6. There is one substantial difference: In the El Farol problem, the two players consider the outcome poor when both are at the same place at the same time, whereas in the Battle of the Sexes the outcome is considered poor if the two players are at different places at the same time. The game in Matrix 6.1 has two Nash equilibria in pure strategies, with the payoff pairs (2,1) and (1,2). It needs just the flapping of a butterfly or a piece of Irish music from the local radio station for El Farol to get overcrowded—filled up with brawling drunkards.
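The two pure-strategy equilibria can be verified mechanically by checking every cell of Matrix 6.1 for a profitable unilateral deviation. The numerical encoding below (0 = “not to El Farol”, 1 = “to El Farol”) is ours:

```python
# Matrix 6.1: payoffs are (Brian, Patrick); 1 = go, 0 = stay away.
payoffs = {(0, 0): (0, 0), (0, 1): (1, 2),
           (1, 0): (2, 1), (1, 1): (0, 0)}

def pure_nash(payoffs):
    """Strategy pairs from which no player gains by deviating alone."""
    return [(a, b) for (a, b), (u1, u2) in payoffs.items()
            if all(payoffs[(x, b)][0] <= u1 for x in (0, 1))
            and all(payoffs[(a, y)][1] <= u2 for y in (0, 1))]

print(pure_nash(payoffs))   # [(0, 1), (1, 0)] -- payoffs (1,2) and (2,1)
```

The cells where both players are at the same place, or both away, fail the test: each player would rather switch, which is exactly the instability described above.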

6.2 Self-referential Systems

The example shows that the complexity of the problem lies in the strategic interrelationship of the decisions. An event is not determined by a single player but by the decisions of two (or more) players, and each player has to consider the decisions of the other players if she wants to forecast the outcome. The considerations that we assumed for Brian Arthur may lead to their own “negation”: It would be best for Brian to go to El Farol and to have Patrick stay at home. But Brian has no control over Patrick’s behavior. The same reasoning, however, applies to Patrick. Whatever Brian is considering, it necessarily leads to the “wrong solution” if others think the same. If, by putting himself in Patrick’s shoes, Brian assumes that Patrick has the same preferences and the same capacity of decision making, his considerations become self-referential.3 They relate to one another and consequently do not create any information, but make existing information “useless.” The structure of Brian’s thoughts can be characterized by the following construction: The following sentence is true! The preceding sentence is wrong! What did Epimenides, the Cretan, say? All Cretans are lying!

3 In the early 1980s, the concept of self-referential systems became popular through Douglas Hofstadter’s book Gödel, Escher, Bach. In this book, Hofstadter (1980) discusses the significance of this concept in mathematics, art, and music.


What did Patrick McNutt4 say when leaving El Farol? This bar is always packed with people! Nobody will go there any more! If there is no information, anything can happen. Computers that are fed with self-referential systems of statements either produce no results, or they produce results with chaotic variations. The latter is based on the assumption that the logic of self-referential systems of statements or decisions is replaced or amended by less rigid conditions permitting decisions and consequential behavior, leading to dynamic processes that can be depicted by nonlinear equations. With regard to this interpretation, Brian Arthur’s thoughts may be considered the micro-conditions. By the assumption that the other visitors are brooding over the same thoughts, these micro-conditions are transmitted to the system “Visit to El Farol.” In this macro-context, the self-referential statements and information cancel out. Thus, on a macro-level, there might be results that contradict the considerations on the micro-level: If every potential visitor comes to the conclusion that El Farol will be overcrowded and therefore refrains from going there, the bar will be empty. Does this contradict the assumption of common knowledge of rationality (CKR) paired with the assumption of consistently aligned beliefs (CAB)—proposing “that everybody’s beliefs are consistent with everybody else’s”?5
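The macro-level flip-flop between an empty and a packed bar can be mimicked by a toy dynamic in which every potential guest expects tonight’s attendance to equal last Thursday’s and goes only if that expectation is below the comfort threshold. The numbers (100 potential guests, threshold 60, an empty bar in week 0) are illustrative assumptions of ours:

```python
N, threshold = 100, 60          # illustrative assumptions
attendance = [0]                # week 0: the bar happened to be empty
for week in range(6):
    expected = attendance[-1]   # everyone extrapolates last Thursday
    attendance.append(N if expected < threshold else 0)
print(attendance)               # [0, 100, 0, 100, 0, 100, 0]
```

On the micro-level every guest reasons sensibly, yet on the macro-level the shared forecast negates itself every single week: the common expectation of a full bar empties it, and vice versa.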

6.3 Solutions to the El Farol Problem

With his computer simulation, Brian Arthur certainly found a solution to the El Farol problem. Maybe he had plenty of time to do so because El Farol was often already overcrowded when he decided to spend an evening listening to Irish music. So he went back to the Institute and to his computer. Unfortunately, his solution was not discussed in the article published in the Süddeutsche Zeitung. Thus, we can only guess! Let us assume that Brian Arthur, following his considerations, comes to the conclusion that there is a self-referential system that does not permit one to form a rational expectation regarding the behavior of the other fans of Irish music in Santa Fe. He could flip a coin: When it shows a number, he

4 Patrick McNutt represents player Patrick of Matrix 6.1. He is not the Irish economist who published books like Game Embedded Strategy: Introducing Framework Tn3 and GEMS for Business Strategy, The Economics of Public Choice, and Political Economy of Law.
5 For CKR and CAB, see Sect. 1.6.


will go to El Farol. Otherwise he will go to the Institute or stay at home. This decision procedure corresponds to the principle of insufficient reason or the Laplace principle, respectively, as it is impossible to differentiate between Patrick’s alternatives of action. Thus, his choices become equally probable to Brian, and there is no reason for Brian to behave differently. Flipping a coin with probability ½, however, does not correspond to a mixed-strategy equilibrium of the game in Matrix 6.1, as we will see in Chap. 10. The principle of insufficient reason is applied if there is no reason to differentiate between the available strategies. Consequently, each strategy is chosen with the same probability. Following another approach, the simultaneous game in Matrix 6.1 is changed to a sequential game. In our case, this implies “Don’t think, but act!” Brian Arthur goes to El Farol. If, in his opinion, the critical mass indicating overcrowding has not yet been reached, he enters. If all potential visitors share the same definition of critical mass and all detest conditions of overcrowding, it is assured that Arthur can enjoy the Irish music by arriving earlier than others. However, sometimes he might go to El Farol in vain as it is already full before he arrives. But the sooner he is there, the more unlikely this gets. Therefore, we often go to the movie theater or to the football stadium early before the performance or the match starts—just to experience that others are doing the same. As a consequence, waiting time may inflate. The real problem of this solution seems to be the following: There are guests who are going to El Farol only when it is full because they expect to be accepted by the crowd of brawling drunkards as equal company.
Also, in this context we should ask “Who are the players?” What good is the coordination among the lovers of Irish music if there are some fellows who, under the influence of alcohol, love to join in the singing of the band—missing the tones? Instead of arriving early and, nevertheless, being deprived of the pleasure of listening to Irish music, Arthur and a group of music lovers could rent El Farol each Thursday. Maybe the manager of El Farol, Juan M., starts to charge money for entering the music bar—not because he wants to deter potential guests or to collect some extra money but to make visits to his bar more attractive by reducing the numbers. Once, on a Thursday, the bar stayed empty. Juan M. was flabbergasted because on the Thursday the week before many guests had gone home early in the evening, as the bar was so crowded and noisy that the air was rarefied and the music was hardly identifiable. He was amazed when the bar was again crowded the following Thursday. Only when he talked to Patrick did he understand the problem. Whenever Patrick and all the others expected El Farol to be congested, it remained empty, and when they learned that it was empty, it became overcrowded on the following Thursday. The visits of customers showed a business cycle with a few Thursdays of “balanced occupation.” As a consequence, Juan M. decided to charge an entrance fee and hoped that nobody would expect his bar to be overcrowded. Again, the question “Who are the players?” is relevant. Is Juan M. a player?
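The earlier remark that a fifty-fifty coin flip does not correspond to a mixed-strategy equilibrium of Matrix 6.1 can be checked directly: against a coin-flipping Patrick, Brian strictly prefers going, so the coin is not self-enforcing. Indifference requires that each player go with probability 2/3. This is a small calculation of ours, anticipating the treatment of mixed strategies in Chap. 10:

```python
from fractions import Fraction

# Matrix 6.1: payoffs (Brian, Patrick); 1 = go, 0 = stay away.
payoffs = {(0, 0): (0, 0), (0, 1): (1, 2),
           (1, 0): (2, 1), (1, 1): (0, 0)}

def brian_payoff(go, q):
    """Brian's expected payoff from `go` when Patrick goes w.p. q."""
    return payoffs[(go, 1)][0] * q + payoffs[(go, 0)][0] * (1 - q)

half, q_star = Fraction(1, 2), Fraction(2, 3)
print(brian_payoff(1, half), brian_payoff(0, half))      # 1 and 1/2
print(brian_payoff(1, q_star), brian_payoff(0, q_star))  # 2/3 and 2/3
```

Against the coin flip, going yields 1 and staying only 1/2, so Brian would abandon the coin; only when Patrick goes with probability 2/3 (and, by symmetry, Brian as well) is neither player tempted to deviate.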

6.4 Market Congestion Game

Matrix 6.2 shows a Market Congestion Game. Firms 1 and 2 are possible suppliers on the Markets A and B, each firm being present on one of the two markets only. Thus, Market A and Market B are the alternative strategies between which the firms have to choose. Market A could be a market for luxury goods and Market B a market for the cheaper versions thereof. Neither firm can possibly supply goods on both markets without harming its reputation. Cheap suppliers, as well, may hurt their reputation if they have too much of an eye on higher price ranges. For instance, we were told that successful art galleries represent either high-price or low-price artists but not both. We assume that the payoffs of the firms are identical with their profits. This assumption seems to be straightforward, at least for some ranges of profits, and concurs with the standard assumption of profit maximization that dominates the textbook versions of the theory of the firm.

Matrix 6.2  Market Congestion Game

                          Firm 2
                 Market A      Market B
Firm 1
   Market A       (1,1)         (2,1)
   Market B       (1,2)         (0,0)

Market B is congested if both firms supply to it. Then profits are zero. On the other hand, let us assume that if both firms supply Market A, they can come to terms with each other such that both make profits. This could be achieved by voluntarily limiting quantities or by differentiating the quality of their goods. The latter results if both firms have customers faithful to their products and therefore are able to pursue successful price policies on this market. These assumptions are highly stylized, but they concur with empirical observations of markets for luxury goods.


If firm 1 leaves Market A and offers its products solely on Market B, firm 2 can profit from the fact that it now has a monopoly position on Market A, thus achieving higher profits than in the initial position. Matrix 6.2 assumes that firm 1, as the only supplier to Market B, achieves the very same profit there as on Market A when selling jointly with firm 2. Perhaps there is no immediate economic basis for this assumption—although it is not implausible—but it makes the strategic decision situation more interesting. In case both firms are active on Market B, profits fall below the level that they can achieve on Market A. It might be that on the low-price Market B a differentiation of goods is impossible, and price competition leads to zero profit. This result is known as the Bertrand paradox: If there is an equilibrium, it is defined by “price = marginal costs = average costs.” Of course, “price = average costs” implies zero profits. The Market Congestion Game has the following three Nash equilibria in pure strategies: (Market A, Market A), (Market A, Market B), and (Market B, Market A). We can see that the equilibrium (Market A, Market A) is weakly inefficient: one of the two firms could improve without the other firm being worse off regarding its profits. However, a Pareto improvement, leading to an efficient outcome, presupposes that one of the two firms opts for Market B. But why should a firm, starting from the equilibrium (Market A, Market A), shift its supply to Market B? If there were incentives to do so, (Market A, Market A) would not represent a Nash equilibrium. Why should a firm ever decide in favor of Market B? Compared to doing commerce on Market A, it is not advantageous at all—and there is the risk that the worst of all results materializes, i.e., the zero-profit situation corresponding to the strategy pair (Market B, Market B).
In order to find an answer to this question, we shall deal with this game again in the next chapter. Apparently, the concept of the Nash equilibrium does not suffice to find a convincing answer to the question of which strategies the firms choose or should choose. Every strategy choice in this game can be justified by a Nash equilibrium. In game-theoretical terms, all strategy choices are rationalizable in this game. The concept of rationalizable strategies will be discussed in the following chapter (in Sect. 7.4).
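The weak inefficiency of (Market A, Market A) can be checked mechanically by listing, for each cell of Matrix 6.2, the strategy pairs that make one firm better off without hurting the other. The encoding below is our own:

```python
# Matrix 6.2: payoffs are (firm 1, firm 2) profits.
payoffs = {('A', 'A'): (1, 1), ('A', 'B'): (2, 1),
           ('B', 'A'): (1, 2), ('B', 'B'): (0, 0)}

def pareto_improvements(state):
    """Strategy pairs weakly better for both firms and strictly
    better for at least one, compared with `state`."""
    u1, u2 = payoffs[state]
    return [s for s, (v1, v2) in payoffs.items()
            if v1 >= u1 and v2 >= u2 and (v1, v2) != (u1, u2)]

print(pareto_improvements(('A', 'A')))   # [('A', 'B'), ('B', 'A')]
print(pareto_improvements(('A', 'B')))   # [] -- this equilibrium is efficient
```

Both asymmetric equilibria Pareto-dominate (Market A, Market A), yet neither firm can reach them by a profitable unilateral move, which is exactly the tension discussed above.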

6.5 Viruses for Macintosh

The following story about the competitors Macintosh and Windows has become a classic. It reflects the situation ten to fifteen years ago, i.e., before “cyber warfare” entered the computer scene. The users of Microsoft’s Windows


were daily experiencing a special kind of congestion: viruses. Usually, these viruses were caught by anti-virus software, but now and then such an intruder succeeded in destroying the hard disk, sending non-authorized mails to all addresses listed in the email address book, or affecting the system even more deleteriously. Although we do not have to pay for updating the anti-virus software, this reminds us nevertheless of the risks connected with using the computer. Someone has to pay for developing and updating such programs. The amount incurred worldwide for this work is said to exceed the gross national product of some countries. In view of such huge amounts, it is not unjustifiable to ask who is writing viruses and infiltrating them into the World Wide Web—and why he or she does so. There are at least two answers to this question. One answer is the following. It goes without saying that it is entirely mistaken to assume that those supplying the market with anti-virus software and updates that can be bought or subscribed to are responsible for releasing the viruses. No, these people must be “crazy nerds (kids).” These wizards want to demonstrate that they are the greatest in the IT world—and perhaps get an adequate job in the future. In order to impress as many computer users as possible, they write their viruses almost exclusively for the software of the market leader Microsoft and its Windows system, with the competitor Macintosh being largely spared, so far. IT-nerds tend to be Macintosh users. Matrix 6.3 is an attempt to show the decision situation of users and “virus producers” in its simplest form, the “user” being player 1, the “virus producer” being player 2. Furthermore, we assume b