Risky Curves
On the empirical failure of expected utility

Daniel Friedman, R. Mark Isaac, Duncan James, and Shyam Sunder

Routledge
Taylor & Francis Group
LONDON AND NEW YORK

First published 2014 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017
First issued in paperback 2017
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2014 Daniel Friedman, R. Mark Isaac, Duncan James, and Shyam Sunder
The right of Daniel Friedman, R. Mark Isaac, Duncan James, and Shyam Sunder to be identified as authors of this work has been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloguing in Publication Data
Friedman, Daniel, 1947–
Risky curves : on the empirical failure of expected utility / Daniel Friedman, R. Mark Isaac, Duncan James, and Shyam Sunder.
pages cm
ISBN 978-0-415-63610-0 (hardback) — ISBN 978-1-315-81989-1 (e-book)
1. Risk. 2. Utility theory. 3. Decision making. I. Title.
HB615.F55 2014
338.5—dc23
2013030113
ISBN 13: 978-1-138-09646-2 (pbk)
ISBN 13: 978-0-415-63610-0 (hbk)
Typeset in Times New Roman by Out of House Publishing

Contents

List of figures
List of tables
Acknowledgments and permissions
List of abbreviations
1 The challenge of understanding choice under risk
2 Historical review of research through 1960
3 Measuring individual risk preferences
4 Aggregate-level evidence from the field
5 What are risk preferences?
6 Risky opportunities
7 Possible ways forward
Index
Figures

2.1 A Friedman–Savage Bernoulli function
2.2 A Markowitz Bernoulli function
3.1 Alternative lotteries presented as pie charts
3.2 Data from Isaac and James (2000)
3.3 Dorsey and Razzolini decision interface with probabilities of winning displayed
3.4 Clock format Dutch auction
3.5 Tree format Dutch auction
4.1 Friedman–Savage Bernoulli function and the optimal gamble
5.1 Scatterplot of σ (Stdev) versus x̄ (Exp(loss)) for 121 lotteries (each with a Uniform distribution), with fitted regression line
5.2 Scatterplot of σ (Stdev) against x̄ (Exp(loss)), with fitted regression line, for 100 lotteries on [-0.5, 0.5] governed by the beta distribution with differing parameters
6.1 White cup/black faces in profile
6.2 Estimates of Bernoulli functions for nine oil men
6.3 Smith and Stulz (1985) figure 1
6.4 Smith and Stulz (1985) figure 2
6.5 Net payoff functions (y = net payoffs; x = gross payoffs)
7.1 Neuron activity over time (binned and averaged) across the five different probability of reinforcement conditions
7.2 Mean coefficient estimates as a function of posterior winning probability, plotted in separate panels for left and right side of brain
Tables

3.1 The lottery menu of Binswanger (1980)
3.2 Lottery choice frequencies in Binswanger (1980)
3.3 Choice table proposed in Holt and Laury (2002)
3.4 Lottery treatment payoffs from Binswanger (1981)
3.5 Small stakes binary choices from Cox, Sadiraj, Vogt, and Dasgupta (in press) Table 1
3.6 Binary choices conducted in Calcutta from Cox, Sadiraj, Vogt, and Dasgupta (in press) Table 5
3.7 Calibrations for probability weighting functions reported in Cox, Sadiraj, Vogt, and Dasgupta (in press) Table 7
3.8 Treatments from Cox, Sadiraj, and Schmidt (2011)
Acknowledgments and permissions
We gratefully acknowledge the following people who have given their time to read earlier drafts of this book: Paul Beaumont, Sean Collins, James Cox, Shane Frederick, Dave Freeman, Antonio Guarino, Gary Hendrix, Ryan Oprea, and Matthias Sutter. Likewise, we are thankful for the helpful correspondence of Peter Bossaerts. The four authors entered into this project after turning anew to the questions raised in two previous working papers: James and Isaac and Friedman and Sunder (referenced in Chapter 6). We would like to thank all those who provided comments on that research in both earlier and more recent incarnations. Our sincere appreciation goes to Susan Isaac who copy-edited the manuscript prior to submission and to Tom Campbell and Qin Tan who prepared the bibliographies. The usual disclaimer applies; we alone are responsible for any and all remaining errors.

Permission to reproduce Figure 3.1 (from Abdellaoui, M., Barrios, C., and Wakker, P. P. (2007) "Reconciling Introspective Utility with Revealed Preference: Experimental Arguments Based on Prospect Theory," Journal of Econometrics 138(1): 356-378, p. 370, Figure 5. Online. Available http://people.few.eur.nl/wakker/pdfspubld/07.1mocawa.pdf (accessed June 19, 2013)) is gratefully acknowledged from Elsevier Limited. Permission to reproduce Figure 3.2 (from Isaac, R. M., and James, D. (2000) "Just Who Are You Calling Risk Averse?" Journal of Risk and Uncertainty 20(2): 177-187) is gratefully acknowledged from Springer Publishing. Permission to reproduce Figure 3.3 (from Dorsey, R., and Razzolini, L. (2003) "Explaining Overbidding in First Price Auctions Using Controlled Lotteries," Experimental Economics 6(2): 123-140) is gratefully acknowledged from Springer Publishing. Permission to reproduce Figure 6.2 (from Grayson, C. J. (1960) Decisions Under Uncertainty: Drilling Decisions by Oil and Gas Operators. Cambridge, MA: Harvard University Press) is gratefully acknowledged from Harvard Business Publishers. Permission to reproduce Figures 6.3 and 6.4 (from Smith, C. W., and Stulz, R. M. (1985) "The Determinants of Firms' Hedging Policies," Journal of Financial and Quantitative Analysis 20(4): 391-405) is gratefully acknowledged from Cambridge University Press.
Permission to reproduce Figure 7.1 (from Fiorillo, C. D., Tobler, P. N., and Schultz, W. (2003) "Discrete Coding of Reward Probability and Uncertainty by Dopamine Neurons," Science 299: 1898-1902) is gratefully acknowledged from The American Association for the Advancement of Science. Permission to reproduce Figure 7.2 (from Preuschoff, K., Quartz, S., and Bossaerts, P. (2008) "Markowitz in the Brain?" Revue d'Economie Politique 118(1): 75-95) is gratefully acknowledged from Elsevier Limited.

List of abbreviations
ARA  absolute risk aversion
AREC  Acme Resource Exploration Corporation (hypothetical entity)
BDM  Becker, DeGroot, and Marschak
CAPM  capital asset pricing model
CARA  constant absolute risk aversion
CPT  Cumulative Prospect Theory
CRRA  constant relative risk aversion
CRRAM  constant relative risk aversion model
CRT  Cognitive Reflection Test
DM  decision maker
DMU  diminishing marginal utility
EUT  Expected Utility Theory
EWA  Experience Weighted Attraction
FPSB  first-price sealed-bid (auction)
fMRI  functional Magnetic Resonance Imaging
HL  Holt and Laury
i.i.d.  independent identically distributed
PAI  pay all independently
PAC  pay all correlated
PAS  pay all sequentially
POR  pay one randomly
RRA  relative risk aversion
UIP  uncovered interest parity
VNM  Von Neumann and Morgenstern
1 The challenge of understanding choice under risk
Life is uncertain. We hardly know what will happen tomorrow; our best-laid plans go awry with unsettling frequency. Even the recent past is often a matter of conjecture and controversy. Everyday decisions, small and large, are made without certainty as to what will happen next. It would therefore be comforting to have a well-grounded theory that organizes our observations, guides our decisions, and predicts what others might
do in this uncertain world. Since the 1940s most economists have believed they have had such a theory in hand, or nearly so with only a few more tweaks needed to tie up loose ends. That is, most economists have come to accept that Expected Utility Theory (EUT), or one of its many younger cousins such as
Cumulative Prospect Theory (CPT), is a useful guide to behavior in a world in
which we must often act without being certain of the consequences. The purpose of this book is to raise doubt, and to create some unease with the current state of knowledge. We do not dispute that the conclusions of EUT follow logically from its premises. Nor do we dispute that, in a sufficiently simple world, EUT would offer good prescriptions on how to make choices in risky situations. Our doubts concern descriptive validity and predictive power. We will argue that EUT (and its cousins) fail to offer useful predictions as to what actual people end up doing. Under the received theory, it is considered scientifically useful to model choices under risk (or uncertainty) as maximizing the expectation of some
curved function of wealth, income, or other outcomes. Indeed, many social
scientists have the impression that by applying some elicitation instrument to collect data, a researcher can estimate some permanent aspect of an individual's attitudes or personality (e.g., a coefficient of risk aversion) that governs the individual's choice behavior. This belief is not supported by evidence accumulated over many decades of observations. A careful examination of empirical and theoretical foundations of the theory of choice under uncertainty is therefore overdue.
To begin with the basics: What do we mean by “uncertainty” and “risk”?
Economists, starting with, and sometimes following, Frank Knight (1921), have redefined both words away from their original meaning.¹
In the standard dictionary definition, risk simply refers to the possibility of harm, injury, or loss. This popular concept of risk applies to many specialized domains including medicine, engineering, sports, credit, and insurance. However, in the second half of the twentieth century, a very different definition of risk took hold among economists. This new technical definition refers not to the possibility of harm but rather to the dispersion of outcomes inherent in a probability distribution. It is typically measured as variance or a similar statistic. Throughout this book we will be careful to distinguish the possibility-of-harm meaning of risk from the dispersion meaning.

Although the notion of risk as dispersion seems peculiar to laymen, economists acclimated to it easily because it dovetails nicely with EUT. For centuries, economists have used utility theory to represent how individuals construct value. In the 1700s Daniel Bernoulli (1738) first applied the notion to an intriguing gamble, and since the 1940s the uses of expected utility have expanded to applications in a variety of fields, seemingly filling a void. At the heart of Expected Utility Theory is the proposition that we each, individually or as members of a defined class, have some particular knowable attitudes towards uncertain prospects, and that those attitudes can be captured, at least approximately, in a mathematical function. In various contexts, it has been referred to as a value function (in Prospect Theory), or a utility of income function, or a utility of wealth function. Following the standard textbook (Mas-Colell, Whinston, and Green [1995]) we shall often refer to it as a Bernoulli function. Such a function maps all possible outcomes into a single-dimensional cardinal scale representing their desirability, or "utility." Different individuals may make different choices when facing the same risky prospects (often referred to as "lotteries"), and such differences are attributed to differences in their Bernoulli functions.

In particular, the curvature of an individual's Bernoulli function determines how an individual reacts to the dispersion of outcomes, the second definition of risk. Because the curvature of the Bernoulli function helps govern how much an individual would pay to avoid a given degree of dispersion, economists routinely refer to curvature measures as measures of "risk aversion." Chapter 2 explains the evolution and current form of EUT, and the Appendix to Chapter 2 lays out the mathematical definitions for the interested reader. Presently, we simply point out that in its first and original meaning, aversion to risk follows logically from the definition. How can one not be averse to the possibility of a loss? If a person somehow prefers the prospect of a loss over that of a gain, or of a greater loss over a smaller loss, in what sense can the worse outcome be labeled a "loss" in the first place? By contrast, under the second definition of risk as dispersion of outcomes, aversion to risk is not inevitable; aversion to, indifference to, and affinity for risk remain open possibilities.

It is a truism that to deserve attention, a scientific theory must be able to predict and explain better than known alternatives. True predictions must, of course, be out-of-sample, because it is always possible to fit a model with enough free parameters to a given finite sample. That exercise is called "overfitting," and it has no scientific value unless the fitted model can predict outside the given sample. Any additional parameters in a theory must pay their way in commensurate extra explanatory power, in order to protect against needless complexity.

We shall see in Chapter 3 that Expected Utility Theory and its many generalizations have not yet passed this simple test in either controlled laboratory or field settings. These theories arrive endowed with a surfeit of free parameters, and sometimes provide close ex post fits to some specific sample of choice data. The problem is that the estimated parameters, e.g., risk-aversion coefficients, exhibit remarkably little stability outside the context in which they are fitted. Their power to predict out-of-sample is in the poor-to-nonexistent range, and we have seen no convincing victories over naive alternatives.

Other ways of judging a scientific model include whether it provides new insights or consilience across domains. Chapter 4 presents extensive failures and disappointments on this score. Outside the laboratory, EUT and its generalizations have provided surprisingly little insight into economic phenomena such as securities markets, insurance, gambling, or business cycles.

After almost seven decades of intensive attempts to generate and validate estimates of parameters for standard decision theories, it is perhaps time to ask whether the failure to find stable results is the result. Chapter 5 pursues this thought while reconsidering the meaning and measures of risk and of risk aversion. But does it really matter? What is at stake when empirical support for a theory is much weaker than its users routinely assume? We write this book because the widespread belief in the explanatory usefulness of curved Bernoulli functions has harmful consequences.

1. It can mislead economists, especially graduate students. Excessively literal belief in EUT, or CPT, or some other such model as a robust characterization of decision making can lead to a failed first research program, which could easily end a research career before it gets off the ground. We hope that our book will help current and future graduate students be better informed and avoid this pitfall.
2. It encourages applied researchers to accept a facile explanation for deviations, positive or negative, that they might observe from the default prediction, e.g., of equilibrium with risk-neutral agents. Because preferences are not observable, explaining deviations as arising from risk aversion (or risk seeking) tends to cut off further inquiry that may yield more useful explanations. For example, we will see in Chapter 3 that, besides risk aversion, there are several intriguing explanations for overbidding in first-price sealed-bid auctions.
3. It impedes decision theorists' search for a better descriptive theory of choice. Given the unwarranted belief that there are only a few remaining gaps in the empirical support for curved Bernoulli functions, many decision theorists invest their time and talent into tweaking them further, e.g., by including a probability weighting function or making the weighting function cumulative. As we shall argue, these variants add complexity without removing or reducing the defects of the basic EUT, and the new free parameters buy us little additional out-of-sample predictive power.
The question remains, what is to be done? Science shouldn't jettison a bad theory until a better one is at hand. Bernoulli functions and their cousins have dominated the field for decades, but unfortunately we know of no full-fledged alternative theory to replace them. The best we can do is to offer an interim approach. In Chapter 6 we show how orthodox economics offers some explanatory power that has not yet been exploited. Instead of explaining choice by unobservable preferences (represented, e.g., by estimated Bernoulli functions), we recommend looking for explanatory power in the potentially observable opportunity sets that decision makers face. These often involve indirect consequences (e.g., of frictions, bankruptcy, or higher marginal taxes), and some of them can be analyzed using the theory of real options. Beyond that, we recommend taking a broader view of risk, more sensitive to its first meaning as the possibility of loss or harm.

The interim approach in Chapter 6 has its uses, but we do not believe that it is the final answer. In Chapter 7 we discuss process-based understanding of choice. We speculate on where, eventually, a satisfactory theory might arise. Will neurological data supply an answer? What about heuristics/rule-of-thumb decision making? Can insights from these latter approaches be integrated with the modeling structure outlined in Chapter 6? We are cautiously optimistic that patient work along these lines ultimately will yield genuine advances.

Notes

1 Knight said that a decision maker faced risk when probabilities over all possible future states were truly "known" or "measurable," and faced uncertainty when these probabilities (or some of the possible outcomes) were not known. Knight himself noted that this distinction is different from that of popular discourse.
Bibliography

Bernoulli, D. (1738) "Exposition of a New Theory on the Measurement of Risk," trans. Louise Sommer (1964) Econometrica 22: 23-26.
Knight, F. H. (1921) Risk, Uncertainty and Profit. Boston: Hart, Schaffner and Marx.
Mas-Colell, A., Whinston, M. D., and Green, J. R. (1995) Microeconomic Theory. Oxford: Oxford University Press.
2 Historical review of research through 1960
This is an illustration of the ephemeral nature of utility curves.
C. Jackson Grayson, 1960, 309
The theory of choice under uncertainty is marked by two seminal works separated by more than two centuries, with remarkably little in the interim. These two contributions are D. Bernoulli's “Exposition of a New Theory on the Measurement of Risk” (1738) and John Von Neumann and Oskar Morgenstern’s Theory of Games and Economic Behavior (1943 [1953]). A metaphor of “bookends” can be used to describe seminal works that bracket
a large intervening literature. Because these two works stand almost alone, a
more appropriate metaphor might be that they are “virtual bookends” across 200 years.
In his "mean utility [moral expectation]" Bernoulli introduced many of the
critical elements of what is now called Expected Utility Theory. Although
formal expositions of theories of risk attitudes came later, the core of his argument is a parallel assertion about an individual’s nonlinear “utility” for money: specifically that individuals have diminishing marginal utility of income. Further, Bernoulli asserted a “moral expectation” that is a log function of income: The motivating factor for Bernoulli was the infamous St. Petersburg Paradox. This was a gamble that pays 2” ducats with probability 2* for every n = 1, 2, ... 0%, and so has an expected value 1 +1+1+...
=.
Apparently, neither Bernoulli nor his contemporaries could imagine trading
anything greater than a modest amount of ducats for this gamble. Bernoulli
“solved” the St. Petersburg Paradox by asserting that what mattered was not the mathematical expectation of the monetary returns of the St. Petersburg gamble, but rather the mathematical expectation of the utility of each of the outcomes, which implied only a modest monetary value of the gamble for most individuals. Even as Bernoulli’s logarithmic model became the foundation for the more general concave structure of the “utility function” that later writers used to axiomatize choice under uncertainty and to characterize risk attitudes,
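The arithmetic is easy to check numerically. The sketch below is our own illustration, not part of Bernoulli's exposition: it truncates the infinite gamble at a large stage N and contrasts the truncated expected monetary value with the certainty equivalent implied by a logarithmic Bernoulli function, taking an arbitrary illustrative initial wealth w0.

```python
import math

N = 60       # truncation stage; the untruncated expected value is infinite
w0 = 100.0   # assumed initial wealth in ducats, purely illustrative

# Expected monetary value of the truncated gamble: each stage contributes
# (probability 2**-n) * (payoff 2**n) = 1, so the sum is simply N.
expected_value = sum(2**-n * 2**n for n in range(1, N + 1))

# Expected log utility of final wealth, and the certainty equivalent c solving
# ln(w0 + c) = E[ln(w0 + payoff)]; the probability mass beyond stage N is negligible.
e_log = sum(2**-n * math.log(w0 + 2**n) for n in range(1, N + 1))
certainty_equivalent = math.exp(e_log) - w0

print(expected_value)                  # 60.0 here, and growing without bound in N
print(round(certainty_equivalent, 2))  # a single-digit number of ducats at w0 = 100
```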
Even as Bernoulli's logarithmic model became the foundation for the more general concave structure of the "utility function" that later writers used to axiomatize choice under uncertainty and to characterize risk attitudes, Bernoulli recognized situations in which a utility-of-income function would not be concave. However, he saw these counterexamples only as "exceedingly rare exceptions" (25).

Over two hundred years later, Von Neumann and Morgenstern developed an axiomatized structure of expected utility over "lotteries" that referenced Bernoulli's logarithm function as a special case. This concept of expected utility was central to their exposition of a theory of games. They went on to describe a process by which such a utility function could be estimated from data on an individual's choices between a series of pairs of "prospects" with certain versus risky outcomes.
In hindsight, it is astonishing that Bernoulli’s idea of expected utility received little, largely negative, attention between 1738 and 1943. On the other hand, the publication of The Theory of Games and Economic Behavior ignited a veritable explosion of interest in expected utility and theories of attitudes towards risk. By the mid-1970s, after the publication of Pratt’s “Risk Aversion in the Small and the Large” (1964) and the series of papers in the Journal of Economic Theory by permutations of Diamond, Rothschild, and Stiglitz (1970, 1971, 1974) neo-cardinalist theories of expected utility and risk
preferences took the driver’s seat. Of course, taking a commanding place in economic orthodoxy does not necessarily mean that the theory has also created a useful empirical structure for explaining or predicting “Economic Behavior” (as the second part of the Von Neumann and Morgenstern title suggests). That is the point of our book.
However, before turning to that larger theme, it may be useful to contrast why
Bernoulli’s foundational exposition received so little attention for more than
two hundred years, and why Von Neumann—Morgenstern had such a revolutionizing impact on this aspect of economics.
To begin with, it seems commonplace to think of Bernoulli as modeling some concave utility function. In fact, he explicitly suggested only a specific — logarithmic — concave function. Even with this more general (concave) interpretation, there were (and remain) empirical problems with this model in terms of such things as the existence of gambling and the form of insurance contracts. In the case of gambling, Bernoulli argues (29) that "indeed this is Nature's admonition to avoid the dice altogether."

Schlee (1992) argues that until the arrival of the early marginal-utility theorists such as Jevons in the late nineteenth century, "few economists appear to have noticed" Bernoulli's theory. Jevons (1871 [1931], 160)¹ cited Bernoulli's work and asserted, "It is almost self-evident that the utility of money decreases as a person's total wealth increases." But Jevons argued that along the way to diagnosing a reasonable answer to "many important questions" (presumably including the St. Petersburg Paradox), "having no means of ascertaining numerically the variation in utility, [emphasis added] Bernoulli had to make assumptions of an arbitrary kind" (again, we presume Jevons means Bernoulli's explicit adoption of the logarithmic form from the class of concave utility functions).

But the enthusiasm for Bernoulli, in spite of this rediscovery, was attenuated by several factors. First, there was extensive discussion of how to deal with the apparent inconsistency between declining marginal utility of income and the widespread phenomenon of gambling. According to Blaug (1968), Marshall also accepted the general idea of a diminishing marginal utility of income and begged the question of building an economic theory of choice under uncertainty by simply attributing gaming at less than fair odds directly to the "love of gambling." Specifically, in a footnote on page 843 of the eighth edition of his Principles (1890 [1920])² Marshall wrote:

The argument that fair gambling is an economic blunder is generally based on Bernoulli's or some other definite hypothesis. But it requires no further assumption than that, firstly, the pleasures of gambling may be neglected; and secondly φ″(x) is negative, where φ(x) is the pleasure derived from wealth equal to x.

Furthermore, in a footnote on page 842, Marshall rederives Bernoulli's logarithmic utility function (in today's familiar notation) of

U(y) = K log(y/a)

(where y is actual income and a is a subsistence level of income). Contra Bernoulli, Marshall suggests "Of course, both K and a vary with the temperament, the health, the habits, and the social surroundings of each individual." Marshall's treatment of Bernoulli is pretty much limited to this and a discussion of the implications for progressive income taxation. Schlee, in a different and extended discussion of Bernoulli, Jevons, and Marshall, argues that Marshall implicitly accepted the idea of some kind of expected utility calculations in valuations of durable goods, although he (Marshall) claimed that "this result belongs to Hedonics, and not properly to Economics."³

Second, even as Bernoulli was being cited by Jevons and Marshall, the movement to ordinal utilities soon gained traction, and indifference curves were described by utility functions defined only up to an increasing monotonic transformation. Changes of marginal utility per se were undefined in the new ordinal paradigm.

Moreover, there was Menger's demonstration (1934) that Bernoulli did not actually solve the St. Petersburg Paradox; he merely solved one member of the family of such paradoxes. Even with Bernoulli's logarithmic function, there exist other St. Petersburg gambles with appropriately increasing payments that would lead to the same conundrum that Bernoulli sought to resolve. A full solution required even more restrictive assumptions on the utility function (boundedness would be sufficient).⁴
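To see Menger's point concretely, consider a worked variant of our own construction (not Menger's original example): keep the probabilities 2^(-n) but let the stage-n payoff be e^(2^n) ducats. Then the expected log utility itself diverges:

```latex
E[\ln m] = \sum_{n=1}^{\infty} 2^{-n}\,\ln\!\left(e^{2^{n}}\right)
         = \sum_{n=1}^{\infty} 2^{-n}\,2^{n}
         = \sum_{n=1}^{\infty} 1 = \infty ,
```

so the logarithmic Bernoulli function faces the same conundrum that linear valuation faced in the original gamble, and only a bounded utility function escapes every gamble of this kind.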
Finally, Bernoulli’s “solution” of a logarithmic utility function was not the only one put forth to the St. Petersburg Paradox. As Bernoulli himself noted, Cramer offered one approach from an entirely different direction, suggesting
that the probabilities on the highest (outlying) payoffs were (either objectively or subjectively) treated as zero: "Let us suppose .... better yet, that I can never win more than that amount, no matter how long it takes before the coin falls with the cross upward."⁵ In 1977 Shapley would reiterate a similar objection, in what he called the missing (and weak) link of the paradox: "One assumes that the subject believes the offer to be genuine, i.e., believes that he will actually be paid, no matter how much he may win" (440) (emphasis in the original). Shapley goes on to describe this assumption as "empirically absurd" (442).⁶ One can argue that there was, in fact, no obvious empirical evidence to favor Bernoulli's solution over Cramer's (Shapley's). In fact, in terms of ad-hoc introspection, the conjecture that no gambler could afford to lose the infinitely large payoffs in the right tail of the St. Petersburg Paradox hardly stretches credulity. The idea that individuals treat very small probabilities as if they were zero lives on in modern risk analysis and behavioral economics. With these limitations of Bernoulli's proposal, how can we explain its extraordinary revival after the appearance of the second virtual bookend, Von Neumann and Morgenstern's Theory of Games and Economic Behavior?

What can be thought of as a "neo-cardinalist" revolution was launched in the mid-twentieth century with scant reference to the ordinalist foundation of neoclassical economics built over the preceding decades.⁷ Within a quarter of a century, this revolution became, in at least some realms of economics, the new orthodoxy. One obvious argument is that the Von Neumann–Morgenstern (hereafter VNM) axiomatization was not tied to any specific functional form that was required to hold constant across all individuals. In fact, it didn't require even the most minimal assumption of concavity of the derived expected utility function. (A more detailed mathematical discussion of the EUT axioms and several related issues is included in the Appendix to this chapter). As Blaug suggests (336), this led Friedman and Savage (1948) and also Markowitz (1952) to play a game of "my utility function has more inflection points than your utility function" in terms of purported empirical explanatory power, albeit without any indication of magnitudes on the x-axis of their free-hand drawings.

A Friedman–Savage individual (see Figure 2.1) had concave portions of his utility of income at relatively low and high levels of the income scale, explaining "both the tendency to buy insurance and the tendency to buy lottery tickets" (Blaug, 336). But it also had the uncomfortable implication that the very "poor" and the very "rich" will not accept fair bets. Friedman and Savage (288-289) seem to conceive of the convex section in the middle of the utility function as being rather large, because they motivate it in terms of a worker being able to change social classes.

Figure 2.1 A Friedman–Savage Bernoulli function (schematic redrawn by authors based upon figure 3 in Friedman and Savage [1948, 297]).

Markowitz proposed that the utility function had three instead of two inflection points: a convex section followed by a concave section to the right of the origin, and a concave section followed by a convex section to the left of the origin (see Figure 2.2). In addition, the "origin" on one of his utility graphs had a specific meaning for Markowitz, namely "the 'customary' level of wealth" for the individual (155). In a Markowitz utility function (quoting Blaug)

small increments in income yield increasing marginal utility, but large gains in income yield diminishing marginal utility; this accounts for people's reluctance to accept large but their eagerness to accept small fair bets. On the other hand, small decrements in income yield increasing marginal disutility, while large losses in income yield diminishing marginal disutility; hence the eagerness to hedge against small losses but a devil-may-care attitude to very large losses.⁸

Figure 2.2 A Markowitz Bernoulli function (schematic redrawn by authors based upon figure 3 in Markowitz [1952, 152]).
We will return to a discussion of the Friedman—Savage and Markowitz utility functions in several of the later chapters. While Friedman and Savage and also Markowitz rejected the “shape” of the utility function of wealth proposed by Bernoulli, a standard convention that we will adopt is to refer to a utility function over some version of
aggregate final states (including, but not limited to money, wealth, income,
an aggregate measure of overall consumption, and so forth) that is also consistent with an axiomatic system of expected utility to be a “Bernoulli function.” What we now know as the orthodox treatment of an individual Bernoulli utility function was cemented in place by the quick succession of Pratt (1964), Rothschild and Stiglitz (1970, 1971), and Diamond and Stiglitz (1974). We note here two issues: (1) this high orthodoxy embraces the idea of at least a quasi-cardinal concept of utility because key parameters of the
utility function, such as various measures of “risk aversion,” are robust with
respect to some, but not all, monotonic transformations (see the Appendix to this chapter for more specifics), and (2) the equivalence relationships between risk aversion and underlying probability distributions highlight concepts of risk that were tied to the dispersion properties of the probability distribution of outcomes (“mean preserving” increases; see the discussion in the Appendix to this chapter). Likewise, Markowitz, through a different reasoning, ends up defining risk as dispersion. An interesting, but perhaps unintentional com-
mentary on this state of affairs can be gleaned from Bernstein’s popular book
on risk, Against the Gods (1996). The dramatic flow of the book builds through the development of the concepts of probability, the St. Petersburg Paradox,
normal distributions, and, eventually, Markowitz’s portfolio theory of mean and variance of returns. But the narrative of nonmathematicians who have to deal with risk on a daily basis is intertwined in Bernstein’s chapters, and it tells a story quite different from “risk as dispersion.” Specifically, in actual practice (as well as in at least the regulation, insurance, and credit aspects of economics) risk equaled harm: “house breaking, highway robbery, death by
gin-drinking, the death of horses" (90), fire (91), the loss of ships, and losses from hurricanes, earthquakes, and floods (92). On page 261, he discusses the investment philosophy of a former manufacturing executive turned trust manager: "In the simplest version of this approach, risk is just the chance of losing money." In the extensive literature on insurance (e.g., Williams [1966]) this idea of "pure risk" was clearly distinguished from "speculative risk" in which uncertainty pertains to the possibility of losses as well as gains. This distinction, well-established until the 1960s, between pure risk and speculative risk has been ignored, if not lost altogether, in many parts of the recent eco-
nomics literature distracted by the algebraic convenience of dealing with the
combination of second moments and quadratic utility functions. Meanwhile,
back in the academy, it is important to note that the neo-cardinalists did not displace the ordinalists from their commanding position in neoclassical economics. Instead, an interesting, inconsistent, but ultimately convenient live-and-let-live territorial division developed between the
two. Ordinalism controls the center of orthodoxy in theories of consumer choice under certainty, while neo-cardinalism holds sway with regards to choice under risk or under uncertainty. Never mind that much consumer
choice is made under uncertainty. After the mathematization of economics in
which logical consistency is the commanding value, it is amazing that gradu-
ate seminars just skip over the inconsistency. The consensus view is well stated by Blaug (328):
These two types of utility scales [fully ordinal versus cardinal but unique up to a linear transformation] differ strikingly in one respect. Scales that are monotone transformations of each other vary together in the same direction: this is the only property they have in common. But scales that
are linear transformations of each other assert something much stronger: when the interval differences of one scale increase or decrease successively, the interval differences of the other scale increase or decrease successively to the same extent. ... Measurability up to a linear transformation involves knowledge not only of the signs of the first differences of the utility scales but also of the signs of the second differences: the first differences tell us about the direction of preference; the second differences tell us about the intensity of preference [emphasis in the original].
Of course the measures of "risk aversion" (the Arrow–Pratt measures being
the widely acknowledged gold standard) depend crucially upon the second derivative of the expected utility function. A specific mathematical treatment is included in the Appendix to this chapter. In the neoclassical model of consumption, the broader concept of decreasing marginal utility? had been subsumed by the ordinal condition of strictly convex indifference curves (implying a declining marginal rate of substitution for any two goods). This condition is not enough to imply a concave utility function over money. Indeed, in certain cases the marginal utility of income (in effect, the Lagrangean multiplier on the budget constraint in the stan-
dard neoclassical set-up) can be found to be constant with respect to income.
But the more pressing question for neoclassical economics was not to debate Bernoulli's concerns but to describe those conditions in which the marginal utility of income could be constant with regards to all prices and income (Samuelson [1947]).
To appreciate the depth of this coexistence-by-mutual-nonrecognitionof-inconsistencies, consider the standard texts of graduate microeconomics.
Samuelson is straightforward: “Nothing at all is gained by the selection of indi-
vidual cardinal measures of utility” (173). Another thoroughly typical example is the chapter entitled “Utility Maximization” (95) in Varian’s graduate text (1992): “Utility function is often a very convenient way to describe preferences, but it should not be given any psychological interpretation. The only relevant feature of a utility function is its ordinal character [emphasis added].” Yet
Varian’s treatment of choice under uncertainty is equally orthodox in the VNM
paradigm, concluding with the derivation that “an expected utility function is unique up to an affine transformation” which is, of course, a less restrictive assumption than a full ordinalism property of uniqueness up to any mono-
tonic transformation. Friedman and Savage (1952, 464) have asserted that Bernoulli functions “are not derivable from riskless choices.” Furthermore, there are many other areas in which, contra Samuelson, progress in microeco-
nomic modeling has been enabled only by discarding some of the restrictions of pure ordinalism in favor of an additive separable utility function of the form U_i(x_i, y_i) = x_i + v_i(y_i). Examples include mechanism design for public goods provision and the theory of bidding in auctions. And, of course, a large part of the theory of the firm assumes that firms maximize expected profits, which is to say that they have a VNM utility function that is linear in payoffs. From Bernoulli through Markowitz and subsequently, the empirical con-
tent of these assertions about the shape of typical utility functions, and hence about an individual’s neo-cardinalist measure of risk aversion (how these vary from individual to individual, and whether they are stable across individuals) was largely based upon appeals to assertions in the aggregate or what might
charitably be called “stylized facts” (or, less charitably, “urban legends”). This is somewhat surprising because this is not what is at the heart of VNM. Indeed, a second explanation for the power of VNM to change the intellectual playing field regarding expected utility was that they presented not simply a toy model for armchair theorists but also a process for mapping out the utility function of any given individual in a given place and under a given set of ceteris-paribus circumstances. If we
are to argue that some of the influence of Von Neumann and
Morgenstern derives from their assertion of empirical content of individual utility maps constructed from their method, it could not take long before someone would undertake to make such maps (rather than simply appeal to
broad assertions about the behavior of people shopping or gambling as seen from looking out the windows of a faculty office). Indeed, the task was taken
up in earnest almost immediately, in the 1950s. Pioneering decision theorists Duncan Luce and Howard Raiffa, in their book Games and Decisions (1957) have a section on "Experimental Determinations of Utility." The paper they list as having the earliest publication date is the report of the laboratory experiments of Mosteller and Nogee (1951), henceforth MN. MN elicited data from payoff-motivated choice experiments over sample "poker" hands to construct Bernoulli/VNM utility functions. One interesting finding of MN (386) was a demographic difference. Harvard student subjects tended to have utility functions that were "conservative" (i.e., concave), whereas National Guard subjects tended to have utility functions that were "extravagant" (i.e., convex). Although the topic is beyond the scope of this chapter, it is worth noting for further reference Edwards's 1953 paper which was motivated by MN but which continued on into the realm of determination of the subjects' subjective probabilities rather than simply their utility over outcomes.
Also, at about the same time that Games and Decisions was in production, Raiffa was working with a graduate student, C. Jackson Grayson, whose dissertation, published in 1960 as Decisions Under Uncertainty: Drilling Decisions
by Oil and Gas Operators is an underappreciated classic in the realm of using similar techniques to estimate individual VNM utility functions in the field (in this case, among independent Oklahoma oil and gas company owners and their
employees). Grayson used linked hypothetical questions over different lotteries derived essentially as VNM had suggested. He obtained between 8 and 30 estimated utility points for each of 11 different people. Today, one would undoubtedly use a statistical process to create a “best fit” among the data.
Grayson apparently used a free-hand drawing to approximate a “best fit.” In the introduction to this part of his book, Grayson says that he believes that this experiment in extracting utility functions met with “mixed success” (297). Nevertheless, there are some notable regularities and narratives that come from this early effort. First, the curves estimated from choices made by various individuals showed widely different patterns of concavity, convexity, and inflection points. Perhaps surprisingly, given the results of later claims of an opposite characterization (such as those of Kahneman and Tversky [1979]), many of the individuals can roughly be described as risk-taking in gains and risk-averse in losses. Second, for one individual, Grayson had a chance to investigate the stability of the revealed utility function across time. After the passage of more than three
months in-between, the individual appears to have generated a more “conservative” (Grayson’s term for apparently more concave) utility function. Grayson (309) argues, albeit on the basis of this single subject, “This is an illustration of the ephemeral nature of utility curves.” A final fascinating feature of Grayson’s analysis of the usefulness of his labors was the frequency with which the individuals reported that they had difficulty making these (hypothetical) decisions as purely abstract constructs disassociated from the reality of their oil and gas exploration business: “There is no such thing as certainty in drilling wells.” “Who gets the intangibles?” (This is a question about the obvious but often overlooked issue in these elicitation processes as to whether gains are to be considered as before-tax or after-tax income). “Is it an isolated area?” “Is it gas or
oil?” (These are questions that relate to associated contracts and constraints).
“Am I the operator?” (This is an important question not only about entrepre-
neurial control but also about already embedded risk sharing). We will return to the importance of these questions from Grayson’s subjects in Chapter 6.
This chapter takes us up to the early 1960s. In the next chapter, we discuss the first wave of new theoretical work on utility and risk beginning in the period of the mid 1960s—1970s. This theoretical work in turn motivated a new effort at empirical mapping. Instead of the “mapping of utility functions” approach discussed here, empirical researchers turned to the elicitation of parametric representations of utility functions. The question we will pose next is: “To what extent did these elicitations yield dependable estimates of a
person’s propensity to choose under uncertainty, or to bear risk?”
Appendix. Mathematical details

We collect standard mathematical formulations of key concepts introduced in Chapter 2. Later chapters will draw on them, but readers uninterested in mathematical formalities should feel free to skip this Appendix.

We begin with some definitions. A lottery L = (M, P) is a finite list of monetary outcomes M = {m_1, m_2, ..., m_k} ⊂ ℝ together with a corresponding list of probabilities P = {p_1, p_2, ..., p_k}, where p_i ≥ 0 and Σ_{i=1}^{k} p_i = 1. The symbol ℝ denotes the real numbers. The space of all lotteries is denoted ℒ.

A utility function over monetary outcomes (henceforth called a Bernoulli function) is a strictly increasing function u: ℝ → ℝ. A utility function over lotteries is a function U: ℒ → ℝ. The expected value of lottery L = (M, P) is E_L m = Σ_{i=1}^{k} p_i m_i. Given a Bernoulli function u, the expected utility of lottery L = (M, P) is E_L u = Σ_{i=1}^{k} p_i u(m_i).

Preferences ≿ over any set refer to a complete and transitive binary relation. A utility function U represents preferences ≿ if x ≿ y ⟺ U(x) ≥ U(y) for all x, y in that set, where the symbol "⟺" means "if and only if." In particular, preferences ≿ over ℒ are represented by U: ℒ → ℝ if L ≿ L′ ⟺ U(L) ≥ U(L′) for all L, L′ ∈ ℒ.

Preferences that satisfy suitable conditions automatically satisfy the expected utility property, and thus are representable via a Bernoulli function. Over the decades since the original results of Von Neumann and Morgenstern, many different sets of conditions have been shown to be sufficient. Here we mention the set used in a leading textbook, Mas-Colell, Whinston, and Green (1995). It consists of four axioms that an individual's preferences ≿ over ℒ should satisfy:

1. Rationality: Preferences ≿ are complete and transitive on ℒ.
2. Continuity: The precise mathematical expressions are rather indirect (they state that certain subsets of real numbers are closed sets), but they capture the intuitive idea that U doesn't take jumps on the space of lotteries. This axiom rules out lexicographic preferences.
3. Reduction of Compound Lotteries: Compound lotteries have outcomes that are themselves lotteries in ℒ. By taking the expected value, one obtains the reduced lottery, a simple lottery in ℒ. The axiom states that the person is indifferent between any compound lottery and the corresponding reduced lottery.
4. Independence: Let L, L′, L″ ∈ ℒ and let α ∈ (0, 1). Then L ≿ L′ if and only if αL + (1 − α)L″ ≿ αL′ + (1 − α)L″.
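As a small illustration of these definitions (our own sketch; the particular lottery and the choice of a logarithmic Bernoulli function are arbitrary), the fragment below computes the expected value and expected utility of a two-outcome lottery:

```python
import math

def expected_value(outcomes, probs):
    # E_L m = sum_i p_i * m_i
    return sum(p * m for m, p in zip(outcomes, probs))

def expected_utility(outcomes, probs, u):
    # E_L u = sum_i p_i * u(m_i) for a strictly increasing Bernoulli function u
    return sum(p * u(m) for m, p in zip(outcomes, probs))

M = [50.0, 150.0]   # monetary outcomes of an illustrative lottery
P = [0.5, 0.5]      # corresponding probabilities (nonnegative, summing to one)

u = math.log
print(expected_value(M, P))        # 100.0
print(expected_utility(M, P, u))   # about 4.46 = 0.5*ln(50) + 0.5*ln(150)
print(u(expected_value(M, P)))     # about 4.61; the gap reflects u's concavity
```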
For a twice-differentiable Bernoulli function u with u′(m) > 0, the Arrow–Pratt coefficient of absolute risk aversion is A(m) = −u″(m)/u′(m), and the coefficient of relative risk aversion is R(m) = m·A(m) = −m·u″(m)/u′(m). A positive affine transformation v(m) = a + b·u(m), with b > 0, represents the same preferences over lotteries as u. Note also that the functions A(m) and R(m) are the same for v as they are for u. On the other hand, consider a positive and monotonic but nonlinear transformation, for example w(m) = u(m)², with u(m) = m^r for 0 < r < 1. Such a transformation preserves the ranking of sure outcomes but changes A(m) and R(m), and with them the ranking of some lotteries.

The constant relative risk aversion (CRRA) family u(m; r) = m^(1−r)/(1 − r) has constant relative risk aversion R(m) = r for any r ≠ 1. Bernoulli's original function u(m; 1) = ln m has constant relative risk aversion R(m) = 1, and readers familiar with L'Hospital's rule can verify that it fits snugly in the CRRA family. The constant absolute risk aversion (CARA) family is u(m; a) = −e^(−am) for a > 0; it is straightforward to check that it indeed has constant absolute risk aversion A(m) = a. In view of the paragraph before the previous one, readers who prefer to keep utility positive can apply a positive affine transformation without changing A(m) or R(m). Other parametric families can be defined, but they have no common names.

A fourth-order Taylor expansion of a sufficiently smooth Bernoulli function u around a point z can be written

u(z + h) = u(z) + u′(z)h + (1/2!)u″(z)h² + (1/3!)u‴(z)h³ + (1/4!)u⁗(z)h⁴ + R°(z, h),   (2A.4)

where R°(z, h) = (1/5!)u⁽⁵⁾(y)h⁵ for some point y between z and z + h. In equation (2A.4), set z = m̄ = E_L m and m = z + h, and take the expected value of both sides. The linear term disappears because E_L h = E_L(m − m̄) = m̄ − m̄ = 0. Hence we obtain
E_L u(m) = u(m̄) + (1/2!)u″(m̄)Var[L] + (1/3!)u‴(m̄)Skw[L] + (1/4!)u⁗(m̄)Kur[L] + E_L R°,   (2A.5)

where Var[L], Skw[L], and Kur[L] denote the second, third, and fourth central moments of the lottery L.
That is, expected utility of the lottery is equal to the utility of the mean outcome plus other terms that depend on the derivatives of u and the corresponding moments of L. The leading such term is proportional to the variance of the lottery and to the second derivative of u evaluated at the mean of the lottery. Of course, that second derivative is negative for a strictly concave function u, and as just noted, the Arrow–Pratt coefficient of absolute risk aversion A(m) simply changes its sign and normalizes by the first derivative (to make the coefficient the same for all equivalent u's). That is, variance reduces expected utility to the extent that u is concave, as measured by A(m). As we will see in Chapter 5, some authors postulate signs for the third and fourth derivatives and interpret them in terms such as prudence and temperance.
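A quick numerical check of the leading terms in equation (2A.5) can be done as follows (our own sketch; the CRRA coefficient r = 2 and the symmetric two-point lottery are arbitrary illustrative choices, and the skewness term vanishes by symmetry):

```python
r = 2.0
u  = lambda m: m**(1 - r) / (1 - r)   # CRRA Bernoulli function, here u(m) = -1/m
u2 = lambda m: -r * m**(-r - 1)       # its second derivative

mean, spread = 100.0, 10.0
outcomes, probs = [mean - spread, mean + spread], [0.5, 0.5]

exact = sum(p * u(m) for m, p in zip(outcomes, probs))
variance = sum(p * (m - mean)**2 for m, p in zip(outcomes, probs))
approx = u(mean) + 0.5 * u2(mean) * variance   # u(mean) plus the variance term

print(exact)    # about -0.0101010
print(approx)   # about -0.0101000; the small remaining gap is mostly the fourth-moment term
```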
Notes

1 Accessed on May 31, 2013 at http://socserv.mcmaster.ca/econ/ugem/3113/jevons/TheoryPoliticalEconomy.pdf.
2 Accessed on May 31, 2013 at http://files.libertyfund.org/files/1676/Marshall_0197_Bk.pdf.
3 This is a quote from Marshall as stated on page 736 of Schlee.
4 Samuelson (1977) devotes a section of his article to extending Menger-like "Super-St. Petersburg" analysis.
5 The quote is in D. Bernoulli's article but is in fact from Cramer in a letter to Bernoulli's cousin.
6 Samuelson (1977, 27, emphasis in the original) says that he also agrees with those who question the infinite wealth assumption, "The infinite game cannot feasibly be played by both parties (or either party) whatever their utilities and desires. So the paradox aborts even before it dies."
7 The term "neo-cardinalist" is used to distinguish the approach from the previous "cardinal" utility theories in which the cardinal scale in "utils" was integral. A "neo-cardinalist" utility function is invariant to some transformations, but not as many as those of the ordinalist orthodoxy. See the Appendix to this chapter for more specific details.
8 For a complete description of the hypothetical choices Markowitz offered to his "middle income acquaintances" see page 153 of his 1952 article.
9 The roots are in Bentham but diminishing marginal utility reached full development from Jevons; see Blaug's (1968) excellent treatment in his chapter 8.
10 On the other hand, the EUT's conclusion is quite strong, and not consistent with some actual choice data. As noted in later chapters, one reaction is to accommodate some of the anomalous data by weakening the axioms, usually the third or fourth.
Bibliography

Bernoulli, D. (1738) "Exposition of a New Theory on the Measurement of Risk," trans. Louise Sommer (1964) Econometrica 22: 23-26.
Bernstein, P. L. (1996) Against the Gods: The Remarkable Story of Risk. New York: Wiley.
Blaug, M. (1968) Economic Theory in Retrospect. Homewood, IL: Richard D. Irwin, Inc.
Diamond, P., and Stiglitz, J. (1974) "Increases in Risk and Risk Aversion," Journal of Economic Theory 8: 337-360.
Edwards, W. (1953) "Probability Preferences in Gambling," American Journal of Psychology 66: 349-364.
Friedman, M., and Savage, L. J. (1948) "The Utility Analysis of Choices Involving Risk," Journal of Political Economy 56: 279-304.
Friedman, M., and Savage, L. J. (1952) "The Expected-Utility Hypothesis and the Measurability of Utility," The Journal of Political Economy 60(6): 463-474.
Grayson, C. J. (1960) Decisions Under Uncertainty: Drilling Decisions by Oil and Gas Operators. Cambridge, MA: Harvard University Press.
Jevons, W. S. (1871; 5th edn. 1931) The Theory of Political Economy. London: Macmillan.
Kahneman, D., and Tversky, A. (1979) "Prospect Theory: An Analysis of Decision under Risk," Econometrica 47: 263-291.
Luce, D., and Raiffa, H. (1957) Games and Decisions. New York: John Wiley and Sons.
Markowitz, H. (1952) "The Utility of Wealth," Journal of Political Economy 60(2): 152-158.
Marshall, A. (1890; 8th edn. 1920) Principles of Economics. London: Macmillan.
Mas-Colell, A., Whinston, M. D., and Green, J. R. (1995) Microeconomic Theory. Oxford: Oxford University Press.
Menger, C. (1934) The Role of Uncertainty in Economics; trans. Wolfgang Schoellkopf in M. Shubik (ed.) (1967) Essays in Honor of Oskar Morgenstern. Princeton, NJ: Princeton University Press.
Mosteller, F., and Nogee, P. (1951) "An Experimental Measurement of Utility," Journal of Political Economy 59: 371-404.
Pratt, J. W. (1964) "Risk Aversion in the Small and in the Large," Econometrica 32: 122-136.
Rothschild, M., and Stiglitz, J. (1970) "Increasing Risk I: A Definition," Journal of Economic Theory 2: 225-243.
Rothschild, M., and Stiglitz, J. (1971) "Increasing Risk II: Its Economic Consequences," Journal of Economic Theory 3: 66-84.
Samuelson, P. A. (1947) Foundations of Economic Analysis. Cambridge, MA: Harvard University Press.
Samuelson, P. A. (1977) "St. Petersburg Paradoxes: Defanged, Dissected, and Historically Described," Journal of Economic Literature 15: 24-55.
Schlee, E. E. (1992) "Marshall, Jevons, and the Expected Utility Hypothesis," History of Political Economy 24: 729-744.
Shapley, L. (1977) "The St. Petersburg Paradox: A Con Game?" Journal of Economic Theory 14: 439-442.
Varian, H. (1992) Microeconomic Analysis, 3rd edn. New York: W.W. Norton and Co.
Von Neumann, J., and Morgenstern, O. (1943; 5th edn. 1953) The Theory of Games and Economic Behavior. New York: John Wiley and Sons.
Williams Jr., C. A. (1966) "Attitudes Toward Speculative Risks as an Indicator of Attitudes Toward Pure Risks," The Journal of Risk and Insurance 33(4): 577-586.
3 Measuring individual risk preferences
In the physical sciences, the prerequisites for admission to scholarly discourse include not only the precise definition of important concepts but also acceptable methods by which key variables are to be measured. Physicists, for example, have not, for some time, spent a great deal of energy debating the meaning of mass or the methods for measuring it. When a lump of platinum sitting in a controlled environment in Paris is no longer sufficiently stable for increasingly accurate technologies of measurement, physicists quickly propose, assess, and agree on alternative definitions, and then move
on. This chapter is an exploration of how a similar enterprise in economics has played out, specifically, the enterprise to furnish the empirical content to Von Neumann and Morgenstern’s theory on the concepts and measurement of decision making under risk and the construction of the corresponding Bernoulli function. Mosteller and Nogee (1951), Edwards (1953), and Grayson (1960) were among the earliest to seek to identify empirical content of the VNM theory. They asked subjects, in either laboratory or field, to choose one from a pair
of lotteries. Combining the choices across many pairs, the researcher sought a
Bernoulli function for each subject that best explained that subject’s choices — that is, to identify an increasing function whose expected value was larger at
each chosen lottery than at the lottery not chosen. Researchers then examined
these functions for evidence of concavity, convexity, inflection points, etc. These early investigations did not use statistical procedures, and instead
relied mainly on connect-the-dots sketches. Such constructions, together
with casual thought experiments, inspired Friedman and Savage (1948) and Markowitz (1952) to sketch the general shapes discussed in the previous chapter. A new wave of theoretical work on utility, risk, and uncertainty — includ-
ing the seminal articles of Pratt (1964), Arrow (1965), Rothschild and Stiglitz
(1970, 1971), and Diamond and Stiglitz (1974) — shifted attention to more economical parametric representations of Bernoulli functions. The focus was on the coefficients of absolute risk aversion (ARA) and relative risk aversion (RRA) discussed in the Appendix to Chapter 2. Pratt referred to these coefficients as “local” measures of risk aversion, or “risk aversion in the small.” This work addressed issues not only about the measures of risk aversion themselves (e.g., “When is one person more risk-averse than another?”) but also about relationships between risk-aversion parameters and other economic phenomena, such as willingness to pay insurance premia. Naturally, the new wave of theory had a major impact on subsequent empirical work. Instead of sketching Bernoulli functions for specific individuals, applied researchers began to use statistical techniques such as least squares or maximum likelihood criteria to estimate ARA or RRA parameters for an individual or for a specified population of individuals. Of course, such esti-
mation of “risk aversion in the small” required antecedent assumptions about “risk aversion in the large,” regarding the overall shape or parametric form of the Bernoulli function, e.g., that it belonged to the CARA or CRRA families also described in the Appendix to Chapter 2. What progress have we seen in creating empirical content for the VNM theory? Choice data processed via either the old sketching procedures or the newer statistical procedures are guaranteed to yield some Bernoulli function that best fits the data. The scientific question is whether this function can predict behavior other than the data from which it has been estimated. In other
words, what counts is the out-of-sample predictive power of the estimated Bernoulli function. In this chapter, we examine the variety of methods and instruments used to gather choice data, and the predictive power and consistency of the fitted Bernoulli functions. It is a long and tangled tale, with several surprises along the way.
3.1 Elicitation instruments
This section presents the panoply of procedures (or instruments or institutions) used to elicit individual risk parameters. The overall theme of the section is one of prior empirical failures motivating a search for better procedures — and asking whether any procedure has succeeded in isolating innate Bernoulli functions.

Binswanger's lottery menu
Binswanger’s pioneering fieldwork in India is a natural starting point. Binswanger (1980) recounts in detail the procedures and results used in eliciting risk-preference estimates from farmers in India in the late 1970s. He used two different procedures. One, adapted from Dillon and Scandizzo (1978), elicits certainty equivalents. In the second, of his own design, subjects choose one of seven lotteries from the list shown in Table 3.1. Money payoffs are given in Indian rupees. Only one of the two dominated lotteries (D* or D) was included in the menu presented to any subject (D* is dominated by a mix of B and C, and D by a mix of C and E). Of the eight runs, five shrank the money payoffs given in Table 3.1 by a factor of 100 (e.g.,
Re. 0.5 instead of Re. 50) in which all 330 subjects participated. Two runs shrank the payoffs by a factor of ten (with 118 participants), and one run used the payoffs as stated in the table (with 118 participants). Then 118 subjects participated in an additional hypothetical payment trial that multiplied the payoffs in the table by a factor of ten — but obviously did not actually pay them out. The results are summarized in Table 3.2. Any choice from O to E is consistent with maximizing a concave Bernoulli function. The preponderance of such choices in this experimental procedure has long been interpreted as convincing evidence that innate risk preferences exist, can be measured, and that people are typically risk averse.
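The arithmetic behind this interpretation is easy to check. Below is a minimal sketch (Python; the CRRA parameterization is ours, for illustration only, and is not Binswanger's estimation procedure) that computes the expected value and standard deviation of each lottery in Table 3.1 and the menu choice implied by a given curvature.

# Payoffs from Table 3.1: (payoff if heads, payoff if tails) in rupees, decided by a fair coin.
menu = {
    "O": (50, 50), "A": (45, 95), "B": (40, 120), "D*": (35, 125),
    "C": (30, 150), "D": (20, 160), "E": (10, 190), "F": (0, 200),
}

def ev_sd(lo, hi):
    """Expected value and standard deviation of a 50/50 lottery."""
    return 0.5 * (lo + hi), 0.5 * abs(hi - lo)

def crra_eu(lo, hi, rho):
    """Expected CRRA utility, u(x) = x**(1 - rho) / (1 - rho), for 0 <= rho < 1.
    rho = 0 is risk neutrality; larger rho means a more concave Bernoulli function."""
    u = lambda x: x ** (1.0 - rho) / (1.0 - rho)
    return 0.5 * (u(lo) + u(hi))

for name, (lo, hi) in menu.items():
    print(name, ev_sd(lo, hi))

for rho in (0.0, 0.5, 0.9):                      # hypothetical curvature values
    pick = max(menu, key=lambda k: crra_eu(*menu[k], rho))
    print("rho =", rho, "-> choose", pick)

Moving down the menu trades a higher expected value against a wider spread of outcomes, which is what the concave-Bernoulli interpretation of interior choices rests on.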
Table 3.1 The lottery menu of Binswanger (1980)

Lottery    Payoff if heads    Payoff if tails
O          50                 50
A          45                 95
B          40                 120
D*         35                 125
C          30                 150
D          20                 160
E          10                 190
F          0                  200

Note: Money payoffs (resulting from the toss of a fair coin) are given in Indian rupees. Subjects chose among seven lotteries, excluding either D or D*.
Table 3.2 Lottery choice frequencies in Binswanger (1980)

Stakes               O (more RA)   A      B      C      E (less RA)   F (RN/RS)   Inefficient D/D*   Nobs.
0.5                  1.7           5.9    28.5   20.2   15.1          18.5        10.1               119
0.5                  1.7           8.1    14.5   29.3   21.3          16.6        8.5                235
5                    0.9           8.5    25.6   36.8   12.0          8.5         7.7                118
50                   2.5           5.1    34.8   39.8   6.8           1.7         9.3                117
500 (hypothetical)   2.5           13.6   51.7   28.8   0             0.9         2.5                118

Note: Lottery choice frequencies are given as a percentage of the number of observations (Nobs.). RA = risk averse, RN = risk neutral, and RS = risk seeking. The first row corresponds to experiment (“game”) 2 from Binswanger (1980), second row to experiments 4 and 5 (pooled), third row to experiment 7, fourth to experiment 12, and fifth to experiment 16.
Such interpretations get ahead of the evidence, and several caveats are in order. First, recall that Binswanger also used a preexisting procedure to elicit certainty equivalents. He notes that those results were very different from the lottery menu results for the same subjects, and moreover that the results varied from one trial to the next depending on who conducted the trial. One might conclude that there was a problem with the Dillon and Scandizzo certainty-elicitation procedure, but then one should also note that five of the seven available options in the Binswanger list procedure were consistent with risk aversion, so even a randomly
choosing robot let loose on the list would likely appear to be risk averse. (The general issue of list composition will recur later in this chapter.) Second, Binswanger performed regressions of risk preferences (an ordinal dependent variable derived from the lottery choices in the experiment) against theoretically suggested independent variables (such as asset ownership and tenant farmer status), along with numerous control variables (including the percentage of the households composed of working age adults, salary, land rented, early adoption/not of farming techniques, assets, schooling, village dummies, etc.). The estimated coefficients do not generally support the predictions from microeconomic models of risky farming decisions. For example, given the literature on tenant farming as a risk-sharing device, one would have expected the “land rented” independent variable to be associated with greater estimated risk aversion. Instead, the tenants registered lower estimated risk aversion than did landlords at low stakes, and indistinguishable estimates at high stakes. Analogous failures of compound hypotheses were observed for the “gambler/ not,” “percent working age adults,” and “assets” independent variables. Third, the independent variable which most consistently correlated with the ordinal risk measure turned out to be “luck” — that is, past coin flip realizations during earlier trials of the Binswanger procedure. This registered as statistically significant in five of the six regressions in which it was used, and was always negatively related to the degree of risk aversion. Does this suggest
some sort of learning? Reasonable people might disagree, but it looks like decision making on-the-fly. Bearing in mind all three caveats, it seems fair to conclude that the elicited risk-aversion parameters show little consistency over time and context. Binswanger (1980, 406) himself concludes:
If these results can be extrapolated to farming decisions, they suggest that differences in investment behavior observed among farmers facing simi-
lar technologies and risks cannot be explained primarily by differences in their attitudes but would have to be explained by differences in their con-
straint sets, such as access to credit, marketing, extension, etc. It is not the
innate or acquired tastes that hold the poor back but external constraints [emphasis added].
We will see in Section 3.3 below that subsequent investigators, notably Jacobson and Petrie (2007), would have even greater difficulty getting estimates from the Binswanger procedure to predict out-of-sample data.
Auctions
The baseline model of behavior in the first-price sealed-bid (FPSB) auction (Vickrey [1961]) assumes that bidders are risk neutral. A second assumption,
of independent private values, says that bidders know the distribution from which object valuations are drawn, but not each other's actual draws. Combining these two assumptions allows the derivation of a Bayes–Nash equilibrium prediction of what each bidder in an auction would actually do, conditional on his own valuations of the object being auctioned. An isomorphism with the Dutch auction can also be established, such that one
would expect similar bidding behavior to the first-price auction in similarly parameterized versions of the Dutch auction. (The isomorphism is a key part of a general result known as revenue equivalence.) Clear-cut predictions such as these form an ideal place to apply the “institutions, environment, agents” framework in a recursive loop between theory and experiment, as discussed by Smith (1982). In such a loop, theory would
inform experimental design — for instance pointing out the need for exact matching of respective message spaces across different institutions when testing for revenue equivalence — and experimental observation would inform the next (updated) generation of theoretical models. In the experiments run by Cox, Roberson, and Smith (1982), it turned out that the data supported neither the risk-neutral Vickrey model for the FPSB auction — typical bids were too high — nor the prediction of revenue equivalence between the FPSB auction and the Dutch auction. This raised the challenge of reconciling these results with theory. Given the philosophy of science held by that research group, the logical next step was to modify theory to fit the facts, and to test the augmented theory through new experiments. One possible modification to accommodate overbidding (relative to the risk-neutral Vickrey prediction) observed in both the FPSB
auction and the Dutch auctions was to rederive the model, this
time assuming that bidders had nonlinear Bernoulli functions. Constant relative risk-averse preferences led to the constant relative risk aversion model
(CRRAM) bid function with a linear segment given by:

b(v_i, r_i) = v_lower + [(n − 1)/(n − 1 + r_i)](v_i − v_lower)   for v_i ∈ [v_lower, v*]

where bidders i = 1, ..., n have values v_i drawn independently and identically distributed (i.i.d.) from the uniform distribution on [v_lower, v_upper], and individual RRA coefficients r_i.¹
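Before turning to how this formula was used, here is a minimal sketch (Python; the numbers are invented and are not taken from any study cited here) of the two ways the linear segment gets used below: predicting a bid from a given r, and inverting an observed bid into an implied r.

def crram_bid(v, r, n, v_lower=0.0):
    """Linear segment of the CRRAM bid function; smaller r produces higher bids."""
    return v_lower + (n - 1) / (n - 1 + r) * (v - v_lower)

def implied_r(bid, v, n, v_lower=0.0):
    """Invert the linear rule to recover the r implied by an observed bid."""
    return (n - 1) * (v - v_lower) / (bid - v_lower) - (n - 1)

# Made-up example: four bidders, induced value 1.0, observed bid 0.80.
# In this parameterization r = 1 is risk neutral (bid 0.75), so r < 1 reads as risk averse.
print(crram_bid(v=1.0, r=0.75, n=4))      # 0.80
print(implied_r(bid=0.80, v=1.0, n=4))    # 0.75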
the known number of bidders to predict what i will bid. Alternatively, and more interestingly for our purposes, one can use the actual bids and realized values (along with coefficient estimates of the CRRAM model) to estimate the implied value of r; for each bidder. (This is directly analogous to calcu-
lating implied volatilities using observed option prices and the Black-Scholes
formula.) The implied r, values (especially those that help predict later bids) tend to be on the risk-averse side of neutrality. Unlike Binswanger’s focus, estimating risk aversion was not the original intent of this research program. Adding curved Bernoulli functions to a model which had originally been based on risk neutrality just happened to be the modification the authors chose as a reasonable and intuitive possibility. The resulting model (CRRAM) was one that often generated good predictions in experimental replications of the FPSB auction. For example, Schorvitz (1998) found that the bidders’ estimated r; were generally not inconsistent across different numbers n of bidders. (For most subjects one could not reject a null
hypothesis that their respective estimated r; were the same across, say, n = 4 and n= 5 implementations of the auction.) There were some failures of prediction; Cox, Smith, and Walker (1988) found that imposing nonlinear payoff transformations generated data that violated the joint implications of CRRAM and
EUT. However, due to the compound nature of the hypothesis, it was difficult
to pinpoint the source of this failure. Of course, risk aversion is not the only possible explanation of overbidding. Cox, Roberson, and Smith (1982) also considered the utility of winning the auction and the misuse of Bayes’ Rule as alternative explanations. Approaches referring to foregone payoffs (opportunity cost) were yet another possibility, explored by Engelbrecht-Wiggans (1989). Friedman (1993) proposed learning in the presence of a rather asymmetric loss function.
One way to resolve the role of risk preferences in bidding is to look across dif-
ferent sorts of auctions. Kagel and Levin (1993) replicated overbidding in FPSB auctions, but also found that running third-price auctions could yield data consistent with either risk aversion or risk seeking. For example, with an n = 5 third-price auction, bids were consistent with risk aversion, but with n = 10, the same
third-price auction generated bids consistent with risk seeking. This raises the possibility that such deviations are at least in part due to the institution in use. Section 3.2 returns to explorations of this possibility.

Becker–DeGroot–Marschak procedure
Becker, DeGroot, and Marschak (1964, hereafter BDM) invented a popular method of eliciting individuals’ Bernoulli functions. It can be described as a special type of second-price (selling) auction that pits an individual subject, endowed with a lottery, against a single “robot bidder.” The robot’s offers are
drawn from a uniform distribution with replacement, variously as pseudorandom numbers generated by a program or balls drawn from a bingo cage.
Simultaneously, the subject submits her ask. If the subject’s ask (interpreted
as a certainty equivalent for a lottery she is initially endowed with) is greater than the robot bidder’s offer, the subject keeps the lottery, and plays it out according to its stated terms. If the subject’s response is lower than that of the robot bidder, she receives a dollar amount equal to the robot bidder’s offer. The
idea is to harness the incentive compatibility of the second-price auction in
an effort to get truthful revelation of the subject's certainty equivalent for the lottery; such data then allow the researcher to calculate her risk parameters.
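The mechanics are simple enough to state in a few lines. Here is a stylized sketch of one selling-BDM round (Python; the lottery, the offer range, and the ask are all invented for illustration).

import random

def bdm_selling_round(ask, lottery, offer_high, rng):
    """One round of the selling version of BDM.
    lottery = (p, high, low) is what the subject is endowed with.
    A robot offer is drawn uniformly on [0, offer_high]; if the subject's ask
    exceeds the offer she keeps the lottery and plays it out, otherwise she
    sells and receives the offer."""
    offer = rng.uniform(0.0, offer_high)
    if ask > offer:
        p, high, low = lottery
        return high if rng.random() < p else low
    return offer

rng = random.Random(0)
# A subject endowed with a 50/50 lottery over $10 or $0 who states an ask of $4.
print(bdm_selling_round(ask=4.0, lottery=(0.5, 10.0, 0.0), offer_high=10.0, rng=rng))

Because the robot's offer does not depend on the ask, stating one's true certainty equivalent is, in theory, a dominant strategy; that is the property the procedure borrows from the second-price auction.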
Harrison (1986) pointed
out possible manipulability (and thus loss of incentive compatibility) of the BDM procedure when implemented as a sequence of lotteries where subject responses to earlier lotteries affect the lotteries offered later. Harrison (1989) presented estimates of r_i from the BDM
procedure whose distribution differs from that of estimates obtained by other researchers from the FPSB auction. Specifically, the latter estimates tend to deviate from risk neutrality in the direction of risk aversion, and the former are slightly to the risk-seeking side. This evidence tells us little about the source of the asymmetric deviations from risk neutrality in the FPSB auction, or about the source of the difference in r_i estimates between the FPSB auction
and the BDM institutions. There are also differences across variants of BDM. Kachelmeier and Shehata (1992) estimated risk attitudes from both the selling version of BDM (used by Harrison [1986], [1989] discussed above), as well as its buying version. Estimates from the selling version replicated Harrison’s (1989) findings, but the buying version generated r, on the opposite side of risk neutrality. In experiments run in 2006-2007, James (2011) extended the institutional setting of Kachelmeier and Shehata and reported results from experiments using duals to the buying and selling versions of BDM. The dual institution to the selling version of BDM endows the subject with cash, instead of a lottery, then elicits probability equivalents (instead of certainty equivalents) by means of a second-price buying auction using probability points to make bids. The dual to the buying version of BDM gives the subject no endowment, then elicits probability equivalents by means of a second-price selling auction using probability points to make offers. James found again that the direction of deviation from risk neutrality was reversed across different institutions, this time in moving between each institution and its dual. These reversals, and others, will be discussed further in
Section 3.2 below.
Holt—Laury procedure
Tiring of the controversy, occasionally acrimonious, over the inconsistency of estimates across different instruments, Holt and Laury (2002) set out to create a more reliable elicitation procedure. Their multiple row, a-little-of-Column A, a-little-of-Column B design asks subjects to mark an “X” in either the left (“A”) or right (“B”) column on each row within a spreadsheet, as in Table 3.3 below. This task seems transparent. In order to maximize any Bernoulli function, the subject would begin in column A in the first row and switch only once, at some row, to column B. For example, a person maximizing a linear (risk-neutral) Bernoulli function would switch at row five, and a slightly risk-averse person would switch at row six. What could possibly go wrong?
Table 3.3 Choice table proposed in Holt and Laury (2002)

Option A                           Option B
1/10 of $2.00, 9/10 of $1.60       1/10 of $3.85, 9/10 of $0.10
2/10 of $2.00, 8/10 of $1.60       2/10 of $3.85, 8/10 of $0.10
3/10 of $2.00, 7/10 of $1.60       3/10 of $3.85, 7/10 of $0.10
4/10 of $2.00, 6/10 of $1.60       4/10 of $3.85, 6/10 of $0.10
5/10 of $2.00, 5/10 of $1.60       5/10 of $3.85, 5/10 of $0.10
6/10 of $2.00, 4/10 of $1.60       6/10 of $3.85, 4/10 of $0.10
7/10 of $2.00, 3/10 of $1.60       7/10 of $3.85, 3/10 of $0.10
8/10 of $2.00, 2/10 of $1.60       8/10 of $3.85, 2/10 of $0.10
9/10 of $2.00, 1/10 of $1.60       9/10 of $3.85, 1/10 of $0.10
10/10 of $2.00, 0/10 of $1.60      10/10 of $3.85, 0/10 of $0.10

Note: In each line, the subject indicates a preference for either option A or B.
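The switch-row logic is easy to make concrete. The sketch below (Python; the CRRA form is one common specification, used here only for illustration) computes the row at which Option B overtakes Option A for a given curvature, and counts how many times a response vector switches columns, a diagnostic that becomes relevant shortly.

def switch_row(rho=0.0):
    """First row of Table 3.3 at which Option B has higher expected utility,
    using u(x) = x**(1 - rho) / (1 - rho) with 0 <= rho < 1 (rho = 0 is risk neutral)."""
    u = lambda x: x ** (1.0 - rho) / (1.0 - rho)
    for k in range(1, 11):
        p = k / 10.0
        eu_a = p * u(2.00) + (1.0 - p) * u(1.60)
        eu_b = p * u(3.85) + (1.0 - p) * u(0.10)
        if eu_b > eu_a:
            return k
    return None                      # never switches

def crossings(choices):
    """Number of column switches in a response vector such as list("AAAABBBBBB");
    any deterministic expected-utility maximizer produces at most one."""
    return sum(1 for a, b in zip(choices, choices[1:]) if a != b)

print(switch_row(0.0))                 # 5: the risk-neutral benchmark noted above
print(crossings(list("AAABABBBBB")))   # 3: a "multiple crossing" pattern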
Set aside for now the point (made in Jacobson and Petrie [2007], discussed further in Section 3.3) that elicitation procedures which minimize, or even rule out, the possibility of observing subject mistakes amount to a production system for censored data sets. It turns out subjects managed to find ways to make bizarre mistakes anyway. The now-notorious “multiple crossing” phenomenon in the Holt-Laury (HL) procedure occurs when subjects cross more
than once between Column A and Column B, thus violating every known deterministic model of decision making. For instance, Laury and Holt (2008) find that 44 out of 157 (~28 percent) of subjects engage in such behavior, a result representative of other studies using similar procedures. A more acute problem is that risk-parameter estimates produced by the HL procedure do not seem to generalize especially well, even within the HL family. Bosch-Doménech and Silvestre (2006) find that varying the number of rows changes the point at which subjects switch from Column A to Column B. For example, if the last three rows in the standard HL list are deleted, but the rest of the list is otherwise left unchanged, subjects switch over sooner,
thus registering a less risk-averse preference estimate than they do when making choices over the full list. Lévy-Garboua et al. (2012) found that different implementations of the Holt–Laury procedure yield different estimates of risk preferences. The differences they examined are: (1) whether the list is shown to the subjects all at once, or one row at a time; (2) whether the list is presented in the original order used by Holt and Laury, or in the opposite order, or random order; (3) varia-
tions of payoff scale; and (4) variations of subject experience. They find that subjects more frequently make inconsistent choices (i.e., multiple crossings) when rows are displayed one at a time; they also find that subjects register more risk-averse parameter estimates when rows are displayed one at a time. Taylor (2013) has found similar results.
Figure 3.1 Alternative lotteries presented as pie charts. The wedge labels (“FF”) denote the possible monetary outcomes in the lab currency called French Francs, and the wedge sizes are proportional to the probabilities, given in percentages. Source: Abdellaoui, Barrios, and Wakker (2007).
Pie-chart procedures
Another broad category of elicitation procedures is via choice among graphical representations of lotteries, most often pie charts, as in Figure 3.1. Preference reversals, one of the oldest choice conundrums, came out of juxtaposition of BDM and pie charts of binary lotteries (Lichtenstein and Slovic 1971). The original finding, since replicated by Grether and Plott (1979) among others, was that subjects often placed a higher dollar value on, say lottery A than lottery B, when selling prices were elicited using BDM, but then chose lottery B over lottery A (in an either/or choice) when the two lotteries were presented side-by-side as pie charts. A number of resolutions of this paradox have been attempted, of which dismissal of results involving BDM is one. However, other ways to elicit selling prices yield similar results. For example,
Cox and Epstein (1989) replace BDM with an alternative procedure wherein: (1) a subject nominates valuations for each lottery, (2) the lower valued lottery
is bought from the subject by the experimenter for a preannounced price (set by the experimenter), and (3) the subject plays out the higher valued lottery. The experiment generates the usual preference reversals.² Hey and Orme (1994) use a battery of choices among pie charts to estimate numerical values of risk-preference parameters. They note a large number of choice inconsistencies within their sample, which they attempt to harness to estimate additional parameters characterizing “noise” in subject choice. Acknowledgment that curvature of the Bernoulli function is insufficient to describe such a data set constitutes progress; characterizing the source(s) of “noise” would be further progress, and a topic to which we shall return in Section 3.3. One justification for using pie charts (and implicitly a criticism of alternative procedures) is that choice among pie-chart representations of lotteries appears to be so transparent that it should be less prone to errors. However, as seen in Hey and Orme (1994), people do still manage to contradict themselves in this format. An even starker example is in the work of Engle-Warnick, Escobal, and Laszlo (2006). They find that moving from a binary choice
between two lotteries, say A and B, to a choice among three — say A, B, and a third, dominated, lottery C — changes subjects’ choice distribution over A
or B. Moreover, subjects occasionally choose the dominated alternative C. So is the right number of pie charts to present to subjects two or three, ... or six (see Arya, Eckel, and Wichman [in press], a variant of Eckel and Grossman [2008]), or some other number?
Physiological measurements
In another emerging literature, researchers are attempting to tie risky choice data to physiological or biomedical data, with the goal of providing insight into risky choice. As interesting as the possibilities are, robust regularities so far are few. In one of the most
prominent recent studies in this class, Sapienza, Zingales, and Maestripieri (2009) examine correlation between a risk parameter estimated from a round of the Holt–Laury procedure and sal-
ivary testosterone. They do not find a statistically significant relationship between the two measures within their subsample of 320 men. They do find a statistically significant correlation within the subsample of 140 women. Pooling the data (for a sample of 460) and rerunning the regression yields a statistically significant relationship if gender is not included as an independent variable. When gender is added as an independent variable, the p-value on the coefficient for testosterone rises above 0.10. This suggests that the testosterone measure is relevant over the entire sample insofar as it reflects something else captured better by gender. A regression over the low-testosterone subsample reveals a barely significant relationship (coefficient = 0.144 and standard error = 0.079). Presumably a similar regression
on the high-testosterone subsample would show no significant relationship. None of the reported regressions had an R² above 0.06. Evidently the relationship between testosterone levels and Holt–Laury estimates of risk parameters is highly conditional at best. The
conditionality of this result is typical. Harlow
and Brown
(1990),
one of the earliest papers in this genre, found that a biochemical marker in the blood (enzyme monoamine oxidase) had some correlation with bidding behavior in first-price auctions, but only within the cross-section of men in the sample, not the women. Other physiological measurements had no significant
correlation with this or other laboratory choice data in the full sample or various reported subsamples. Other studies utilizing salivary testosterone measurements include Apicella et al. (2008), Schipper (2012a), and Schipper (2012b). Apicella et al. (2008) find a correlation (within a male sample) between salivary testosterone and the amount wagered in an investment procedure wherein the subject splits an endowment between cash and a lottery paying 2.5 times the staked amount in the high state, and zero otherwise. Schipper (2012a) finds a correlation between salivary testosterone and Holt–Laury risk estimates, for males,³ but not females; in other words, a finding opposite to that of Sapienza, Zingales and Maestripieri (2009). However, Schipper (2012b) finds no correlation between salivary testosterone and bidding in the FPSB auction, whereas one would expect to find one if both Apicella et al. (2008) and a risk-aversion-based explanation of bidding were true. This finding also runs counter to Harlow and Brown (1990).

In addition to salivary testosterone, other hormones (cortisol, estradiol and progesterone) have received the attention of investigators in the context of risky choice. The statistical correlations found in these studies are also hit-or-miss. Schipper (2012b) finds a significant positive correlation between progesterone and bids in the FPSB auction, but no significant correlation for testosterone, estradiol, or cortisol. Schipper (2012a) finds, as already noted, a correlation between Holt–Laury (for gains) risk parameters for males (higher testosterone proportional to lower risk aversion), and also a “marginally significant” correlation between cortisol and Holt–Laury (for gains) responses for females (higher cortisol proportional to higher risk aversion). However, there are, as Schipper points out, 16 possible correlations between different hormones and Holt–Laury responses; four hormones studied, two domains (gains and losses), and two genders; and only two of the 16 are at any point significant. The reason for saying “at any point” is that Schipper goes on to point out that with all the sorting, division, and combination of elements of the data set, one might well employ a correction for this in statistical inference. Using Bonferroni correction, the correlation between cortisol and Holt–Laury estimates (for gains, for females) becomes insignificant.

An alternative biometric approach to the measurement of salivary hormones is to measure the presumed trace of hormonal exposure prior to the subject's birth. Pre-natal hormonal exposure is held to leave traces on the human body, including a link between prenatal exposure to testosterone and the ratio of the lengths of index and ring fingers — or the second digit-fourth digit (2D:4D) ratio. This prenatal exposure might also be held to shape predispositions in such things as risky choice, and there is a literature that investigates this possibility.

As with the salivary hormone approach in this literature, the results are mixed. Apicella et al. (2008), which found a positive correlation between salivary testosterone and risky choice in an “investment game” described above, finds no such correlation between such choice and the 2D:4D measure. Garbarino, Slonim, and Sydnor (2011) do find a correlation between 2D:4D and responses in an Eckel and Grossman (2008) procedure; specifically, lower 2D:4D (greater prenatal exposure to testosterone) was correlated with riskier observed choices. Pearson and Schipper (2012) find no relationship between 2D:4D and bidding in the first-price auction for their overall sample, or for most of the subsamples they investigate. The two subsamples which do return some kind of correlation are white subjects' profits (but not bids) and 2D:4D, for which the correlation has the opposite sign to that conjectured (i.e., the measured effect goes in the “wrong” direction); and Asian male subjects' bids (but not profits) and 2D:4D, which has a p-value of 0.106. The variability in these results dealing with money-motivated risky-choice experiments also parallels much of the rest of the 2D:4D literature (including that looking at personality surveys). In the words of one active researcher in this field, “in the emerging field of 2D:4D research, a large number of reported relationships have been difficult to replicate” (Millet [2011, 397]).

A further consideration is the possibility of variability in hormone levels over time, and of the possible endogeneity of such variation. While the 2D:4D measure itself is possibly the ultimate in predetermined variables, that doesn't mean that other processes involving current, circulating hormones might not be taking place, and those processes might well involve feedback loops. Coates (2012) is investigating the possible interplay between actions, consequences, and biometric data.
Coates (2012) argues that there is a loop between circulating hormones (especially testosterone) and decision making, linked by the consequences of
prior actions. He invokes the “winner effect” from animal biology, wherein the winner of a prior contest (e.g., mating contest) is more likely to win the next such contest. The winner effect is held to be due to actual changes in physiological capability, which are (most immediately) activated by changes in circulating hormones; the oxygen-carrying capability of blood is increased, due to an increase in hemoglobin associated with an immediately prior increase
in circulating testosterone. Coates also notes that preparation for contest can
generate increased glucose in the blood (supporting higher effort levels), due (most immediately) to an increase in circulating adrenaline, and also can generate increased dopamine (associated with rewards or pleasure), due (most
immediately) to an increase in circulating cortisol.
The various research programs using hormonal measurements offer economists a way out of their usual intellectual ruts, surely a valuable contribution,
but findings across these programs must be reconciled both with each other and with standard economic theory. First, note that point-in-time biometric measurements need not (if Coates’s conjectures are correct) be predictive of future actions very many steps removed, as the biometric data in question will change in the course of the dynamics of contest. (Perhaps this is a reason for the difficulty researchers have encountered in replicating correlations
between salivary testosterone, or measures of other circulating hormones, and responses in risky-choice tasks.) Second, according to the definition of the
winner effect, the link between prior action and subsequent hormonal change is “winning”; that is to say, victory in a contest is conjectured to trigger a hormonal change. But what constitutes winning in any given contest is in turn determined by the rules of the contest. Thus even in an explicitly biologically based research program one cannot escape the issue of how contracts and institutions are structured. We return to this discussion in Chapter 6.
Payment methods

Elicitation procedures define the context in which subjects make their choice. However, any experimenter running any elicitation procedure, in any setting — laboratory or field — also has to choose and implement a form of payment. The experimenter must also specify whether the form of payment is monetary, a consumable, or hypothetical. Further, a choice must be made whether the payoffs are made for all or some subset of the rounds played, and whether the payment is handed out in each round or totaled and handed out at the end of the session in public or in private. There is now a literature, which we shall review in this subsection, which shows that standard theory indicates that these details matter (and laboratory experience suggests that even some theoretically irrelevant details might matter in practice).⁴ Arguments have been made for and against all methods. In investigating the preference-reversal phenomenon, Grether and Plott (1979) had their subjects play many rounds, with the advance understanding that only one of the rounds would be randomly chosen at the end for the money payment. The point was to eliminate wealth effects of payoffs from the preceding rounds
(which would shift the point of the Bernoulli curve at which lotteries were evaluated). It also was intended to eliminate portfolio effects: attempts to
diversify the total payoff earned across periods.
In contrast, like most of their predecessors, Cox and Grether (1996) pay for
all rounds by means of random draws conducted at the conclusion of each round, throughout the experiment. This protocol is now called pay all sequentially or “PAS.”> The justification is to avoid imposing a compound lottery, as would arise in random selection for a single round for actual payment. Cox, Sadiraj, and Schmidt (2011) demonstrate that this latter protocol, pay one
randomly or “POR,” can lead to cross-task contamination. Thus there are a variety of carefully considered alternative methods for
paying subjects, each intended to solve some incentive problem. However,
every such alternative has the potential to create new incentive problems of its own. Recent empirical and theoretical work attempts to systematically map out the properties (and potential failings) of subject payment protocols.
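To fix ideas, here is a toy sketch (Python; the lotteries are invented, and PAS and PAI differ only in when the draws occur, a distinction this simple model does not capture) of how the protocols named in this subsection and in note 5 map a session's rounds into a payment.

import random

def resolve(lottery, u):
    """Play out a binary lottery (p, high, low) using the uniform draw u."""
    p, high, low = lottery
    return high if u < p else low

def pay(lotteries, protocol, rng):
    """Payment for a session consisting of one lottery per round."""
    if protocol == "POR":                      # pay one round, chosen at random
        return resolve(rng.choice(lotteries), rng.random())
    if protocol in ("PAS", "PAI"):             # pay every round, one draw per round
        return sum(resolve(lot, rng.random()) for lot in lotteries)
    if protocol == "PAC":                      # pay every round, a single draw for all
        u = rng.random()
        return sum(resolve(lot, u) for lot in lotteries)
    raise ValueError("unknown protocol: " + protocol)

rng = random.Random(1)
session = [(0.5, 10.0, 0.0), (0.4, 12.0, 0.0), (1.0, 3.0, 3.0)]   # made-up rounds
for protocol in ("POR", "PAS", "PAC"):
    print(protocol, pay(session, protocol, rng))

Written out this way, POR visibly turns the whole session into a compound lottery over rounds, which is exactly the channel through which cross-task contamination can enter.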
Empirically, Cox, Sadiraj, and Schmidt (2011) examine the POR protocol for either/or choice procedures, e.g., Holt–Laury's (2002) lists and Hey and Orme's (1994) sequential binary pie charts. They find that the cross-section of subjects' binary choices between S or R differs according to whether the choice sequence comes from column 1, 2.1, 2.2, 3.1, or 3.2 in Table 3.4.
Table 3.4 Treatments from Cox, Sadiraj, and Schmidt (2011)

Option S: 4 euro with 100% prob.     Option R: 10 euro with 50% prob., 0 euro with 50% prob.
Option S': 3 euro with 100% prob.    Option R': 12 euro with 50% prob., 0 euro with 50% prob.
Option S'': 5 euro with 100% prob.   Option R'': 8 euro with 50% prob., 0 euro with 50% prob.

Column 1 presents only the first choice, between S and R, with no second choice; columns 2.1, 2.2, 3.1, and 3.2 pair the choice between S and R with a choice between S' and R' or between S'' and R'', as either the first or the second choice.
That is, varying the lotteries that are presented in other rounds affects the choice between lottery S and lottery R when it is offered. This is evocative of both the results (over sequences of choices) of Huber, Payne, and Puto (1982) and the results (within a single round, varying the number and type of choices) obtained by Engle-Warnick, Escobal, and Laszlo (2006). Further work by Cox, Sadiraj, and Schmidt (2012) finds additional instances of correlation between payment protocol and instability in choice. In contrast, they do not find significant changes in the responses elicited using the “pay all sequentially” protocol. This may be of some relief to experimenters who have been using this protocol for auctions and games for decades — it is essentially the original payoff protocol used by Smith (1976). However, failure to reject incentive effects of a payment protocol empirically does not in itself prove that it is necessarily nondistorting. Azrieli, Chambers, and Healy (2012) prove in an axiomatic setting that under some conditions running (and paying) only a single round is the only protocol
that can be guaranteed not to create one or another of the possible incentive distortions. In the event that one were to be reduced to running an experiment consisting of one round — with neither learning effects, nor opportunity to learn, either on the part of the subject, or on the part of the experimenter about what the subject might be figuring out over the course of the experiment — what would one ask the subject? Would it be an either/or choice? If so, which one? Or would the experimenter ask the subject for a more articulated, strategy-method type response? Regardless, wouldn’t the logic that calls for a
single-round elicitation procedure also suggest that the estimates thus elicited would fail to generalize?
3.2 Counterexamples
The remarkable sensitivity of estimated risk-preference parameters to the elicitation instrument and to such details as the payment protocol naturally spurred closer scrutiny. It was soon recognized that “context,” broadly considered, might shift the location of the distribution of parameter estimates or alter their dispersion. If a rank-preserving shift in parameter estimates was
the most troubling finding in the data, this sort of “context dependence” could
be accommodated by standard theory. The theoretical presumption is that context will not systematically affect the relative positions of individuals within
the distribution. But, as we will now see, empirical research does not support that presumption; there are, in effect, counterexamples to existing theory. BDM
ys. auctions
Isaac and James (2000) and Berg, Dickhaut, and McCabe (2005) each place their subjects in two different institutions: first-price sealed-bid auction (FPSB), and Becker–DeGroot–Marschak (BDM). Each is a natural institution
to look at in this context, because each potentially generates data that can be used to estimate risk parameters. FPSB does so through the lens of the constant relative risk aversion model (CRRAM)
mentioned earlier, whereas
BDM directly elicits certainty equivalents to lotteries. These papers replicate prior work in finding that the two different institutions generate quantitatively and qualitatively different estimates of risk-preference parameters. What is novel is that they do so using a within-subjects (or own-subject control) design, wherein compatibility or incompatibility with Expected Utility Theory at the level of the individual can be verified. Not only are the average estimates from each institution numerically different but, in terms of economic interpretation, they tend to be on opposite sides of risk neutrality. That is, individual subjects act as if they are risk averse in FPSB and risk seeking in BDM. Furthermore, in many cases the same subjects cannot even be classified on the same side of risk neutrality on the basis of behavior they exhibit in the two institutions. Indeed, Isaac and James found that there was actually a negative correlation between the parameter estimates across institutions (for the cross-section of subjects).
As shown in Figure 3.2, the data “pivot” around the center of the graph,
the axis denoted by the number 1, itself significant of risk neutrality. That is to say, being on the far risk-averse (seeking) side in one institution is corre-
lated with being on the far risk-seeking (averse) side in the other institution. Conversely, those near the center of the graph move least; that is, stability seems to prevail most in the vicinity of risk neutrality. It is not immediately obvious how one could reconcile this finding with any existing theory of choice under risk or uncertainty. Perhaps it is simply a
Figure 3.2 Data from Isaac and James (2000). Note: x = 1 = risk neutral; x > 1 = risk seeking; x < 1 = risk averse.
affect subjects' perceptions of the games. Figures 3.4 and 3.5 illustrate the

change — switching from the clock format to the tree format — lowers overbidding significantly to around $0.50 per round. When, in addition to using the tree format, the structure of the message space is such that subjects take turns at being able to act (as in the centipede game) the overbidding drops further and significantly to an average per subject of around $0.10 per round (which is furthermore attributable to a small number of rounds where a small percentage of the subjects overbid by a large amount, thus skewing the average bid deviation). It is a matter for further research whether these changes arise from narrowly construed computational aids. In the meantime, it is clear that a researcher's choice among these standard formats (rarely thought of as treatment and often relegated to the background) can affect bidding behavior, as well as the inferences one draws about risk preferences from such observations.

Figure 3.5 Tree format Dutch auction.

Heuristics
One way to follow up on the previous suggestions is to run simulations with automated agents that embody the conjectured limitations on perception and computation. Gode and Sunder (1993) is an early example, albeit with a different conclusion. They show that “zero intelligence agents” placed in the double auction market institution can produce efficient outcomes similar in some important respects to those observed with human agents. In the present context, the idea would be to introduce agents that have parameters controlling memory, perceptions of alternative payoffs, and so on. Learning models, such as the Experience Weighted Attraction (EWA) model of Camerer and Ho (1999), provide such a tool for investigating risky choice.
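For readers unfamiliar with the machinery, here is a bare-bones sketch of the EWA updating rule of Camerer and Ho (1999) (Python; parameter values and payoffs are placeholders, and this is the generic rule rather than the particular implementation used in the simulations discussed next). The δ parameter, which weights the payoffs of strategies not actually played, is the one that matters most in what follows.

import math

def ewa_update(attractions, N, chosen, payoffs, phi, delta, rho):
    """One round of Experience Weighted Attraction updating.
    attractions: current attraction of each strategy.
    payoffs[j]: payoff strategy j would have earned this round, given others' play.
    chosen: index of the strategy actually played.
    delta = 0 ignores foregone payoffs; delta = 1 weights them like realized ones."""
    N_new = rho * N + 1.0
    updated = []
    for j, a in enumerate(attractions):
        weight = 1.0 if j == chosen else delta   # delta + (1 - delta) * indicator
        updated.append((phi * N * a + weight * payoffs[j]) / N_new)
    return updated, N_new

def choice_probabilities(attractions, lam):
    """Logit choice rule over attractions."""
    exps = [math.exp(lam * a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]

# Two strategies with made-up payoffs; with delta = 0 only the chosen strategy's payoff enters.
A, N = [0.0, 0.0], 1.0
A, N = ewa_update(A, N, chosen=0, payoffs=[2.0, 5.0], phi=0.9, delta=0.0, rho=0.9)
print(A, choice_probabilities(A, lam=1.0))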
Indeed, James and Reagle (2009) present simulations investigating the interaction between EWA parameters and two institutions for eliciting risk parameters, the FPSB auction and the selling version of BDM. Their computerized agents reproduce the kinds of choice asymmetry observed in human subjects (in Isaac and James
[2000], and Berg, Dickhaut, and McCabe
[2005]) even
though the computerized agents have no Bernoulli functions. That is, overbidding in BDM (that declined upon repetition, as with human subjects) was observed; overbidding in FPSB (of a more persistent nature, as with human subjects) was observed; and thus BDM data mapping to risk seeking and FPSB data mapping to risk aversion were both observed, and for the same (parameterization of) agents. What was the parameterization which yielded these results? In the parlance of EWA, low δ (at or near zero) was the most important thing for replicating the stylized facts from human subjects experiments from both BDM and FPSB while using a single EWA parameter set. Low δ corresponds to an inability to fully appreciate the payoff to courses of action other than that taken in prior play. It was important in matching the human data for FPSB and BDM. Conversely, δ at or near one corresponds to
full appreciation (and entry into the choice “urn”) of all strategies, even those
never sampled first-hand. If one were to apply this approach to the Dutch auction/centipede metagame, a difference in δ due to institutional format may be a possible avenue to explore. In general, this approach to modeling subject choice might, all else being equal, presume a higher δ for informationally richer institutional formats, and lower δ for informationally sparser institutional formats. In the case of the Dutch auction presented in clock versus tree formats, the clock format (displaying only resale value and current price reading) might reasonably be supposed not to do much to aid visualization of strategic possibilities over the course of the entire auction, and thus tend to impart low δ to the participants. In contrast, the tree format's exhaustive labeling of possible actions and payoffs throughout the entire game might impart high δ to the participants. Clearly, this approach imputes a key behavioral determinant to the institution itself (or at least a typical subject's ability to apprehend the possibilities inherent in that institution), and a rigorous and general mapping from institutional attributes to EWA parameters would ultimately be desirable. It is no easy task to figure out which heuristic(s), including potentially
counterproductive ones, subjects might be using. Some researchers are however trying to do just that. “Failure of game form recognition” is a possibil-
ity explored in Cason and Plott (2012) for the case of the BDM
procedure.
Cason and Plott implement a design consisting of two rounds of the selling version of BDM, where each round is a “test of understanding the domi-
nant strategy.” That is, the object being valued is not a lottery with p ∈ (0,1), but an induced value unit of certain resale value. As such, their 100-percent-
certainty induced value unit is even more explicitly labeled as such than what
James (2007) used, wherein lotteries with p = 1 or p = 0 for the high-state
outcome were used as tests. Cason and Plott proceed to explore the occur-
rence of what they contend can only be subject mistakes. Among their key results are: (1) subject responses are correlated with the upper bound on the
random number generator against which their response is compared in the
operation of BDM
(they shouldn’t be, according to dominant strategy play);
(2) subjects who do not employ the dominant strategy but then experience the cost of failing to do so¹⁰ are more likely to move their response (in the second round) toward the dominant strategy response than subjects who do not directly and practically experience the cost of misrevelation; and (3) a model of subject response which incorporates the upper bound on the random number generator — the first-price auction optimal bid function — more closely matches the response data than a “flat maximum”-type model which adds logit noise to dominant strategy play. This last result makes clear that subject behavior might not reflect just symmetrical, random noise, but asymmetri-
cal, potentially recognizable mistakes. If so, then notions that asymmetry in behavior relative to some risk-neutral prediction is necessarily a sign of innate risk preferences would have to be revised.

3.4 Discussion

What have we learned from the vast literature on field and laboratory
experiments seeking to estimate parameters of Bernoulli functions of individuals or populations of individuals? First, the different ways of eliciting risk parameters in cash-motivated, controlled economics experiments yield different general results. For some elicitation instruments — notably bids in first-price auctions and various sorts of lottery menus — the results generally confirm the beliefs of most economists that most people are risk averse (i.e., prefer gambles with lower variance).
However, focusing on other elicitation instruments — for example, the sell-
ing BDM procedure, or the dual-to-buying version of BDM, or arguably the centipede game using independent private values — could lead to the opposite conclusion, that most people are risk seeking.

Second, and perhaps more troubling, the different results across elicitation instruments are not simply a matter of more or less noise and more or less bias on a rank-preserving basis. Several studies reported that individuals found to be risk neutral according to one instrument tended to appear to be risk neutral according to other instruments, but individuals found to be the most risk averse by one instrument could be the most risk seeking according to another.
Third, robust regularities of this sort (found across elicitation instruments) may be more suggestive of personal differences in problem-solving skills, learning, and adaptation to feedback than of the existence of personalized Bernoulli functions. Economists specializing in such matters readily acknowledge that measured risk aversion is “context dependent.” But this is an understatement of the current state of limitations in measuring individual subjects' Bernoulli functions. And it is unclear whether the 70-year quest to measure Bernoulli function curvature will ever find its grail. But there are leads that may prove fruitful. In later chapters we will explore the possibility that simple risk neutrality — possibly warped in its expression
by its surroundings — might be a useful point of departure for future research. The old notion that “institutions matter” is far from played out. How and why do particular institutions produce particular characteristic results? There is a lot more that can be done in this regard. We may find that there are interac-
tions between institutional structure, on the one hand, and the mechanics of coming to a decision on the other.
Before pursuing such leads, we must ask ourselves whether we have been focused too much at the level of individual behavior. Is there anything at the aggregate level — from the world of business or an industry such as insur-
ance, for example — that suggests the value of Bernoulli functions in economic
analysis? The next chapter explores that possibility.
Notes

1 The nonlinear segment occurs above: v* = v_lower + [(n − 1 + r_i)/(n − 1 + r_L)](v_upper − v_lower), where (1 − r_L) is the CRRA parameter for the least risk-averse (or most risk-preferring) bidder from the distribution of (heterogeneous) bidder types.
2 Cox and Grether (1996) report results wherein numerical valuations of lotteries derived from auction results (from auctions with multiple human bidders) are paired with binary choice (involving choice over the same lotteries valued in the auctions). Preference reversals are not observed in this case, but this case is also no
longer an individual choice setting. 3 The author further reports that this correlation is only significant for gains, not
losses.
4 Paul Samuelson, in a 1960 article on the St. Petersburg Paradox, seemed well aware
that payment methods might be very important in eliciting information about Bernoulli functions:
As many scholars have observed, there is a dilemma in all small-prize experiments. If we make the prize small enough to minimize the changing marginal utilities (car-
dinal or ordinal), we may destroy the motivation of the guinea pig to reveal to us his true opinions. (He may be too uninterested to do the mental work to find his opinion; he may spite us; etc.) Indeed, operationally, who dares assert that his “opinion” exists if it is not in principle observable? This may be a social science analogue to the Heisenberg “uncertainty principle” in quantum physics. Just as we must throw light on a small object to see it and if it is small enough must thereby inevitably distort the object by our observational process, so we must to motivate a human guinea pig shower him with finite dispersion, whose effect may be to change his marginal utilities and contaminate his revealed probabilities. In principle, we can move to large scale dispersion experiments: thus, to determine whether Paul thinks rain more probable for July 4 than fair weather, let us threaten him with death if his guess is wrong and then observe which forecast he makes. Such “destructive tests” are expensive, even in these days of foundation philanthropy. Also, as Ramsey's [1930, 177] discussion of “ethically neutral” entities shows, we must be sure that the pig does not think that dying on a rainy day sends him to Paradise; for such a belief would violate the implicit independence assumption we make in separating out a man's probability beliefs from his evaluation of outcomes.

5 This forms part of a larger taxonomy of protocols including: paying for all rounds at the end of the experiment (i.e., not on a running basis) by means of independent draws conducted at the end of the experiment, one for each round (pay all independently, or PAI); and paying for all rounds at the end of the experiment by means of a single draw applying to the results of all rounds (pay all correlated, or PAC).
6 ... of eight gains domain results (on differences in risky choice across different CRT scores) were statistically significant, as were three of five in the losses domain. Additional results comparing certain gains versus lower-expected-value gambles were also reported; none of these results showed a statistically significant relationship with difference in CRT scores.

7 The authors try three different approaches to reclassification of inconsistent choices. Also, the risk-aversion measures ultimately used in regressions are, in effect, a percentage of choices (on the list) which were the more risk-averse choice available in each row.

8 The essential story in Myers (1977) is that once encumbered by debt (and its repayment), managers maximizing the value of firm equity will turn down positive Net Present Value, but low variance, projects in favor of higher variance projects in the hope that a sufficiently positive realization will occur and allow the manager to both pay off the debt and have additional assets left over for the equity-holders. A less refined story that captures the intuition of the model is that of an embezzling accountant taking his ill-gotten gains to the racetrack, and betting on the longshot, in the hopes of replacing his theft and maybe also having something left over for himself.

9 An earlier result in this area is found in Isaac and Walker (1985). There, bids in the first-price sealed-bid auction changed depending on whether or not the experimenters announced losing bids (a piece of information which would be considered irrelevant under either the risk-neutral Vickrey model or CRRAM). This in turn implies that estimated r_i would appear to change with whether or not losing bids are announced.

10 For instance, this can happen when the subject responds with a number larger than the induced value — and then happens to be faced with a draw from the random number generator (the “robot bidder”) which is itself larger than the induced value, but smaller than the number the subject gave as a response. In this case the subject, having “outbid” the random number generator, keeps the induced value unit, and turns it in to the experimenter for $2, rather than giving up the induced value unit for the draw greater than $2.
Bibliography

Abdellaoui, M., Barrios, C., and Wakker, P. P. (2007) “Reconciling Introspective Utility
with Revealed Preference: Experimental Arguments Based on Prospect Theory,” Journal of Econometrics 138(1): 356-378, p. 370, Figure 5. http://people.few.eur.nl/wakker/pdfspubld/07.1mocawa.pdf (accessed June 19, 2013).
Apicella, C. L., Dreber, A., Campbell, B., Gray, P. B., Hoffman, M.,
and Little, A.
C. (2008) “Testosterone and Financial Risk Preferences,” Evolution and Human Behavior 29(6): 384-390. ~ a. — Armantier, O., and Treich, N. (2009) “Subjective Probabilities in Games: An Application to the Overbidding Puzzle,” International Economic Review 50(4): 1079-1 102. Arrow, K. J. (1965) Aspects of the Theory of Risk-Bearing, Helsinki: Yrjo Jahnssonin Saatid; reprinted (1971) Essays in the Theory of Risk-Bearing, Vol. I. Chicago:
Markham Publishing Company.
Arya, S., Eckel, C., and Wichman, C. (in press) “Anatomy of the Credit Score,” Journal of Economic Behavior & Organization, forthcoming.
Azrieli, Y., Chambers, C. P., and Healy, P. J. (2012) “Incentives in Experiments: A
Theoretical Analysis,” Working Paper.
Becker, G. M., DeGroot, M. H., and Marschak, J. (1964) “Measuring Utility by a
Single-Response Sequential Method,” Behavioral Science 9(3): 226-232.
Benjamin, D., Brown, S., and Shapiro, J. (in press) “Who is ‘Behavioral’? Cognitive Ability and Anomalous Preferences,” Journal of the European Economic
Association. Berg, J., Dickhaut, J., and McCabe, K. (2005) “Risk Preference Instability Across Institutions: A Dilemma,” PNAS 102: 4209-4214. Binswanger, H. P. (1980) “Attitudes Toward Risk: Experimental Measurement in
Rural India,” American Journal of Agricultural Economics 62 (August): 395-407. Binswanger, H. P. (1981) “Attitudes Toward Risk: Theoretical Implications of an Experiment in Rural India,” The Economic Journal 91(364): 867-890.
Bosch-Doménech, A., and Silvestre, J. (2006) “Reflections on Gains and Losses: A 2 x . 2X 7 Experiment,” Journal of Risk and Uncertainty 33(3): 217-235.
Camerer, C., and Ho, H. T. (1999) “Experience Weighted Attraction Learning in Normal Form Games,” Econometrica 67(4): 827-874. Cason, T. N., and Plott, C. R. (2012) “Misconceptions and Game Form Recognition of the BDM Method: Challenges to Theories of Revealed Preference and Framing,” California Institute of Technology Social Science Working Paper. Castillo, M., Petrie, R., and Torero,
M.
(2010) “On
the Preferences of Principals
and Agents,” Economic Inquiry 48(2): 266-273. http://ideas.repec.org/a/bla/ecinqu/ v48y2010i2p266-273 html (accessed October 7, 2012). Coates, J. (2012) The Hour Between Dog and Wolf: Risk Taking, Gut Feelings and the Biology of Boom and Bust. New York: Penguin Press. Cox, J. C., and Epstein, S. (1989) “Preference Reversals Without the Independence Axiom,” The American Economic Review 79: 408-426.
Cox, J.C., and Grether, D. M. (1996) “The Preference Reversal Phenomenon: Response
Mode, Markets and Incentives,” Economic Theory 7: 381-405. Cox, J. C., and James, D. (2012) “Clocks and Trees: Isomorphic Dutch Auctions and
Centipede Games,” Econometrica 80(2): 883-903.
Cox, J. C., Roberson,
B., and Smith, V. L. (1982) “Theory and Behavior of Single
Object Auctions,” Research in Experimental Economics 2: 143.
Cox, J.C., Sadiraj, V., and Schmidt, U. (2011) “Paradoxes and Mechanisms for Choice
Under Risk,” (No. 1712). Kiel Working Papers.
Cox, J.C., Sadiraj, V.,and Schmidt, U. (2012) “Asymmetrically Dominated Choice Problems
and Random Incentive Mechanisms,” Working Paper, Georgia State University.
Cox, J. C., Sadiraj, V., Vogt, B., and Dasgupta,
U. (in press) “Is There a Plausible
Theory for Decision Under Risk? A Dual Calibration Critique,” Economic Theory, forthcoming.
Cox, J. C., Smith, V. L., and Walker, J. M. (1988) “Theory and Individual Behavior of
First Price Auctions,” Journal of Risk and Uncertainty 1: 61-99.
Dave, C., Eckel, C., Johnson, C., and Rojas, C. (2008) “Eliciting Risk Preferences: When Is Simple Better?” Available at SSRN 1883787. Diamond, P., and Stiglitz, J. (1974) “Increases in Risk and Risk Aversion,” Journal of Economic Theory 8: 337-360. Dillon, J. L., and Scandizzo, P. L. (1978) “Risk Attitudes of Subsistence Farmers
531
Dorsey, R., and Rugzzolini, L. (2003) “Explaining Overbidding in First Price Auctions Using Controlled Lotteries,” Experimental Economics 6(2): 123-140. Eckel, C. C., and Grossman, P. J. (2008) “Forecasting Risk Attitudes: An Experimental Study Using Actual and Forecast Gamble Choices,” Journal of Economic Behavior & Organization 68(1): 1-17.
Edwards, W. (1953) “Probability Preferences in Gambling,” American Journal of Psychology 66: 349-364.
Engelbrecht-Wiggans, R. (1989) “The Effect of Regret on Optimal Bidding in Auctions,” Management Science 35(6): 685-692. Engle-Warnick, J, Escobal, J. A., and Laszlo, 5. (2006) The Effect of an Additional Alternative on Measured Risk Preferences ina Laboratory Experiment in Peru. CIRANO.
http:/Aideas.repec.org/p/mcl/mclwop/2006-10.html (accessed October 7, 2012). Frederick, S. (2005) “Cognitive Reflection and Decision Making,” The Journal of Economic Perspectives 19(4): 25-42.
Friedman, D. (1993) “The Double Auction Market Institution: A Survey,” in D.
Friedman and J. Rust (eds.) The Double Auction Market: Institutions, Theories, and Evidence, Santa Fe Institute Proceedings 14: 3-25. Santa Fe, NM: Addison Wesley. Friedman, M., and Savage, L. J. (1948) “The Utility Analysis of Choices Involving Risk,” Journal of Political Economy 56: 279-304. Garbarino, E., Slonim, R., and Sydnor, J. (2011) “Digit Ratios (2D: 4D) as Predictors
of Risky Decision 42(1): 1-26.
Gode,
D. K., and
Making
Sunder,
for Both
S. (1993)
Sexes,” Journal of Risk and
“Allocative
Efficiency
of Markets
Uncertainty with Zero
Intelligence Traders: Market as a Partial Substitute for Individual Rationality,” The Journal of Political Economy 101(1): 119-137. Goeree, J K., Holt, C. A., and Palfrey, T. R. (2002) “Quantal Response Equilibrium and Overbidding in Private-Value Auctions,” Journal of Economic Theory 104(1): 247-272. Grayson, C. Operators. Grether, D. Preference Harlow, W.,
Tolerance:
J. (1960). Decisions Under Uncertainty: Drilling Decisions by Oil and Gas Cambridge, MA: Harvard University Press. M., and Plott, C. R. (1979) “Economic Theory of Choice and the Reversal Phenomenon,” The American Economic Review 69: 623-638. and Brown, K. (1990) “Understanding and Assessing Financial Risk
A. Biological
Perspective,’
Financial Analysts
Journal,
November/
December: 50-80.
Harrison, G. W. (1986) “An Experimental Test for Risk Aversion,” Economics Letters 21(1): 7-11. Hanson, G. W. (1989) “Theory and Misbehavior American Economic Review 79: 749-762.
of First-Price
Auctions,”
The
Hey, J. D., and Orme, C. (1994) “Investigating Generalizations of Expected Utility Theory Using Experimental Data,” Econometrica 62(6): 1291-1326. Holt, C. A., and Laury, S. K. (2002) “Risk Aversion and Incentive Effects,” American
Economic Review 92(5): 1644-1655.
in Northeast Brazil: A Sampling Approach,” American Journal of Agricultural
Huber,
“Are Risk Aversion and Impatience Economic Review 100: 1238-1260.
Isaac, R. M., and James, D. (2000) “Just Who Are You Calling Risk Averse?” Journal of Risk and Uncertainty 20(2): 177-187.
Economics 60(3): 425-435. Dohmen, T., Falk, A., Huffman, D., Sunde, U., Schupp, J, and Wagner, G. G. (2010)
Related to Cognitive Ability?” American
J., Payne,
J. W., and Puto,
C. (1982)
“Adding
Asymmetrically
Dominated
Alternatives: Violations of Regularity and the Similarity Hypothesis,” Journal of Consumer Research 9(1): 90-98.
52.
Measuring individual risk preferences
Measuring individual risk preferences
Isaac, R. M., and Walker, J. M. (1985) “Information and Conspiracy in Seuled-Bid Auction,” Journal of Economic Behavior and Organization 6(2): 139-159, Jacobson, S., and Petrie, R. (2007) “Inconsistent Choices in Lottery Experiments: Evidence from Rwanda,” Experimental Economics Center Working Paper Series, Georgia State University. James, D. (2007) “Stability of Risk Preference Parameter Estimates Within the Becker-
Degroot-Marschak Procedure,” Experimental Economics 10(2): 123-141. James, D. (2011) “Incentive Compatible Elicitation Procedures,” MODSIM Conference Proceedings 19(1): 1421-1427. http://www.mssanz.org.au/modsim201 1/D4/james. pdf (accessed October 7, 2012). James, D., and Reagle, D. (2009) “Experience Weighted Attraction in the First Price Auction and Becker DeGroot
MODSIM09. 19, 2013).
Marschak,” in 18th World IMACS
http:/Awww.mssanz.org.au/modsim09/D8/james.pdf
Congress
and
(accessed June
Kachelmeier, S. J., and Shehata, M. (1992) “Examining Risk Preferences Under High
Monetary Incentives: Experimental Evidence from the People’s Republic of China,” The American Economic Review 82(5): 1120-1141.
Kagel, J., and Levin, D. (1993) “Independent Private Value Auctions: Bidder Behavior
in First-, Second- and Third-Price Auctions with Varying Numbers of Bidders,” The
Economic Journal 103(419): 868-879. Kahneman, D., and Tversky, A. (1992) “Advances in Prospect Theory: Cumulative Representation of Uncertainty,” Journal of Risk and Uncertainty 5(4): 297-323. Laury, S. K., and Holt, C. A. (2008) “Voluntary Provision of Public Goods: Experimental Results with Interior Nash Equilibria,” Handbook of Experimental Economics Results 1: 792-801. Lévy-Garboua,
L., Maafi, H., Masclet, D., and Terracol, A. (2012) “Risk Aversion
and Framing Effects,” Experimental Economics 15(1): 128-144.
Lichtenstein, S., and Slovic, P (1971) “Reversals of Preference Between Bids and Choices in Gambling Decisions,” Journal of Experimental Psychology 89: 46-55.
Markowitz, H. (1952) “The Utility of Wealth,” Journal of Political Economy 60(2): 152-158. Millet, K. (2011) “An Interactionist Perspective on the Relationship Between 2D:4D and Behavior:
An
Overview
of (Moderated)
Relationships
Between
2D:4D
and
Economic Decision Making,” Personality and Individual Differences 51(4): 397-401.
Mosteller, F, and Nogee, P. (1951) “An Experimental Measurement of Utility,” Journal
of Political Economy 59: 371404. Myers, S. C. (1977) “Determinants of Corporate Borrowing,” Journal of Financial
Economics 5(2): 147-175. Neugebauer, T., and Selten, R. (2006) “Individual Behavior of First-Price Auctions:
The Importance of Information Feedback in Computerized Experimental Markets,” Games and Economic Behavior 54(1): 183-204. Pearson, M., and Schipper, B. C. (2012) “The Visible Hand: Finger Ratio (2D: 4D) and Competitive Bidding,” Experimental Economics 15(3): 510-529.
Pratt, J. W. (1964) “Risk Aversion in the Small and in the Large,” Econometrica 32: 122-136. Quiggin, J. (1982) “A Theory of Anticipated Utility,” Journal of Economic Behavior & Organization 3(4): 323-343. Rabin, M. (2000) “Risk Aversion and Expected Utility Theory: A Calibration Theorem,” Econometrica 68(5): 1281-1292.
53
Rothschild, M., and Stiglitz, J. (1970) “Increasing Risk I: A Definition,” Journal of Economic Theory 2: 225-243. Rothschild, M., and Stiglitz, J. (1971) “Increasing Risk II: Its Economic Consequences,” Journal of Economic Theory 3: 66-84. Samuelson, P. A. (1960) “The St. Petersburg Paradox as a Double Divergent Limit,” International Economic Review 1: 31-37. Sapienza, P., Zingales, L., and Maestripieri, D. (2009) “Gender Differences in Financial Risk Aversion and Career Choices Are Affected by Testosterone,” PNAS 106(36):
15268-15273. Schipper, B. C. (2012a) Sex Hormones California, Davis Working Paper.
and Competitive
Bidding. ,
University
of
Schipper, B. C. (2012b) Sex Hormones And Choice Under Risk. University of California,
Davis Working Paper. Schorvitz, E. B. (1998) Experimental unpublished.
Tests of Fundamental Economic
Theories,
Smith, V. L. (1976) “Experimental Economics: Induced Value Theory,” The American Economic Review 66(2): 274-279.
Smith, V. L. (1982) “Microeconomic
Systems
as an Experimental
Science,” The
American Economic Review 72(5): 923-955.
Taylor, M. P. (2013) “Bias and Brains: Risk Aversion and Cognitive Ability Across Real and Hypothetical Settings,” Journal of Risk and Uncertainty 46: 299-320.
Vickrey, W. (1961) “Counterspeculation, Auctions, and Competitive Sealed Tenders,”
The Journal of Finance 16(1): 8-37. Yaari, M. (1987) “The Dual Theory of Choice Under Risk,” Econometrica 55(1): 95-115.
4 Aggregate-level evidence from the field

Even if curved Bernoulli functions have not given us a better understanding of individual behavior, it could be argued that that might not be reason enough to abandon them. Perhaps they may yet help us better understand and predict aggregate phenomena in important industries and markets. An analogy from physics: myriad individual water molecules in a river move randomly in Brownian motion, yet important properties of the river are captured in the laws of hydrodynamics. Our inability to predict the movement of individual molecules does not prevent us from designing and operating pumps and turbines that depend on the highly predictable behavior of aggregates. A biological analogy: the aggregate behavior of an ant colony is described by very different principles than the individual behavior of its members (Kirman [1993]). In economics, we know that market outcomes can exhibit a degree of rationality that exceeds that of individual traders (e.g., Gode and Sunder [1993], Jamal, Maier, and Sunder [2012]).

Can curved Bernoulli functions help us understand and predict economic and other social phenomena at aggregate levels? As we will see in this chapter, there has already been considerable research to help us examine this possibility. We will briefly review voluminous fieldwork on illicit drugs, sports gambling, engineering, insurance, real estate, and financial markets for stocks and bonds, and will ask whether positing nonlinear Bernoulli functions might yield distinctive insights.

4.1 Health, medicine, sports, and illicit drugs

Discussions of risk in health, medicine, and sports invariably center on the possibility of developing a physical condition that we would rather not have (e.g., encountering disease, harm, loss, or injury) and the associated "risk factors." Mention of the dispersion notion of risk is conspicuously absent. For example, risk factors for drug addiction include family history of addiction, being male, having another psychological problem, peer pressure, lack of family involvement, anxiety, depression, loneliness, and taking a highly addictive drug.1 Risk factors for heart disease include being old, being male, having a family history of heart disease, being post-menopausal, being of non-Caucasian race, smoking, a high level of low-density lipoprotein, hypertension, obesity, diabetes, high levels of C-reactive protein, a sedentary lifestyle, and stress.2 The chances of a heart attack, not the dispersion of outcomes, are the focus of research on how statins alter the risk of heart disease. Similarly, risk factors for injuries are trauma, excessive load, poor technique, inappropriate equipment, and failure to warm up and cool down.3 Literature aimed at practitioners in these industries rarely (if ever) invokes the dispersion-of-outcome interpretation of risk. More importantly for our purposes, that literature largely seems to ignore the possibility that individual choices might be described as maximizing the expectation of a Bernoulli function, or maximizing the weighted expectation of a value function.
There is, however, a recent economics literature that tries to connect indi-
vidual elicited Bernoulli function parameters to that individual’s risky behavior, including smoking and drinking. For example, Barsky et al. (1997) uses answers to hypothetical questions about huge gambles (e.g., doubling lifetime income or reducing it by 30 percent) to classify respondents into four risk-tolerance categories. They find the power of this classification to predict behavior such as smoking and drinking to be only modest (mostly significant but rather small coefficients of the “correct” sign from a large sample of respondents). Subsequent studies in the same area report mixed results. For example, Picone, Frank, and Taylor (2004) report that in explaining the demand for preventive medical tests, patients’ risk-tolerance coefficients are either statistically insignificant or have the “wrong” sign. Anderson and Mellor (2008) estimate constant relative risk aversion (CRRA) coefficients for about a thousand subjects
from a nonhypothetical Holt-Laury procedure. These estimates have a small but significant (at the 5 percent level) power to predict subjects’ self-reported obesity, and are at best marginally significant (at the 10 percent level) in predicting self-reported smoking, drinking, weight, seat belt usage, and speeding. Perhaps the most definitive such study so far is Dohmen et al. (2005, 2011). Exploiting a large representative survey of German adults, they conduct a real-stakes lottery experiment with a representative subsample of 450 individuals. The survey includes a “general risk question” (roughly: please report your willingness to take risk in general, on a ten-point scale), and similar questions regarding willingness to take risks in particular domains such as health or car driving. The study tries to predict self-reported activities, e.g.,
smoking and participation in active sports. The abstract of their widely cited 2005 working paper concludes:
Strikingly, the general risk question predicts all behaviors whereas the standard lottery measure does not. The best overall predictor for any specific behavior is typically the corresponding context-specific measure. These findings call into question the current preoccupation with lottery measures of risk preference, and point to variation in risk perceptions as an understudied determinant of risky behavior.
The 2011 published version is more circumspect, and emphasizes the ability of the general and specific questionnaire responses to predict lottery choices
and self-reported behavior. It omits discussion of the inability of the lottery
choices (and fitted Bernoulli function parameters) to predict respondents’ choices regarding health, medicine, and sports. Sutter et al. (2013, 510) also
report that in contrast to the predictive content of laboratory measures of
impatience, “the experimental measures for risk and ambiguity attitudes are at best weak predictors of field behavior for the age groups we study” (10-18 year-olds).
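For readers unfamiliar with how a single CRRA coefficient is inferred from a Holt-Laury menu, as in the Anderson and Mellor study cited above, the following sketch illustrates the logic. The payoff amounts are the low-stakes values commonly quoted from Holt and Laury (2002); everything else (the code and the grid of r values) is our own illustrative reconstruction, not a description of any particular study's estimation procedure.

```python
import numpy as np

def crra(x, r):
    """CRRA Bernoulli function u(x) = x**(1 - r) / (1 - r), with the log limit at r = 1."""
    return np.log(x) if np.isclose(r, 1.0) else x ** (1 - r) / (1 - r)

# Assumed low-stakes Holt-Laury payoffs: "safe" option A pays 2.00 or 1.60,
# "risky" option B pays 3.85 or 0.10; in row k the high payoff has probability k/10.
A_HI, A_LO, B_HI, B_LO = 2.00, 1.60, 3.85, 0.10

def switch_row(r):
    """First row at which expected utility of B exceeds that of A for a CRRA parameter r."""
    for k in range(1, 11):
        p = k / 10
        eu_a = p * crra(A_HI, r) + (1 - p) * crra(A_LO, r)
        eu_b = p * crra(B_HI, r) + (1 - p) * crra(B_LO, r)
        if eu_b > eu_a:
            return k
    return None

# Later switching (more safe choices) maps into a higher implied r, so an observed
# switch point brackets the subject's CRRA coefficient within an interval.
for r in (-0.5, 0.0, 0.5, 1.0):
    print(r, switch_row(r))
```

In this reconstruction a risk-neutral subject (r = 0) switches at row 5; switching later implies progressively higher values of r.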
4.2 Gambling

The gambling industry is sufficiently large and pervasive to warrant attention in any systematic account of decision making under risk. The National Research Council (1999) noted that over $550 billion was wagered in the United States alone. In 2009 US gaming industry revenue was $88.2 billion,
including $30.7 billion in casinos. Worldwide revenues the previous year were $358 billion (Ernst & Young [2011]). Some 80 percent of US adults report having engaged in gambling at some time in their lives, and a significant minority are heavy gamblers. Economists have invoked concave Bernoulli functions to explain insurance, and convex functions to explain gambling. For a strictly convex utility function, the certainty equivalent of any nontrivial gamble is strictly larger than its expected value. This fact can rationalize the acceptance of gambles with moderately negative expected values, such as many of those offered by the gaming industry. We saw in Chapter 2 that early theorists, such as Friedman and Savage,
favored this explanation, and their proposed Bernoulli functions included large convex segments. It is now time for some skeptical comments, emphasizing inconvenient implications.
Figure 4.1 shows the familiar shape proposed by Friedman and Savage (1948). The function is convex over the interval [b, c], presumably covering a wide range of possible lifetime incomes (or wealth) for a typical individual, or at least for some individuals. To maximize utility, such a person would choose the largest and most extreme gamble available with outcomes in this segment. For example, the 50-50 gamble with outcome either b or c gives expected
utility ½[u(b) + u(c)], the height at the midpoint of the line segment connecting the two points on the graph above b and c. One can see that that utility far exceeds u(½[b + c]), the height of the convex curve segment's midpoint, which represents the utility obtained by receiving the bet's expected value with certainty. The gap between those utilities represents the degree of unfairness the gambler is willing to accept.

Figure 4.1 Friedman-Savage Bernoulli function and the optimal gamble (schematic redrawn by authors).

Gambles with outcomes closer together within [b, c], other things equal, give lesser utility. Indeed, as first pointed out by Markowitz (1952b, 152-153) and later by John M. Marshall (1984), the optimal fair bet (or the optimal bet with a fixed degree of unfairness) is even more extreme. It involves only the two possible outcomes a and d outside the convex interval [b, c], as shown in the figure. The optimal bet outcomes satisfy u'(a) = u'(d) and lie on the same tangent line. Such bets allow the gambler to achieve the concave hull of the Bernoulli function, i.e., to enjoy the modified Bernoulli function where the tangent line replaces the segment of the original curve above [a, d].
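A toy calculation makes the preceding claims concrete. The quadratic function below is not the Friedman-Savage function itself, merely an arbitrary increasing convex function on a hypothetical segment [b, c]; any such function delivers the same ranking of gambles.

```python
# Hypothetical convex Bernoulli function on the segment [b, c]; all values are illustrative only.
b, c = 0.0, 100.0
u = lambda x: x ** 2  # increasing and convex on [b, c]

mean = 0.5 * (b + c)
print(u(mean))                   # utility of receiving the expected value with certainty
print(0.5 * u(b) + 0.5 * u(c))   # expected utility of the extreme 50-50 gamble on b and c

# Shrinking the spread around the same mean lowers expected utility, so within the
# convex segment the most extreme fair gamble is the preferred one.
for spread in (50, 30, 10):
    print(spread, 0.5 * u(mean - spread) + 0.5 * u(mean + spread))
```

The first two printed numbers show the gap discussed above: in this toy example the extreme 50-50 gamble yields double the expected utility of the certain mean, which is why such a decision maker would accept even quite unfair versions of it.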
The implications, of course, are absurd. Hardly anyone seeks lifetime bets
with such extreme outcomes. If such people were common, we should expect to see casinos or other institutions find ways to exploit their willingness to accept any moderately unfair bet, the larger the better. Casual observation suggests that the casino industry's preferred bet size is on the order of minutes to weeks of their customers' income, not years or decades. If a Friedman-Savage person were able to make a bet approaching her optimum, it seems
safe to guess that, win or lose, her Bernoulli function would afterwards have a
different shape. Anyone with a persistent demand for large unfair bets would find it hard to survive in a world that supplied them. An S-shaped Bernoulli function is even more problematic. It is the limit of a Friedman-Savage function when the inflection point b declines to negative infinity. The optimal bet for such a person involves an outcome a < b, which also tends to negative infinity. So this Bernoulli function predicts that people look for large lifetime gambles with an infinite downside. Again, this seems inconsistent with a viable gambling industry or with people who can function in a stable society. Of course, there is a second approach to explaining gambling. Perhaps gamblers are motivated by the entertainment value: the thrill, the heart rate and arousal, the bluff, the competition, the testosterone, and the show-off.
Economists since Alfred Marshall have taken this approach; more recent examples include Pope (1995), Anderson and Brown (1984), Wagenaar (1988),
McManus (2003), and Diecidue, Schmidt, and Wakker (2004).
Indeed, researchers outside of economics focus almost exclusively on
non-monetary motivations. Some of them rely on narrative rather than quantitative analysis. Freudians ascribe compulsive gambling to masochis -
tic self-punishment, autoerotic control, and even oedipal issues (Rosecra nce
[1988], 54). Other approaches point to events outside of the gambler's personal control and influence (Dickerson [1984]), a form of "safe" risk taking to relieve tension (Rosecrance, 58), and an "avenue of escape from routine and boredom" (Ezell [1960]). Ultimately, this is unlikely to be any more satisfactory than an approach looking solely at money end-states of gambles, but it
does illustrate the point that many people do recognize that the first approach is missing important aspects of gambling.
Even psychologists using quantitative methodologies do not necessarily use curved Bernoulli functions to explain gambling; reinforcement learning theory is a good example of alternatives. Lotteries in particular have been described as a "variable ratio" form of Skinnerian conditioning, offering reward at unpredictable intervals, resulting in the belief that winning is
inevitable (McCown and Chamberlain [2000], 69). This approach is boosted by studies showing that “state lotteries ... have increasingly changed their
structure to take advantage of cognitive biases and responses to reward.” For example, state lotteries now typically supplement the traditional single large,
low-probability prize with many smaller prizes with higher success probabil-
ities (National Research Council [1999]). Indeed, in the United States, some state lotteries have tickets that pay a prize less than the ticket price. Why
would someone be motivated to buy a lottery ticket by inclusion of some
outcomes that add only trivially to the expected value, and which yield a net loss even when they are realized? Evidently the lottery designers believe that such "winners" are more likely to increase their subsequent ticket purchases even though they experience a net loss (Wagenaar, 70).4 Relying on psychological literature (or perhaps on raw empirics), the gambling industry has devised ways to differentiate its products to appeal to people across age groups, socioeconomic classes, educational attainment, and gender. The first approach, convex segments of Bernoulli functions, seems confined to economics texts, and even there, convexity is more an inference from gambling than an explanation of it. Outside such nonexplanations, curvature of utility functions is conspicuous by its absence in the gambling literature. Nor were we able to find serious attempts to (a) empirically isolate the monetary and nonmonetary consequences of gambling and (b) then recombine them into a unified, and comprehensive, analysis of the phenomenon. A final possibility is that gamblers believe that they will benefit financially.
Although a few hustlers and card sharks actually do make money on average,
they (and the casinos) only succeed to the extent that most gamblers are willing to accept bets with negative expected value. Some of these ordinary gamblers may delude themselves into thinking that the odds are in their favor. Others, however, may indeed obtain a net benefit, either from the entertainment value
or from an option value of the sort explored in Chapter 6. In either case, curved Bernoulli functions have contributed little to our understanding of gambling.
4.3 Engineering

To engineers, "risk" denotes the probability of failure or malfunction of a system. Analysis of risk involves assessing how various factors (design, environmental, operational, hardware, or software) may contribute to failures or malfunctions. For example, NASA (the National Aeronautics and Space Administration [http://www.nas.nasa.gov/projects/era.html]) describes engineering risk assessment as follows:
[It] quantifies system risks through a combination of probabilistic analyses, physics-based simulations of key risk factors, and failure timing and
propagation models. ERA develops dynamic, integrated risk models to not only quantify the probabilities of individual failures, but also to learn about the specific systems, identify the driving risk factors, and guide designers toward the most effective strategies for reducing risk.
Similarly, Paté-Cornell (2007) describes it as follows: Engineering risk analysis methods, based on systems analysis and probability, are generally designed for cases in which sufficient failure statistics are unavailable. These methods can be applied not only to engineered systems that fail (e.g., new spacecraft or medical devices), but also to systems characterized by performance scenarios including malfunctions or threats. I describe some of the challenges in the use of risk analysis tools, mainly in problem formulation, when technical, human and orga-
nizational factors need to be integrated. This discussion is illustrated by four cases: ship grounding due to loss of propulsion, space shuttle loss caused by tile failure, patient risks in anesthesia, and the risks of terrorist
attacks on the US. I show how the analytical challenges can be met by the
choice of modeling tools and the search for relevant information, includ-
ing not only statistics but also a deep understanding of how the system works and can fail, and how failures can be anticipated and prevented. This type of analysis requires both imagination and a logical, rational approach. It is key to pro-active risk management and effective ranking of risk reduction measures when statistical data are not directly available and resources are limited. Once again, engineers clearly focus on risk as the probability of harm, and
not on risk as dispersion of outcomes. And once again the goal is to identify and mitigate risk factors, not to elicit parameters of a Bernoulli function.
A risk engineer’s job is done when she convincingly reduces the probability
of a bad outcome below some conventionally acceptable threshold (e.g., a "100-year event"), not when she maximizes some sort of weighted expectation of a value or Bernoulli function.

4.4 Insurance
The insurance industry is huge. In 2011 it collected $4.6 trillion in premiums world-wide, of which $1.2 trillion came from US customers (Insurance Information Institute [2014], 3). Almost all insurance contracts offer the customer a negative actuarial value, and economics textbooks often interpret
the fact that so many people purchase them as evidence of widespread risk aversion. That explanation can be questioned for several reasons. First, in their actuarial calculations as well as in their marketing, insurance companies focus on the possibility of harm, not on dispersion per se. Textbook risk-averse consumers would prefer contracts that eliminate deviations from the mean in the positive as well as the negative direction, and such contracts would be very inexpensive for suppliers. Yet few insurance policies aim primarily at the dispersion of outcomes in this sense. Instead, the usual contract covers just one tail of the outcome distribution — protecting the policyholders in case of a loss, but rarely penalizing them in case of a gain. A second line of questioning builds on the first. The usual one-sided contract resembles a put option, and its actuarial value does not fully capture its value to the consumer. We will argue in Chapter 6 that this observation leads to a simpler and more general explanation of customer demand for insurance. For now, suffice it to say that insurance contracts help reduce customers’ costs of contingency planning — they chop some branches off the decision tree that must otherwise be analyzed and planned for. An auto insurance policy matches reimbursement to the loss from an accident, allowing the policyholder to ignore that contingency after the premium has been paid. A life insurance policy matches the payout to the family’s loss of the breadwinner’s income, again allowing the family to ameliorate the worries about such a contingency. Indeed, loss contingencies constitute the standard sales pitch used by insurance salesmen everywhere. A third line of questions concerns the presumed shape of Bernoulli func-
tions. As noted earlier, the lower tail might be convex, not concave, as in the
S-shaped Bernoulli function of Fishburn and Kochenberger (1979). A major tenet of prospect theory is that individuals have convex preferences over losses, and this suggests that most people would not buy insurance to cover losses even at a moderately subsidized price.6 Finally, demand for insurance might depend on the social context, not just on personal traits captured in a Bernoulli function. Explicit insurance policies do not exist in primitive societies, although some of their social arrangements can be interpreted as implicit systems of insurance. Formal insurance contracts developed only in certain specific civilizations. It is unclear whether
there is some intrinsic desire to purchase insurance for home, car, life, or
health, or whether the desire arises instead from some social learning process, possibly abetted by the industry’s marketing efforts and legal (e.g., auto) and contractual (e.g., mortgaged home) requirements.
Einav et al. (2012) is an interesting attempt to assess whether consumer risk preferences are consistent across domains, as reflected in choices regarding insurance and/or investment. However, their measures of riskiness for various
kinds of insurance policies and investment options had several shortcomings
(as regards the search for innate Bernoulli curves). Their measures were ordi-
nal (and constructed separately for each domain) and each ordering relied for construction upon a great deal of subjectivity by the authors. As a result these risk measures might have little to do with the Arrow—Pratt risk coefficients of any presumed corresponding Bernoulli functions. Einav et al. make this clear up-front by saying explicitly that their “model-free approach” does not examine “the stability of the absolute level of risk aversion” (2609). Similarly, they
conclude that “Our findings of a reasonable degree of consistency in individuals’ relative ranking of risk preferences across domains does not preclude a
rank preserving difference in the entire distribution of willingness to bear risk across domains” (2609). As for their specific findings, they report that higher (0.24—0.55) correlations occur between the constructed ordinal measures for the clearly related domains of health/drug/dental insurance, while the correlation between the 401(k) measure and the health insurance measure falls to the
range of around 0.06 (data from their table 3A, 2620). This dramatic context dependence supports the arguments and other findings of this book.

4.5 Real estate
Real estate constitutes a large component of wealth and investment in most modern economies. A negative relationship between price variance and real estate development is a common result from many studies (Holland, Ott, and
Riddiough [2000]; Sivitanidou and Sivitanides [2000]; and Sing and Patel [2001]), which might be interpreted as evidence in favor of aversion to dispersion risk. However, there is a good alternative explanation. Investments in real estate are in large part irreversible: often one can't recover much of the money sunk into developing land, or even fully recover the purchase price. The literature on real options (Dixit and Pindyck [1994]; Trigeorgis [1996]; and Brennan and
Trigeorgis [2000]) shows that even risk-neutral investors will delay an irre-
versible investment more when outcome dispersion increases. We develop this point more fully in Chapter 6.
Periodic overbuilding in the industry, often attributed to irrational behavior of builders, has been addressed in Grenadier (1996, 1653) who presents a
rational basis for bursts of construction as prices begin to fall:

This article develops an equilibrium framework for strategic option exercise games. I focus on a particular example: the timing of real estate
development. An analysis of the equilibrium exercise policies of developers provides insights into the forces that shape market behavior. The model isolates the factors that make some markets prone to bursts of concentrated development. The model also provides an explanation for
why some markets may experience building booms in the face of declining demand and property values. While such behavior is often regarded
as irrational overbuilding, the model provides a rational foundation for such exercise patterns.
Bulan, Mayer, and Somerville’s (2009) analysis of 1,214 condominium projects
in Vancouver, Canada during 1979-1998 finds that empirical evidence supports the risk-neutral predictions of real options theory: (1) both idiosyncratic
and systematic risk lead developers to delay new real estate developments; (2) as suggested by Caballero (1991), competition weakens or eliminates the negative relationship between uncertainty and investments; and (3) builders
appear more likely to build when prices begin to fall.

4.6 Bond markets
Risk analysis of bonds typically focuses on the probability of default and its
possible size. Thus, as in the insurance industry, practitioners evidently do not perceive risk as dispersion per se but rather as the possibility of default or loss.
Does this perception reflect actual market practice? That, of course, is the
key question. To answer it, we first consider the well-known bond ratings by
the big three rating firms, Standard and Poor's, Moody’s and Fitch. They all define credit risk as the likelihood of default and the associated financial loss not as the dispersion of outcomes. For example, Ratings assigned on Moody’s global long-term and short-term rating scales are forward-looking opinions of the relative credit risks of financial obligations issued by non-financial corporates, financial institutions structured finance vehicles, project finance vehicles, and public sector entities. Long-term ratings are assigned to issuers or obligations with an original maturity of one year or more and reflect both on the likeli-
hood of a default on contractually promised payments and the expected
financial loss (or impairment) suffered in the event of default. Short-term
ratings are assigned to obligations with an original maturity of thirteen
months or less and reflect the likelihood of a default on contractually promised payments.
(Moody’s [2012], 4)
Accordingly, in their Global Long-Term Rating Scale, obligations rated Aaa "are to be of the highest quality, subject to the lowest level of credit risk," and obligations rated Aa "are judged to be of high quality and are subject to very low credit risk," and so on. Parallel criteria apply to other rating services
offered by Moody’s
(5). Inputs to their rating services include (a) baseline
credit assessments, (b) loss given default assessments, (c) covenant quality assessments,
(d) speculative grade liquidity ratings, and (e) bank financial
strength ratings, etc. (27-31). Dispersion of outcomes that may lead agents with concave Bernoulli functions to demand
risk premiums
is not part of the discussion. Instead, the
ratings are a matter of judgment, and reflect mainly the agency’s assessment of the chances that the borrower will default on the payment of coupons and/ or the principal. Even risk-neutral investors demand a higher promised yield on lower-rated bonds simply because they must be compensated for a higher
expected default rate. Fisher (1959) postulated that the corporate bond yield spread (the yield to maturity in excess of that on US Treasury bonds) is determined by two factors: chances of default and marketability of bonds. His regressions explained almost 75 percent of the cross-sectional variation in the yield spread using three variables representing the first factor (earnings variability, time elapsed since previous default, and leverage) and one for the second factor (the value of publicly traded bonds outstanding). Concavity of investors’ Bernoulli func-
tions was not considered; earnings variability matters, not because it directly
enters the investor’s utility function, but rather because it affects the probability of default and the associated costs.
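To convey the flavor of Fisher's cross-sectional exercise, the schematic regression below uses synthetic data and our own variable names; it reproduces neither his sample nor his coefficients, only the general form of a specification in which default-related characteristics and marketability, rather than investor utility curvature, account for yield spreads.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic cross-section of corporate bonds (illustrative only)

# Hypothetical regressors echoing Fisher's four factors, all entered in logs.
earnings_var = rng.lognormal(0.0, 0.5, n)   # earnings variability
years_solvent = rng.lognormal(2.0, 0.5, n)  # time elapsed since previous default
leverage = rng.lognormal(0.5, 0.3, n)       # debt relative to equity
bonds_out = rng.lognormal(3.0, 0.7, n)      # market value of publicly traded bonds outstanding

X = np.column_stack([np.ones(n), np.log(earnings_var), np.log(years_solvent),
                     np.log(leverage), np.log(bonds_out)])

# Made-up "true" relation plus noise, so the regression has something to recover.
log_spread = (0.5 + 0.4 * np.log(earnings_var) - 0.3 * np.log(years_solvent)
              + 0.5 * np.log(leverage) - 0.2 * np.log(bonds_out) + rng.normal(0, 0.2, n))

coef, *_ = np.linalg.lstsq(X, log_spread, rcond=None)
print(coef)  # recovered intercept and slopes on the four log regressors
```

Nothing in this calculation requires any statement about the concavity of an investor's Bernoulli function, which is exactly the point made in the text.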
Altman (1989, 918, Table 5 and Figure 1) examined various ratings for the
realized yield net of actual defaults. He found that net yield is monotonically increasing in bond rating, except for the two riskiest categories (B and CCC) where the link breaks three years after the bonds are issued. This monotonic link is a puzzle from the perspective of standard dispersion-based theory (e.g.,
CAPM, described briefly in the next section): if bond defaults are mainly idiosyncratic and uncorrelated with the stock market returns, then there should
be no bond risk premium and thus no monotonic link between bond yield and bond rating. As with other inconsistencies between observed behavior of financial mar-
kets and the received dispersion-as-risk theory, many solutions have been proposed for the observed link. These include (a) investors hold undiversified bond portfolios, (b) defaults are correlated with market returns, (c) market inefficiency, e.g., demand limitations due to risk-class investment restrictions, (d) the value placed on liquidity, and (e) interactions with interest rate risk.
Explanations (a) and (b) require the additional but unsupported assumptions that bond ratings are related to dispersion measures of risk, and that investors have concave utility functions. The other explanations do not require assumptions regarding Bernoulli functions.

4.7 Stock markets
Theory and practice in equity markets, more than in any other major industry, have been influenced by the standard economic theory of risky
choice. The influence begins with Markowitz's (1952a, 1959) portfolio theory, which primarily concerned these markets. Basic portfolio theory is built on the assumption that investors consider the mean and the variance of the probability distribution of returns, and no other characteristics, when they choose a portfolio of investments. This assumption follows from expected-utility maximization in important special cases, e.g., the quadratic utility (Bernoulli) function, or multivariate normal security returns.
Sharpe (1964) and Lintner (1965) worked out the equilibrium consequences of portfolio theory. The equilibrium results, commonly referred to as the capital asset pricing model (CAPM), are summarized in the equation

E(R_i) = R_f + β_i * P_M,   (4.1)

where E(R_i) is the expected return or yield on any traded security i, while R_f is the risk-free interest rate (often proxied by the yield to maturity on US Treasury securities) and P_M = E(R_M) - R_f is the market risk premium, the expected return on the aggregate supply of risky securities (often proxied by the historical yield on a broad stock index such as the S&P 500) in excess of the risk-free rate. Equation (4.1) says that securities differ in their expected returns only to the extent that they differ in their systematic risk β_i = Cov(R_i, R_M) / Var(R_M), the ratio of the covariance of security i's return with the market return to the variance of the market return. Moreover, the relationship in (4.1) is linear. The predicted linear relationship between returns on securities and their market risk, β_i, has been the subject of intensive and careful econometric scrutiny. Early results were largely favorable, but since the 1980s the data have been much less supportive.
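To fix the notation, the sketch below estimates β_i from return series and plugs it into equation (4.1). The return series are simulated placeholders; the only point is the mechanics of the covariance-variance ratio and of the linear pricing prediction.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 250  # number of simulated return observations (illustrative only)

r_f = 0.0001                                     # assumed per-period risk-free rate
r_m = 0.0004 + rng.normal(0.0, 0.01, T)          # simulated market returns
r_i = r_f + 1.3 * (r_m - r_f) + rng.normal(0.0, 0.008, T)  # a security with "true" beta of 1.3

beta_i = np.cov(r_i, r_m)[0, 1] / np.var(r_m, ddof=1)  # beta_i = Cov(R_i, R_M) / Var(R_M)
P_M = np.mean(r_m) - r_f                               # sample market risk premium
expected_r_i = r_f + beta_i * P_M                      # CAPM prediction from equation (4.1)

print(round(beta_i, 3), expected_r_i)
```

The empirical literature summarized next asks whether average returns actually line up with β estimated in this way; as the quotations make clear, by and large they do not.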
Two illustrious researchers, Eugene Fama and Kenneth French, summarize decades of empirical work as follows:
Like Reinganum (1981) and Lakonishok and Shapiro (1986), we find that the relation between β and average return disappears during the more recent 1963-1990 period, even when β is used alone to explain average returns. The appendix shows that the simple relation between β and average return is also weak in the 50-year 1941-1990 period. In short, our tests do not support the most basic prediction of the SLB (Sharpe-Lintner-Black) model, that average stock returns are positively related to market βs.
(Fama and French [1992], 428) And more than a decade later:
The attraction of the CAPM
is that it offers powerful and intuitively
pleasing predictions about how to measure risk and the relation between
expected return and risk. Unfortunately, the empirical record of the
model is poor — poor enough to invalidate the way it is used in applica-
tions. ... In the end, we argue that whether the model’s problems reflect
weaknesses in the theory or in its empirical implementation, the failure of the CAPM in empirical tests implies that most applications of the model are invalid. (Fama and French [2004], 25) The business press has reacted to these findings with articles such as "Is Beta Dead?" (Wallace [1980]). Black (1993) reported a positive relationship for the period 1931-1991 but virtually no relationship during 1966-1991; the mean returns on portfolios did not vary with their βs (market risk) during the latter 25-year period. Kothari, Shanken, and Sloan (1995) argue that beta
plays a significant role in determining equity returns, allowing room for other determinants also. A finance textbook summarizes the prevailing state of knowledge as follows:
Since William Sharpe published his seminal paper on CAPM (capital asset pricing model), researchers have subjected the model to numerous
empirical tests. Early on, most of these tests seem to support the CAPM's
main predictions. Over time, however, evidence mounted indicating that the CAPM had serious flaws.
(Smart, Megginson, and Gitman [2004], 210-212)
The textbook lists the difficulties of properly testing (read: finding empirical evidence in favor of the validity of) CAPM, which include: (1) unobservability of the key explanatory variable E(R_M), the expected return of the market portfolio; (2) unobservability of β, which typically is estimated as a regression coefficient using historical data; and (3) the difficulty of identifying the correct risk-free rate of return (e.g., whether returns on US Treasury bills of 1, 10, or 30-year maturities are more appropriate for this purpose).7 The list continues, but it does raise doubts about investors having well-defined
Bernoulli functions and risk being perceived by them as variance of returns. In an interesting twist, some financial economists shift the burden of proof. Turning the orthodox Fisher-Neyman-Pearson tradition of statistical testing on its head, they assert that the CAPM is true unless it is overwhelmingly rejected as the null hypothesis. For example, Brealey and Myers (2003), one of the best-known textbooks in finance, after reviewing the literature, write:

What is going on here? It is hard to say. ... There is no doubt that the evidence on the CAPM is less convincing than scholars once thought. But it
will be very hard to reject the CAPM beyond all reasonable doubt. Since data and statistics are unlikely to give final answers, the plausibility of the CAPM will have to be weighed along with the empirical "facts." (200-202)
Similarly, Chan and Lakonishok (1993, 60-61) "do not feel that the evidence for discarding beta is clear-cut and overwhelming." A more defensible denial
of the empirical problem is the reminder, "Absence of (statistical) evidence is not evidence of absence of the relationship" (Ziliak and McCloskey [2008]). In summary, contrary to widespread belief, US stock market data hardly provide a ringing endorsement for CAPM, which is founded on the idea that investors have an aversion to dispersion in portfolio returns. A more direct implication of concave Bernoulli functions is that investors will diversify their portfolios. Students in undergraduate and MBA finance courses (or even in some probability and statistics courses) are shown how
diversification allows the investor to reduce portfolio variance without sacrificing expected return. This is a pedagogical tradition going back at least to Markowitz (1952a).
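The pedagogical point is easily verified. With n uncorrelated securities sharing the same mean and variance, an equal-weighted portfolio keeps the mean while its variance falls like 1/n; the parameter values below are illustrative, not estimates.

```python
import numpy as np

MU, SIGMA = 0.08, 0.20  # assumed identical mean return and standard deviation per security

# Equal-weighted portfolio of n uncorrelated securities: the mean stays MU,
# while the portfolio standard deviation falls to SIGMA / sqrt(n).
for n in (1, 5, 20, 100):
    print(n, MU, round(np.sqrt(SIGMA ** 2 / n), 4))
```

This is the "free lunch" referred to below: dispersion shrinks while expected return is untouched, which is why the observed lack of diversification is so awkward for dispersion-based theories of risk.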
So what does empirical research tell us about diversification by individual investors? Numerous articles document astonishing departures from meanvariance efficiency, even for large and institutional investors for whom the cost of diversification is almost negligible. Holderness (2009, 1401) reports: Given that 96% of a representative sample of CRSP and Compustat firms
have large* shareholders and these shareholders on average own 39% of the common stock ... it is now clear that atomistic ownership is the excep-
tion, not the rule, in the United States.
A similar situation prevails outside the United States. Whether this worldwide
absence of stock diversification can be explained by control issues (see Shleifer
and Vishny [1997]) and the information benefits of concentration remains an
open question.

Examining household wealth portfolios, Worthington (2009, 18) reports: "Australian household portfolios have very low levels of asset diversification ... the behavior observed in household portfolios appears to bear little relation to the central predictions of classic portfolio theory" and

One major finding is that the demographic, socioeconomic and risk attitude factors that so persuasively impact upon our heuristic measures of diversification bear little relation to the factors influencing the proportion of assets held in market assets (bank accounts, superannuation, equity and cash investments, life insurance and trust funds).

Australia in this respect is typical, not exceptional. Numerous studies reach essentially similar conclusions regarding lack of household asset portfolio diversification: for the United States (e.g., Bertaut and Starr-McCluer [2000], 2) and Campbell ([2006], 1590), France (e.g., Arrondel and Lefèvre [2001], 16), the Netherlands (e.g., Alessie, Hochguertel, and van Soest [2004], Table 1), the United Kingdom (e.g., Banks and Smith [2002], 3), Germany (e.g., Barasinska, Schaefer, and Stephan [2008], 2, 21), and India (e.g., Cole et al. [2012], 3). In summarizing the contents of their edited volume regarding household portfolio diversification, Guiso, Haliassos, and Jappelli (2000, 8, 15-16) write: "The country studies find that the extent of diversification between and within risk categories is typically quite limited. ... So far, asset pricing models have had limited success in explaining limited asset market participation and portfolio diversification."

Such widespread failure of the prediction is troubling because diversification is probably the closest thing to a free lunch in all of economics. It is a virtually costless way to reduce dispersion, and investors with concave utility functions should be lined up around the block to do it. As noted, there are possible explanations for investors who are large enough to own a major share of a firm's equity, but the overall lack of diversification by ordinary investors remains an embarrassment for standard theory.

4.8 The interest parity puzzle

Li, Ghoshray, and Morley (2012, 167) begin with the commonplace observation, "Uncovered interest parity (UIP) is one of the most important theoretical relations used in analytical work in both international finance and macroeconomics. It is also a key assumption in many of the models of exchange rate determination." We might add that the UIP puzzle also spotlights the main themes in this chapter. To explain, we begin with the equation:

Appreciation = a + b*InterestDifferential + error,   (4.2)

where Appreciation is the percentage gain (or loss, if negative) in the exchange rate of one currency (e.g., the Japanese yen) for another (e.g., the US dollar) over some interval of time, and InterestDifferential is the difference between the two domestic interest rates during that period (e.g., the yen-denominated Japanese government debt yield minus the US Treasury Bond yield). Uncovered Interest Parity asserts that a = 0, b = 1, and the error has mean zero; that is, on average the currency appreciation exactly reflects the interest differential. The standard explanation is that if UIP did not hold,
then risk-neutral investors with rational expectations could on average earn arbitrarily large arbitrage profits with zero net investment. For example, if the expected appreciation over the next year is less than the interest rate differential, then investors should borrow the first currency and use the proceeds to buy the second currency and earn interest on it. At the end of the period, investors should sell the second currency and repay the borrowing cost and pocket profits to the extent that appreciation indeed falls short of the interest differential. Does it work in practice? Dozens of economists have run the regression suggested by (4.2), taking care to use appropriate econometric techniques and appropriate data. The results are startlingly bad. Estimates of b are seldom in the vicinity of 1.0 as required by UIP. Most of the estimates are of the
wrong sign, and a meta-study by Froot and Thaler (1990) reports an average
estimate of b = -0.88! This enduring empirical inconsistency is known as the interest parity puzzle.
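Equation (4.2) is an ordinary linear regression, so the empirical exercise is simple to reproduce in outline. The currency data below are simulated under the UIP null purely to show the mechanics of estimating a and b and comparing them with a = 0 and b = 1; with field data, as just noted, the estimated b is usually far from 1 and often negative.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 120  # simulated monthly observations (illustrative only)

interest_diff = rng.normal(0.002, 0.001, T)  # hypothetical interest differential
# Appreciation generated under the UIP null: a = 0, b = 1, mean-zero error.
appreciation = 0.0 + 1.0 * interest_diff + rng.normal(0.0, 0.02, T)

X = np.column_stack([np.ones(T), interest_diff])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, appreciation, rcond=None)

print(a_hat, b_hat)  # close to 0 and 1 here by construction; field estimates typically are not
```

The GARCH-in-mean refinement discussed below simply adds a time-varying volatility regressor to this same equation.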
It will not surprise most readers that economists trying to solve the puzzle usually invoke risk aversion. In a stationary world, risk-averse investors with concave utility functions would limit their borrowing and not drive expected appreciation into complete alignment with the interest differential. Thus risk aversion can explain a nonzero value of the intercept term a as a risk premium, and can also explain values of b somewhat less than 1. But empirical work typically yields estimates of a and b that vary with the time period as
well as the currency pairs. It has thus become customary to refer to the UIP
residual, the gap between observed appreciation and the interest differen tial,
as a time-varying risk premium. It is a plug, not an explanation.
The phrase suggests that the residual is related to time series that reflect the amount (and direction) of risk or the degree of risk aversion. The published literature, however, contains remarkably little evidence supporting such a relation. Li, Ghoshray, and Morley (2012) is the bravest attempt that we've seen. It supplements the intercept term in equation (4.2) with a term of the form c*sigma(t), where the time-varying quantity of risk sigma(t) is a GARCH-M estimate of the conditional standard deviation. Other things being equal, the coefficient c should be negative if investors are risk averse. The authors run the regression on data from ten countries, with mixed results. Estimates of a, the constant component of the risk premium, range from 0.13 (and significant at p = 0.01) for Russia to -0.04 (marginally significant, p = 0.10) for the United Kingdom; the other significant (p = 0.01) constant risk premium estimates are for Japan at 0.06 and Thailand at -0.01. The estimate of b is anywhere near its UIP value of 1.00 only for Russia and Thailand, and is significantly negative at -2.34 for the United Kingdom and -2.97 for Japan. The estimate of the time-varying risk parameter c is consistent with standard risky choice theory (negative, large, and significant) only for Japan and Russia. For Brazil c has the right sign and statistical significance but has only a tenth of the magnitude. The estimate of c has the wrong sign for most of the other countries, significantly so for Malaysia, and with large magnitude (1.88, with p = 0.05) for the United Kingdom. The authors conclude that "emerging countries work better in terms of UIP than developed countries" and "including the risk premium in UIP improves the precision of the estimation, but it is still hard to explain the failure of UIP even using a sophisticated measure of risk" (168). We conclude that the interest parity puzzle endures and that, so far, concave Bernoulli functions and dispersion measures of risk have had little success in explaining it.

4.9 The equity premium puzzle

Empirical studies of stock markets uncovered another enduring inconsistency, known as the "equity premium puzzle." It refers to difficulties in reconciling empirical estimates of the market risk premium P_M = E(R_M) - R_f in equation (4.1) to its theoretical determinants. The problem begins with the wide range of the market risk premium and the lack of consensus about its empirical magnitude. Every textbook on finance and investments includes chapters on how to use the CAPM in practice; to do so requires a numerical value of P_M. Fernandez, Aguirreamalloa, and Avendano (2012) report a recent annual survey of the values used by professors and finance practitioners. In the United States 2,223 answers ranged from P_M = 1.5 to P_M = 15.0 percent with a mean of 5.5, median of 5.4, and standard deviation of 1.6 percent. Corresponding numbers for Germany are 281, 1.0 to 17.0, 5.5, 5.0, and 1.9, and for the United Kingdom they are 171, 1.5 to 22.0, 5.5, 5.0, and 1.9. The range of equity premia used by respondents to the survey was fairly tight in Lithuania (max - min = 2.3) but quite wide in Brazil (max - min = 28.2). Fernandez, Aguirreamalloa, and Avendano note that the wide range in part reflects different estimation methods. Some respondents rely more on a priori theory, and those that rely mainly on historical data may use different time periods. Of course, ex post risk premia vary greatly from year to year, depending on the 12-month performance of the broad stock market index and the yield on government bonds.

In their original identification of the equity premium puzzle, Mehra and Prescott (1985, table 1) begin with the observation that, on average over 1889-1978, a broad index of US common stocks (Standard and Poor 500) earned 6 percent per annum more than government bonds. So their empirical value of P_M is on the order of 6 percent; today the consensus might be closer to 5.5 percent as noted above. On the other hand, the standard theory of risk aversion suggests that P_M is a function of returns dispersion, Var(R_M), and the concavity of the marginal investor's Bernoulli function. Mehra and Prescott assume that the Bernoulli function of a representative agent is of the constant relative risk aversion form U(c) = c^(1-r) / (1 - r). Given historical values of Var(R_M) and other financial variables and plausible values of r and other parameters such as the discount rate, they show that intertemporal consumption maximization implies that P_M is at most 0.4 percent, an order of magnitude smaller than the empirical value. In the editor's preface to Handbook of Equity Premium, Mehra (2008, xix) concludes:

The puzzle cannot be dismissed lightly because much of our economic intuition is based on the very class of models that fall short so dramatically when confronted with financial data. It underscores the failure of paradigms central to financial and economic modeling to capture the characteristic that appears to make stocks comparatively riskier.

There have been many attempts to explain away this puzzle. Some begin by noting (as in the previous section) that historical average ex post returns are highly imperfect estimates of expected returns. Indeed, expectations are subject to continual change, and the ex post averages can only be estimated from
stale data whose relevance to the present can be questioned. Estimating P_M
over a single decade yields estimates ranging between 0.3 percent and 19 per-
cent. Extending the sample into more distant history increases its susceptibil-
ity to parameter shifts. The US economy as a whole, and the New York Stock
Exchange in particular, have done exceptionally well over the past century,
not only to have survived but also flourished (Brown, Goetzmann, and Ross
[1995]). However, their past may not be representative of either the actual
future or current expectations about the future. Of course, this line of argu-
ment renders the theory essentially untestable.
Gabaix (2012) presents a new version (due initially to Rietz [1988], and more recently to Barro [2006]) of the unobservable expectations argument. He shows how to incorporate a time-varying belief about the imminence of rare disaster into a representative agent model of financial markets. He lists numerical values for 15 variables used in a simplified calibration model, including the value of 4 for the coefficient of relative risk aversion (CRRA, discussed more in the next section). The simplified model is able to produce an equity premium of about the right magnitude, as well as account for nine other known puzzles in macro-finance. However, like other models with a CRRA representative agent, Gabaix's model has the unfortunate implication that asset prices increase when the disaster probability increases. He is able to patch this by introducing additional preference parameters in the spirit of Epstein and Zin (1989). It is unclear to us how one could validate such a model from data.
Other attempts to deal with the puzzle emphasize neglected theoretical assumptions and empirical complications. These include market failure due to adverse selection and moral hazard, friction in the forms of transaction costs and liquidity constraints, imperfect and incomplete markets, tax laws, individual irrationality, and inevitably, prospect theory. Yet, the puzzle has resisted all these attempts for many years (Kocherlakota [1996]; Mehra [2003]; and Mehra and Prescott [2003]). So far, each proposed explanation sticks to the idea of aversion to dispersion risk, and none considers questioning the theoretical or empirical underpinnings of that assumption.
The equity premium puzzle has trickled down to the street, where Investopedia's explanation captures the current wisdom without questioning the underlying theory:

The equity premium puzzle is a mystery to financial academics. According to some academics, the difference is too large to reflect a "proper" level of compensation that would occur as a result of investor risk aversion; therefore, the premium should actually be much lower than the historic average of 6%. More recent extensions to the puzzle attempt to offer a different rationale for explaining the EPP, such as investor prospects and macroeconomic influences. No matter the explanation, the fact remains that investors are being rewarded very well for holding equity compared to government bonds.

4.10 Risk aversion in aggregate model calibrations

For various a priori reasons, Mehra and Prescott (1985) rule out assuming that investors are extremely risk averse, but some later studies argue that the coefficient of relative risk aversion r for a typical agent may exceed 10.0, and that (in conjunction with other tweaks) this could rationalize the observed market risk premium. Gabaix used r = 4, which is well beyond the range admissible in the preceding literature. Calibrated models of aggregate consumption are also used in labor economics and business cycles, and those models imply quite different levels of risk aversion, as we shall explain next.
An influential strand of modern macroeconomics combines recursive (or dynamic) general equilibrium models with computational methods to perform policy simulations. Such computational macroeconomic experiments (to use Kydland and Prescott's terminology) require a concave Bernoulli function to represent utility of consumption each period, and for computational purposes they usually assume it takes the CRRA form noted earlier, c^(1-r)/(1-r) (see, for example, Kydland and Prescott [1982]). As in Mehra and Prescott, a key parameter is the numerical value of r, the coefficient of relative risk aversion (see the Appendix to Chapter 2). Both author pairs appeal to previous authors and to stylized facts from the naturally occurring economies (e.g., the growth rate of per-capita consumption and the real return to physical capital) to conclude that:

This is larger than the value of 1.5 that we used in our previous research. The problem with ... 1.5 is that the resulting calibrated value of θ [a parameter of worker commuting time] exceeds one. This would be inconsistent with the theory.
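To see concretely why values of r in this modest range cannot generate anything like a 6 percent premium, here is a rough back-of-the-envelope sketch in Python. It uses the textbook lognormal approximation (premium roughly equals r times the covariance of consumption growth with the market return); the moment values are illustrative assumptions of ours, not figures from the studies discussed above.

# Back-of-the-envelope sketch (ours): CRRA-implied equity premium under the usual
# lognormal approximation, premium ~= r * cov(consumption growth, market return).
# The moments below are illustrative assumptions, not numbers taken from the text.
sigma_c, sigma_m, corr = 0.036, 0.165, 0.4   # assumed volatilities and correlation
cov_cm = corr * sigma_c * sigma_m            # ~0.0024

for r in (1.5, 2.0, 4.0, 10.0):
    print(r, round(r * cov_cm, 4))           # 0.0036, 0.0048, 0.0095, 0.0238 -- far below 0.06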
[Figure 6.2 (cont.): axes show increments and decrements in thousands of dollars.]
what those complications imply for the firm’s risky choices. Since tax liabilities are non-linear, they imply a post-tax value that is concave in pretax value.
That is, given the tax laws, a firm that maximizes shareholder value (and thus
is "risk neutral") chooses as if it had a concave Bernoulli function defined on pretax value. For example, it is willing to pay to reduce pretax variability. The argument is shown graphically in Figure 6.3. A similar argument, shown graphically in Figure 6.4, applies to another important contracting cost:
bankruptcy. The costs of bankruptcy are generally a nonlinear function of
pretax firm value, and they impart a hedging motive. Once again, they lead to a concave revealed Bernoulli function in pretax firm value, even though the firm remains risk neutral with respect to net value to shareholders. As bankruptcy — possible corporate death — shapes revealed risk preferences, so can the prospect of biological death. In his famous study of cooperation among vampire bats in Puerto Rico, Wilkinson (1984) obtains a concave revealed Bernoulli function for grams of blood consumed, assuming (in effect) that bats (or their "selfish genes") have risk-neutral intrinsic preferences for
individual survival. The underlying idea is diminishing marginal utility: a hungry bat gains greater fitness from a given gram of blood than a well-fed bat. Caraco (1981) reports that foraging strategies of birds (dark-eyed juncos) reveal concave Bernoulli functions in terms of calories gleaned when the mean comfortably exceeds the threshold needed for survival, but when the mean level falls short of the survival threshold, then the revealed Bernoulli
functions become convex. An explanation is that calories that don’t push the total above the threshold are not worth much.* Similar reasoning applies to bankruptcy constraints, as we will see in Section 6.4 below.
Figure 6.3 Smith and Stulz (1985) figure 1.
V_i: pre-tax value of the firm without hedging if state i occurs.
E(V): expected pre-tax value of the firm without hedging.
E(T): expected corporate tax liability without hedging.
E(T:H): corporate tax liability with a costless, perfect hedge.
E(V-T): expected post-tax firm value without hedging.
E(V-T:H): post-tax firm value with a costless, perfect hedge.
C*: maximum cost of hedging where hedging is profitable.
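The Jensen's-inequality logic behind Figures 6.3 and 6.4 can be checked in a few lines of Python. The sketch below is ours; the tax schedule and numbers are invented for illustration and are not taken from Smith and Stulz.

# Sketch (ours): with a convex (progressive) tax schedule, expected after-tax value
# rises when pre-tax variability falls, so even a risk-neutral firm will pay to hedge.
import numpy as np

def after_tax(v):
    # hypothetical schedule: 20% tax up to 100, 40% on the excess above 100
    tax = 0.2 * np.minimum(v, 100) + 0.4 * np.maximum(v - 100, 0)
    return v - tax

rng = np.random.default_rng(0)
risky = rng.uniform(50, 150, 100_000)     # volatile pre-tax value with mean 100
hedged = np.full_like(risky, 100.0)       # perfectly hedged at the same mean

print(after_tax(risky).mean())            # ~77.5
print(after_tax(hedged).mean())           # 80.0 -> the firm would pay up to ~2.5 to hedge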
6.3 Context as an opportunity set
Stigler and Becker’s (1977) famous “De Gustibus ...” paper takes a strong stand on how to model context-dependent choice. It recommends against
postulating tastes that change from one context to the next. Since economists
have no theory for how preferences change, such an “explanation” has no economic content, and no predictive power. Instead, Stigler and Becker recommend that economists maintain the hypothesis that intrinsic preferences
are the same over time and across individuals, and that we focus our attention
on how contexts affect implicit prices and incomes (i.e., opportunities). This focus enables us to deploy economic analysis and make testable predictions.
To see how the suggestion can be implemented, consider a homeowner shopping for insurance. What additional costs would she incur in the event
of fire, theft, or accident? It's not just the cost of replacement that matters, but also the time cost and aggravation of making temporary arrangements, and the increased difficulty in getting to work or school and, more generally, in meeting contractual obligations. Such considerations can be captured in contingent opportunity sets, and they lead to new predictions, e.g., that homeowners with larger mortgages will carry more life insurance and less discretionary fire insurance. More generally, insurance simplifies one's life by reducing the number, diversity, and cost of contingency plans, and indirectly expands the opportunity set. It is hard to see how Bernoulli functions can capture these important considerations, or even explain the purchase or structure of life insurance.
Or consider gambling. Pioneers such as Friedman and Savage (1948) thought it could be explained by convex segments of unobservable Bernoulli functions, but six decades of empirical search have not shed any light on stable preferences of that sort. The opportunity set approach to risk redirects attention to potentially observable considerations such as bailout options. For example, one might predict that a low-income member of a wealthy family is more likely to be a high roller because winning a large amount could give him more clout as well as wealth, while losing a large amount would only reinforce his current lowly status without seriously threatening his survival.

Figure 6.4 Smith and Stulz (1985) figure 2.
V_i: pre-tax value of the firm without hedging if state i occurs.
F: face value of the debt.
E(V): expected pre-tax value of the firm without hedging.
E(V_N): net expected post-tax value of the firm without hedging.
E(V_N:H): net expected post-tax value of the firm with a perfect, costless hedge.
E(B): expected bankruptcy cost without hedging.
E(B:H): expected bankruptcy cost with perfect hedging (zero in this case).
C*: maximum cost of hedging where hedging is profitable.
Stigler and Becker illustrate their point by analyzing habits and addictions, and advertising and fads. Risk aversion and risk preference are the first suggestions in their list of possible further applications, but they never indicate how to proceed with those applications. This chapter can be read as an attempt to implement Stigler and Becker’s suggestion. Opportunity sets are potentially observable, while Bernoulli func-
tions (and subjectively weighted probability curves) are not. Different levels and kinds of risk change the opportunity sets available to a decision maker in different ways, yielding potentially testable predictions. Some of the opportunity set changes can be analyzed as real options, and that theory has advanced
considerably since 1977 (e.g., Dixit and Pindyck [1994]; Trigeorgis [1996]).
6.4 Net versus gross payoffs
To formalize this general perspective, we need to say more about payoffs, opportunity sets, and preferences. Net payoff refers to quantities that really matter to the decision maker (DM), e.g., real disposable income for a breadwinner, or overnight survival probability for a vampire bat. Gross payoff refers to quantities arising directly from the DM's choices, e.g., nominal pretax earnings for the breadwinner, or grams of blood consumed by the bat. Opportunity sets describe the feasible net payoffs given relevant constraints and contingencies, and can be specified in various ways. In this section we will consider opportunity sets defined by simple production functions that specify the net payoff y = f(x) as a function of gross payoff x. Such functions easily capture the impact of taxes and subsidies, and we will see that they also can capture more subtle contingencies such as contests, fiduciary responsibilities,
and social status, not to mention the metabolic constraints of vampire bats. Let the Bernoulli function U represent the DM’s cardinal preferences over net payoffs y. Those preferences are called intrinsic (or innate or true). Given
opportunities defined by a production function y = f(x), let revealed preferences (sometimes called induced or indirect) over gross payoffs be represented by the Bernoulli function u(x) = U(f(x)).
For the sake of parsimony, for the rest of this chapter we shall treat the decision maker as if she were risk neutral in net payoff, i.e., we shall assume
that her intrinsic Bernoulli function is linear, or (without further loss of generality) that U(y) = y. Even if the DM has more complex intrinsic preferences, the linear approximation should be quite good when the DM is dealing with small-to-moderate stakes and has access to reasonably efficient financial markets. For example, a gain or loss of $1,000 today implies a lifetime gain or loss of only a few nickels in daily consumption. Another advantage of assuming linearity is that it makes it much more plausible that a particular risky choice can be separated from the whole set of lifetime choices. A pleasant consequence of assuming intrinsic risk neutrality is that the revealed Bernoulli function precisely reflects the production function defining the opportunity set. That is, u(x) = U(f(x)) = f(x). In this case, the explanatory burden is 100 percent on the opportunity set (here the production function) and zero percent on intrinsic preferences, as recommended by Stigler and Becker. In the rest of this section we show how to construct revealed Bernoulli functions of any desired shape. Readers who see the point should feel free to skip over examples that don't interest them.

Concave cases

As a
first illustration, consider the revealed Bernoulli function for a DM
with observed pretax income x ≥ 0 whose intrinsic utility is linear in after-tax income y. Then her revealed preferences depend on the tax regime. Suppose that initially the tax rate t(x) is min{0.25, 0.01x²}, that is, it is quadratically progressive but capped at 25 percent. The cap binds at x = 5, and at lower gross income her revealed Bernoulli function is u(x) = f(x) = x(1 - t(x)) = x - 0.01x³. Her revealed coefficient of absolute risk aversion (ARA) here is -u''(x)/u'(x) = 0.06x/(1 - 0.03x²), which rises smoothly from 0 at x = 0 to 1.2 at x = 5. At higher gross income, the cap binds and the revealed coefficient falls back discontinuously to 0, correctly reflecting the intrinsic ARA coefficient of -U''(y)/U'(y) = 0/1 = 0.
Now suppose that we change the tax regime, e.g., by increasing the cap
from 25 percent to 40 percent, or by decreasing the tax progressivity coefficient from a = 0.01 to a = 0.005, or by changing progressivity from quadratic
to linear. Clearly, the revealed ARA coefficient will change as well, and in a predictable manner. For example, if the new context is the same as the old
except that now a = 0.005, then the new revealed ARA at x = 5 will be 0.03x/(1 - 0.015x²) evaluated at x = 5, which is 0.24, only a fifth of its value in the old context. Moreover, revealed ARA no longer drops to zero at slightly higher gross incomes, but instead continues to rise gradually until the cap binds at around x = 7.07.
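The arithmetic in this first example is easy to verify symbolically. The following sketch is ours; it uses sympy to reproduce the revealed ARA figures under the two progressivity coefficients just discussed.

# Sketch (ours): revealed absolute risk aversion -u''/u' for the capped quadratic tax,
# below the cap, where u(x) = x(1 - a*x**2). Reproduces the 1.2 and 0.24 figures above.
import sympy as sp

x = sp.symbols('x', positive=True)

def revealed_ara(a):
    u = x * (1 - a * x**2)                       # net payoff below the 25 percent cap
    return sp.simplify(-sp.diff(u, x, 2) / sp.diff(u, x))

ara_old = revealed_ara(sp.Rational(1, 100))
ara_new = revealed_ara(sp.Rational(5, 1000))
print(ara_old)                                    # 6*x/(100 - 3*x**2), i.e. 0.06x/(1 - 0.03x^2)
print(ara_old.subs(x, 5), ara_new.subs(x, 5))     # 6/5 and 6/25, i.e. 1.2 and 0.24
print(float(sp.sqrt(sp.Rational(1, 4) / sp.Rational(5, 1000))))   # ~7.07, where the new cap binds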
As a second illustration, suppose that a person has some financial obligation z > 0. If he fails to meet the obligation, he faces additional costs that can be approximated as a fraction a ∈ (0, 1) of the shortfall. For example, if his credit card balance is z = $1,000 on the monthly statement and he pays only $600 by the due date, then he will incur an additional cost of $400a, where a ≈ 0.02 is the monthly interest rate. Other obvious examples of z for householders include mortgage, rent, utility, and car payments. Examples for business firms include payroll obligations, debt service, and bond indentures. A biological example is the number of calories an individual needs to maintain normal activity; shortfalls will deplete fat stores or muscle tissue, and rebuilding them incurs additional metabolic overhead of at least a = 0.25 and often considerably more (e.g., Schmidt-Nielsen [1997]). Panel A of Figure 6.5 shows the resulting net payoff y = u(x) with u(x) = x - z for x ≥ z and u(x) = (1 + a)(x - z) for x < z. The function is concave and piecewise linear. If x is not precisely known at the time the DM makes a risky choice, e.g., if some random cash flow might partly offset the contractual obligation, then the expected net payoff u(x) is strictly concave over the support of x. Thus, if there is an interval [x_0, x_1] of possible values of x, then the revealed Bernoulli function will be smooth and strictly concave over that range, and will be weakly concave overall.
Fiduciary responsibilities also lead to concave net payoffs for the DM if she is, for example, a trustee. When she obtains a gross payoff for the client far above expectations, the trustee's net payoff is only slightly higher than when barely meeting expectations. But when the gross payoff falls short of expectations, her net payoff is far lower, after taking into account the legal and reputation costs. The logic is the same as with progressive income taxes: the slope of the net payoff function y = u(x) is decreasing in x.
In all these cases, an uninformed outsider — one who observes only gross payoffs and casually assumes a linear net payoff function — might be tempted to infer a risk-averse DM. An informed observer, who sees the varying net payoff functions, will be able to use that variability to correctly predict how revealed risk aversion will depend on context. That observer will avoid the specification error of attributing context dependence to an unstable concave Bernoulli function.

Convex cases
There are also plausible circumstances that lead to specification error in the opposite direction: a risk-neutral DM can appear to be risk seeking because his net payoff function is convex in gross payoff. Following the analysis in Friedman and Sunder (2004), we consider the ways in which convex (net) objective functions might arise, despite linear (innate) objective functions.
For example, suppose that there is a tournament whose only prize P goes to the DM with the highest x. Assume that each of K > 1 contestants draws his gross payoff independently from the cumulative distribution G (obtained, for instance, in a Nash equilibrium of effort choices). Then the expected net payoff is u(x) = G^(K-1)(x)P, which tends to be more convex the larger the number of contestants. Panel B of Figure 6.5 illustrates the example for three contestants and uniform distribution G.
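A two-line computation (ours) illustrates the convexity claim for the uniform case:

# Sketch (ours): expected net payoff u(x) = G(x)**(K-1) * P in a winner-take-all
# tournament with K contestants and i.i.d. Uniform(0, 1) gross payoffs.
import numpy as np

def expected_net_payoff(x, K=3, P=1.0):
    G = np.clip(x, 0.0, 1.0)          # Uniform(0, 1) cdf
    return G ** (K - 1) * P           # probability of having the highest draw, times the prize

xs = np.linspace(0.0, 1.0, 6)
print(expected_net_payoff(xs, K=3))   # x**2: already convex with three contestants
print(expected_net_payoff(xs, K=10))  # x**9: much more convex as K grows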
Business examples include decisions made in the shadow of bankruptcy, or bailout. As a complement to the Smith and Stulz example considered earlier, suppose that failure to meet a contractual obligation z > 0 results in bankruptcy proceedings, and that shortfalls to some degree are passed to creditors, as in Figure 6.5C. The net payoff again is u(x) = x - z for x > z, but now u(x) = (1 - a)(x - z) for x < z, where a ∈ (0, 1) is the share of the shortfall borne by other parties. This yields a piecewise linear convex relationship. Again, the presence of a random component to cash flows would smooth out the graph and make y a strictly convex function over the support of the random component.
Bailouts create convex net payoffs in a similar manner. The US savings and loan industry in the 1980s is a classic example (see White [1991]; Mason [2001]; and Black [2005]). While deposit insurance was still in effect (i.e., a > 0), rapid deregulation made a whole new set of gambles available to these banks. The convex net payoff created an incentive to accept risky gambles in x. Indeed, some of the gambles with negative expected gross value (Ex < 0) have positive expected net value (Ey > 0) after considering the proceeds from deposit insurance; again see Figure 6.5C.
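A quick numerical check of the last claim, using the piecewise linear payoff just described with an assumed shortfall share a = 0.8 (the value is ours, chosen only for illustration):

# Sketch (ours): the bankruptcy/bailout payoff u(x) = x - z for x >= z and
# u(x) = (1 - a)(x - z) for x < z is convex, so a gamble with negative expected
# gross value can still have positive expected net value for the decision maker.
import numpy as np

def net(x, z=0.0, a=0.8):
    x = np.asarray(x, dtype=float)
    return np.where(x >= z, x - z, (1 - a) * (x - z))

gamble = np.array([-100.0, 60.0])      # two equally likely gross outcomes
probs = np.array([0.5, 0.5])
print(probs @ gamble)                  # -20.0: expected gross value is negative
print(probs @ net(gamble))             # +20.0: expected net value is positive when a = 0.8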
Figure 6.5 Net payoff functions (y = net payoffs; x = gross payoffs). Panel A: additional cost a > 0 on shortfall from z. Panel B: tournament payoff. Panel C: bailout a > 0 for shortfall from z. Panel D: means-tested subsidy (S-shaped). Panel E: social climbing (Friedman-Savage).

Mixed cases

Certain opportunity sets would lead an intrinsically risk-neutral DM to reveal a non-linear Bernoulli function with both concave and convex segments, as suggested by Friedman and Savage (1948) or Markowitz (1952). Indeed, their intuitive justifications for these segments can be naturally reinterpreted as arising from opportunity sets that induce intrinsically risk-neutral people to behave as if endowed with complicated Bernoulli functions. For example, suppose the DM lives in subsidized housing with subsidy rate a > 0 if her gross income x is less than or equal to z_1, and that she becomes ineligible for the subsidy if actual income (taking into account opportunities to disguise it) exceeds z_2 > z_1. If ineligible, she spends fraction c > 0 of incremental income on housing. Then net income (after housing) is u(x) = y = y_1 + a(x - z_1) for x ≤ z_1, while u(x) = y_1 + (x - z_1) for z_1 < x ≤ z_2, and u(x) = y_1 + z_2 - z_1 + (1 - c)(x - z_2) for x > z_2; see Figure 6.5D. After taking into account uncertainties of cash flows (or uncertainties of being caught and evicted for excess income) she would reveal a smooth S-shaped Bernoulli function over gross income. James and Isaac (2001) derive a similar Bernoulli function in gross payoff given a progressive tax and a bankruptcy threshold.
Friedman and Savage (1948) motivate their famous Bernoulli function with a vague story about the possibility of the DM moving up a rung on the social ladder. To sharpen their story a bit, suppose that z_2 is the threshold income at which the DM moves from the DM's current working class neighborhood to a middle class neighborhood with better schools. Suppose that at a lower income z_1 the DM begins to put a fraction c > 0 of incremental income into private schools or other special expenditures that would be redundant in the new neighborhood. Finally, suppose that only at a higher income z_3 > z_2 does the family
blend in well in the new neighborhood; at intermediate levels one has to spend a fraction d > 0 of incremental income on upgrading clothes, car, and hiring a gardener, etc. Then the DM reveals the piecewise linear Bernoulli function shown in Figure 6.5E, which, after the usual smoothing, becomes a Friedman— Savage function as shown in previous chapters. But the characteristic nonlinear shape reflects the DM’s net payoffs y = u(x), not intrinsic preferences U(y).
Marshall (1984) obtains a similar shape for the indirect utility function for income. Intrinsic preferences are assumed to be concave in income and increas-
ing in an indivisible {0, 1} good such as residential choice. He mentions other
possible indivisibilities including fertility, life, and career choice. Hakansson
(1970) derives a Friedman—Savage type function in an additively separable multi-period setting. The net payoff is expected utility of wealth, given by a
Bellman equation for the consumption-investment plan, assuming that the Bernoulli function of consumption each period is CRRA. The gross payoff is the present value of endowed income. He derives the desired Bernoulli function explicitly from particular constraints on investment and borrowing. Masson (1972) drops the parametric assumptions and presents a streamlined, graphical argument in a two-period setting. Suppose the DM has standard general two-period preferences that are homothetic (hence consistent
with global risk neutrality), and that consumptions at the two dates have decreasing marginal rates of substitution. Masson shows that realistic capital
market constraints can create concave or mixed functions y = u(x), where x is first period endowment and y is maximized utility in a riskless world. For example, suppose the borrowing rate b exceeds the lending rate l. Then the DM will borrow so u' = b when realized x is sufficiently small, and will lend so u' = l when realized x is sufficiently large. For intermediate values of x the DM consumes all of the incremental first period endowment, and u' equals the marginal rate of substitution, which decreases smoothly from b to l. Thus risk neutrality
in y induces a Bernoulli function in x that is concave, and strictly concave over stakes such that it is not worthwhile to adjust one’s bank account. Masson
obtains Markowitz and Friedman-Savage type induced Bernoulli functions when the borrowing and lending rates are not equal. Chetty (2002) derives an even more complex shape for an indirect utility function of wealth. He assumes overall concave preferences with frictional costs of deviating from a commitment to the current level of consumption decisions for one good (e.g., housing) and no such costs for the other good. The resulting net payoff function inherits from the overall function its concav-
ity over the upper and lower extremes of gross payoff, and features increased local curvature for small changes in x from the base level, joined by kinks
(locally convex portions).
St. Petersburg revisited

Recall that Daniel Bernoulli first proposed curved functions to resolve the following conundrum. A gamble that pays 2^n rubles with probability 2^(-n) for every n = 1, 2, ..., ∞ has expected value 1 + 1 + 1 + ... = ∞. But nobody will pay an infinite amount to play such a gamble. To resolve this paradox, Bernoulli (1738) proposed that a person's willingness to play (ignoring base wealth W_0) is:

E u = Σ_n p_n u(m_n) = Σ_n 2^(-n) ln(2^n) = ln 2 · Σ_n n 2^(-n) = 2 ln 2, where the sums run over n = 1, 2, ...

We believe that a more satisfactory resolution is to note that the opportunity set of the DM is bounded. The person offering the gamble must have finite ability (and willingness) to honor a promise; above some value 2^n = B, say, he is likely to default. Thus even a risk-neutral gambler should be willing to play no more than the expected value of the first n(B) = [ln B / ln 2] terms. In the presence of an upper bound B = one million ducats, the willingness to pay is less than 20 ducats. Interestingly, this same idea is buried in Bernoulli, in the section in which he quotes a letter from Cramer to Daniel Bernoulli's cousin, Nicholas, as follows (34): "Let us suppose, therefore, that any amount above 10 millions [sic], or (for the sake of simplicity) above 2^24 = 16777216 be deemed by him to be equal in value to 2^24 ducats or, better yet, that I can never win more than that amount, [emphasis added] no matter how long it takes before the coin falls with its cross upward."
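A few lines of Python (ours) reproduce the bounded-opportunity-set arithmetic:

# Sketch (ours): value of the St. Petersburg gamble to a risk-neutral player when the
# offerer can only honor payoffs up to B, so only the first n(B) = floor(ln B / ln 2)
# terms are counted (each contributes 2**-n * 2**n = 1 to the expected value).
import math

def bounded_value(B):
    n_B = int(math.log2(B))
    return n_B, float(n_B)             # n(B) terms, one unit of expected value each

print(bounded_value(1_000_000))        # (19, 19.0): under 20 ducats, as claimed above
print(bounded_value(2**24))            # (24, 24.0): Cramer's bound of 16777216 ducats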
6.5 Embedded real options

Opportunity sets can encompass more than the simple univariate production functions y = f(x) considered so far. Embedded real options are an important example. Opportunities that give a person the right (but not the obligation) to do something different can also lead an uninformed observer to misattribute risk preferences.
For example, suppose that I turn down a job offer whose expected present value clearly exceeds that of my current salary plus the adjustment costs associated with the move. You might think that I am risk averse, and am deducting a sizeable risk premium from the offer's present value. But instead I might believe that favorable new job offers are more likely if I remain in my old job than if I accept the current offer. In this case, accepting the current offer would extinguish a valuable wait option. Whatever my intrinsic risk preferences, I should deduct the value of the wait option from the current offer before deciding whether to accept the current offer.
The options explanation can be elaborated in at least two different ways, under the maintained assumption that I am intrinsically risk neutral. Let v_0 be the known value of my old job, and let x be the current outside offer. First, suppose that if I reject x then new offers will continue to arrive via a Poisson process whose intensity parameter is normalized to one, but if I accept x then the intensity drops to a level normalized to zero. Say that the offers have mean
values drawn identically and independently from a distribution with cdf F. Then, by classic search theory (e.g., McCall [1970]), my optimal decision is to reject if Ex < R and to accept if Ex > R, where R > v_0 is the reservation wage. Thus the explanation of my rejection is that the current offer had expected value in the interval [v_0, R], where R is a known (implicit) function of potential observables. Holding constant the shape of F, the optimal R is increasing in the upper end of the range of F and decreasing in the cost c > 0 of waiting for a new offer. Thus I am more likely to reject a favorable offer the lower my cost of waiting (e.g., if offers arrive more frequently) and the more optimistic I am about how favorable new offers could be.
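To make the reservation-wage logic concrete, here is a small sketch of ours. It uses the textbook condition that, at the reservation wage, the cost of one more search equals the expected gain, c = E[max(x - R, 0)], with uniformly distributed offers; this is a simplified stand-in for the search logic described above, not the exact specification in McCall (1970).

# Sketch (ours): reservation wage R solving c = E[max(x - R, 0)] for offers
# x ~ Uniform(lo, hi), where c > 0 is the cost of waiting for one more offer.
from scipy.optimize import brentq

def reservation_wage(c, lo=0.0, hi=1.0):
    # for Uniform(lo, hi) and lo <= R <= hi, E[max(x - R, 0)] = (hi - R)**2 / (2*(hi - lo))
    gain_minus_cost = lambda R: (hi - R) ** 2 / (2 * (hi - lo)) - c
    return brentq(gain_minus_cost, lo, hi)

print(round(reservation_wage(c=0.005), 3))   # 0.9: cheap waiting -> picky searcher
print(round(reservation_wage(c=0.05), 3))    # 0.684: costly waiting -> lower threshold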
Another way to elaborate the options explanation is to suppose that the mean x(t) of the current offer follows geometric Brownian motion with drift α and volatility σ > 0 in continuous time; see, e.g., Dixit and Pindyck (1994) for details. The optimal policy again is to accept the offer as soon as x(t) equals (or exceeds) a threshold R = (1 + w)v_0, where the markup w is an increasing
function of α and σ and a decreasing function of the discount rate in the
relevant region of parameter space. Thus the DM
is more likely to reject a
favorable offer the more positive the trend in offers, the greater their variability per unit time, and the more patient the DM is or the more confident that the offer process will not stop.
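For readers who want the formula, here is a sketch of ours based on the standard Dixit and Pindyck investment-timing calculation, which is analogous to (though not identical with) the job-offer setting just described; the parameter values are arbitrary illustrations.

# Sketch (ours): for a payoff following geometric Brownian motion with drift alpha and
# volatility sigma, discounted at rate rho > alpha, the standard optimal-stopping trigger
# is the multiple beta/(beta - 1) applied to what is given up by stopping, where beta > 1
# is the positive root of 0.5*sigma**2 * b*(b - 1) + alpha*b - rho = 0.
import math

def markup_multiple(alpha, sigma, rho):
    a = alpha / sigma**2 - 0.5
    beta = -a + math.sqrt(a * a + 2 * rho / sigma**2)
    return beta / (beta - 1)

print(markup_multiple(alpha=0.00, sigma=0.2, rho=0.05))   # ~1.86
print(markup_multiple(alpha=0.00, sigma=0.4, rho=0.05))   # ~3.30: more volatility, higher markup
print(markup_multiple(alpha=0.00, sigma=0.2, rho=0.10))   # ~1.56: less patience, lower markup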
The usual risk aversion explanation has fewer, and different, observable implications. Again, let v_0 be the known value of my old job, and let x be the current outside offer. The risk aversion explanation implies that I am more likely to reject an offer with expected value Ex > v_0 when Var[x] is greater, and when my intrinsic Bernoulli function is more concave. Of course, concavity is not observable, but perhaps one can observe Var[x], the degree of uncertainty
concerning the actual value of the current offer. In contrast, the options explanations say that even with little uncertainty of that sort, a rational DM will reject apparently favorable current offers when the wait option is sufficiently valuable. The same logic applies to shopping decisions and other sorts of searches. For example, a DM might purchase information that reduces the perceived variance of future events. Before concluding that the purchase implies that
the DM is risk averse, we should consider the possibility that the DM is risk neutral and the information purchase gives her an embedded option. Here is a worked-out example. A risk-neutral DM owns a mineral resource.
This resource may produce ore that, when mined, sells for $100/ton. In the current state of the world, the DM knows the following probability distribution:

Amount of ore       Probability
0 tons              40 percent
1 million tons      20 percent
2 million tons      20 percent
4 million tons      20 percent

Suppose that the cost of mining the ore is a fixed development fee of $80 million, with no further extraction cost. Then the expected value of the resource, if developed, is {0.4 × (-$80 million) + 0.2 × ($20 million) + 0.2 × ($120 million) + 0.2 × ($320 million)} = $60 million. If the resource is worth nothing if undeveloped (i.e., the opportunity cost of development is zero) then the DM will develop the property.
Now consider the following: Acme Resource Exploration Corporation (AREC) offers to sell the DM a sophisticated test. This test will, with complete certainty, distinguish the state of the world (Ore = 0 tons for sure) from the state
of the world (Ore = 1 million tons, 2 million tons, or 4 million tons each with a probability 1/3). That is, the test allows the DM to know, without error, either that the resource is “empty,” or that there is some ore underground (retaining the
same uniform probabilities from the table above if the test is "positive"). Hence enters the real option. If the AREC test reveals that the resource is empty, the DM will not pay the $80 million fixed cost. If, on the other hand, the AREC test excludes the "empty" state of the world, then the (conditional) expected value of paying the $80 million fee and exploiting the resource will be {0.333 × $20 million + 0.333 × $120 million + 0.333 × $320 million} = $153.33 million, so the DM will certainly develop the resource in this case.
The key question for us is: "Will a risk-neutral DM be willing to pay a positive amount for this test?" All we have to do is consider the expected value of the resources if the DM purchases the test, and compare that to the cost of the test. Recall that what the AREC information channel provides to our DM is an option to forego expending the $80 million fixed development fee if the test is "negative." Assuming the purchase of the AREC test, but prior to the realization of the test result, the expected value of the resource is {0.4 × (0) + 0.2 × ($20 million) + 0.2 × ($120 million) + 0.2 × ($320 million)} = $92 million. The value of the resource has increased from $60 million without the AREC test to $92 million with it. So our risk-neutral DM could be observed paying
any price up to $32 million to AREC to conduct the test. This observed deci-
sion to purchase information has nothing to do with risk aversion and everything to do with the creation of a valuable real option.
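The whole calculation fits in a few lines of Python (ours), which may help readers check the $32 million option value:

# Sketch (ours) reproducing the arithmetic of the AREC example above ($ millions).
probs = [0.4, 0.2, 0.2, 0.2]          # probabilities of 0, 1, 2, 4 million tons of ore
tons = [0, 1, 2, 4]                   # millions of tons
price, fee = 100, 80                  # $100/ton revenue and the fixed development fee

ev_no_test = sum(p * (price * t - fee) for p, t in zip(probs, tons))          # develop regardless
ev_with_test = sum(p * max(price * t - fee, 0) for p, t in zip(probs, tons))  # skip the fee if empty

print(round(ev_no_test), round(ev_with_test), round(ev_with_test - ev_no_test))   # 60 92 32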
6.6 Discussion
We retain neoclassical economists’ standard operating procedure of representing choice as the solution of a constrained optimization problem. However, we depart from the tradition of the last 70 years of postulating a transparent opportunity set and a complicated personalized objective function. We strip out all the curves, inflection points, and kinks from the unobserved, intrinsic Bernoulli function, and simply assume that it is linear. Thus we postulate
a transparent
objective function
(expected
value maximization)
and put the entire explanatory burden on the set of potentially observable constraints. More sophisticated opportunity sets can incorporate embedded real options and the relationship y = u(x) between gross and net payoffs.5 How far can that approach take us? We can't claim that it will be able to explain all observed behavior, but it seems to us a way to make nontrivial progress. Applied economists have lots of experience using constrained optimization in other arenas, and they should be able to exploit these techniques to better understand and predict risky choice.
Notes
1 This Don Johnson is not the actor from Miami Vice.
2 Johnson would eventually win $5.8 million over 12 hours, and the CEO of the Tropicana would be fired afterwards (Adams [2011]).
3 This might surprise readers familiar with Kahneman and Tversky's (1979) seminal article on prospect theory. To justify their S-shaped value function, Kahneman and Tversky cite Fishburn and Kochenberger's (1979) analysis of Grayson's 10 subjects plus another 20 subjects (also responding to hypothetical questions about sizeable gambles in real-life contexts) selected from other primary sources. Fishburn and Kochenberger found that the S-shape was a modal (but not a majority) pattern in the 28 observations they kept.
4 For a review of the considerable literature on animal foraging behavior under risk, see Kacelnik and Bateson (1996). They note considerable context-sensitivity:
both the presence and the direction of risk-sensitive preferences can be influenced by what could appear to be trivial differences in the choice procedure, the sched-
ule of reinforcement, the means used to deliver rewards or the index used to mea-
sure the subjects’ preferences.
(429) They go on to conclude:
To date none of the available theories adequately accounts for all the crucial behavioural observations in studies of the effects of variability. Risk-sensitive
foraging theory does not provide any insight into why risk-aversion should be
more common when variability is in amount and risk proneness when it is in delay. The fact that shifts in preference have sometimes been observed when bud-
gets are manipulated remains the single empirical observation that prevents us
from abandoning the theory entirely. It is therefore crucial that these key experiments are replicated and extended. ... We close by asking the question of whether risk-sensitive foraging theory is still a useful theoretical tool to guide behavioural research. Our tentative reply is that the jury is still out. (431) Also see Stephens (1981) and Roche, Timberlake, and McCloud (1997).
5 Another nice feature of the opportunity set approach is its interoperability with the notion of background risk — that a person’s risky choices may be altered when the resolution of other gambles is pending in the background. Kimball (1993) and
later authors argue that temperance and prudence (defined as ratios of higher order derivatives, as noted in Chapter 5) will govern the way in which the expected utility of the “foreground” gamble changes as some other “background” gamble is added
or deleted. The background risk approach and the opportunity set approach are
not mutually exclusive, and could be viewed as complements rather than substi-
tutes. If the revealed Bernoulli function resulting from constrained optimization and compound gambles are n-times continuously differentiable, there is nothing
stopping anyone from taking nth and (n - 1)th derivatives, forming ratios, and draw-
ing comparative static inferences. However, if revealed Bernoulli functions really are
mutable, it becomes less clear how to apply results from the literature that assume constant prudence or temperance.
Bibliography
Adams, G. (2011) "The Taking of Atlantic City: How High Roller Beat The House
Out of $5.8M in a 12-Hour-Long Blackjack-athon,” New
York Post, June 12,
2011. http://nypost.com/2011/06/12/the-taking-of-atlantic-city/ (accessed July 12, 2013).
Bernoulli, D. (1738) "Exposition of a New Theory on the Measurement of Risk," trans. Louise Sommer (1954), Econometrica 22: 23-36.
Black, W. K. (2005) The Best Way to Rob a Bank Is to Own One. Austin: University of Texas Press.
Caraco, T. (1981) "Energy Budgets, Risk, and Foraging Preferences in Dark-Eyed Juncos (Junco hyemalis)," Behavioral Ecology and Sociobiology 8: 213-217.
Chetty, R. (2002) "Consumption Commitments, Unemployment Durations, and Local Risk Aversion," Harvard University mimeograph. http://elsa.berkeley.edu/~chetty/papers/commitments.pdf (accessed May 4, 2013).
Coase, R. H. (1937) "The Nature of the Firm," Economica 4(16): 386-405.
Dixit, A. K., and Pindyck, R. S. (1994) Investment Under Uncertainty. Princeton, NJ: Princeton University Press.
Fishburn, P. C., and Kochenberger, G. A. (1979) "Two-Piece Von Neumann-
Morgenstern Utility Functions,” Decision Sciences 10(4): 503-518. Friedman, M., and Savage, L. J. (1948) “The Utility Analysis of Choices Involving Risk,” Journal of Political Economy 56: 279-304. _ Friedman, D., and Sunder, S. (2004) “Risky Curves: From Unobservable Utility to Observable Opportunity Sets,” Working Paper, University of California, Santa Cruz. Grayson, C. J. (1960) Decisions Under Uncertainty: Drilling Decisions by Oil and Gas Operators. Cambridge, MA: Harvard University Press.
Hakansson, N. (1970) "Friedman-Savage Utility Functions Consistent with Risk Aversion," Quarterly Journal of Economics 84(August): 472-487.
James, D., and Isaac, R. M. (2001) "A Theory of Risky Choice," unpublished working paper.
Kacelnik, A., and Bateson, M. (1996) “Risky Theories: The Effects of Variance on
Foraging Decisions," American Zoologist 36: 402-434.
Kahneman, D., and Tversky, A. (1979) "Prospect Theory: An Analysis of Decision under Risk," Econometrica 47: 263-291.
Kimball, M. S. (1993) "Standard Risk Aversion," Econometrica 61(3): 589-611.
McCall, J. J. (1970) "Economics of Information and Job Search," Quarterly Journal of Economics 84(1): 113-126.
Markowitz, H. (1952) "The Utility of Wealth," Journal of Political Economy 60(2): 152-158.
Marshall, J. M. (1984) "Gambles and the Shadow Price of Death," American Economic Review 74(1): 73-86.
Mason, D. L. (2001) From Building and Loans to Bail-Outs: A History of the American Savings and Loan Industry, 1831-1989. Ph.D. dissertation, Ohio State University.
Masson, R. T. (1972) “The Creation of Risk Aversion by Imperfect Capital Markets,” American Economic Review 62(1-2): 77-86. Modigliani, F., and Miller, M. (1958) “The Cost of Capital, Corporation Finance and
the Theory of Investment,” American Economic Review 48(3): 261-297. Roche, J. P, Timberlake, W., and McCloud, C. (1997) “Sensitivity to Variability in Food Amount: Risk Aversion is Seen in Discrete-Choice, but Not in Free-Choice,
Trials," Behaviour 134(15/16): 1259-1272.
Schmidt-Nielsen, K. (1997) Animal Physiology: Adaptation and Environment. Cambridge: Cambridge University Press.
Smith, C. W., and Stulz, R. M. (1985) “The Determinants of Firms’ Hedging Policies,”
Journal of Financial and Quantitative Analysis 20(4): 391-405.
Stephens, D. W. (1981) "The Logic of Risk-Sensitive Foraging Preferences," Animal Behaviour 29: 628-629.
Stigler, G. J., and Becker, G. S. (1977) "De Gustibus Non Est Disputandum," American Economic Review 67(2): 76-90.
Trigeorgis, L. (1996) Real Options: Managerial Flexibility and Strategy in Resource Allocation. Cambridge, MA: MIT Press.
White, L. J. (1991) The S&L Debacle: Public Policy Lessons for Bank and Thrift Regulation. New York: Oxford University Press.
Wilkinson, G. S. (1984) "Reciprocal Food Sharing in the Vampire Bat," Nature 308(5955): 181-184.

7 Possible ways forward
It is time to take stock. What is the current state of empirical knowledge regarding risky choice? Which future directions seem promising? In this final chapter we try to answer these questions, and to tie up a few loose ends.
Let us begin by summarizing the argument so far.
7.1 Limitations of existing approaches
Theoretical advances in the 1940s, 1950s, and 1960s gave luster to Expected Utility Theory, and a thousand research projects blossomed. Sadly, however, the empirical harvest has been meager so far. As documented in Chapters 3—5, estimates of Bernoulli functions failed to confirm a consistent simple concave shape as basic theory might suggest. The profession reacted by considering more complicated shapes (e.g., one inflection point in Fishburn and Kochenberger [1979], two in Friedman and Savage [1948], three in Markowitz [1952], and so on),' more complicated carriers of value (e.g., gains and losses in income instead of wealth), and more complicated contingencies (e.g., dependence of shape on individual identity and on context). Empirical researchers created new elicitation protocols for new
subject pools in the laboratory and in the field. Yet, as noted in Chapters 4
and 5, this work offers little insight into the functioning of industries that deal with risk, nor into the meaning of risk and risk aversion. Flexible shapes and contingencies allow us to fit almost anything ex post, but despite all the effort and talent devoted to the task, not ex ante. Out-of-sample prediction, the crucial scientific goal, remains elusive. The sensitivity to elicitation protocol is especially troubling. The evidence in Chapter 3 suggests that, by carefully selecting and tuning the protocol,
you can obtain pretty much whatever risk-preference parameter estimate you want. For example, the four versions of BDM
reliably generate population
mean estimates on both sides of risk neutrality, and different implementations of Holt-Laury generate a broad range of risk-averse estimates.” As noted in Chapter 3, the protocol effects are quite robust, and by comparison individual risk-preference parameter estimates seem ephemeral.
In Chapter 6 we proposed an alternative approach that remains in the neoclassical style. It continues to treat choice as static optimization in the presence of external constraints, and seeks explanatory power in potentially observable constraints instead of in unobservable intrinsic preferences. We believe that this modeling approach is an underexploited opportunity to make scientific progress, but that it is a way station, not a final destination.
This constraint (or opportunity set) approach has its limitations. An objective opportunity set derived from external constraints is invariant to framing and is insensitive to elicitation protocol. Hence the approach would seem unable to account for the framing effects and protocol effects noted in Chapter 3.3 In Section 7.4 below, we note that eminent theorists suggest capturing such effects via internal constraints and subjective opportunity sets. Of course, the Chapter 6 approach works only to the extent that the proposed constraints are themselves observable.
Which risk preferences and constraints may render decision makers vulnerable to a "Dutch book" or money pump (De Finetti [1931]; Ramsey [1931]) is an open question.4 There is little reason to think that intrinsically risk-neutral decision makers are more susceptible to such schemes. However, we leave that for the reader to decide.
The next two sections explore other possible ways forward. We mention a variety of approaches, not with the intention of endorsing any of them, but instead to broaden our horizons.
7.2 Large worlds, evolution, and adaptation

The choice models considered so far pertain to what Leonard Savage (1951) called a "small world": choice is over known alternative actions, which have known consequences in known contingencies that have known probabilities. However, that is not the world we inhabit. As we sometimes realize after the fact, there are often unrecognized alternative actions and states, and sometimes surprising outcomes occur whose probabilities were not zero after all. In Savage's terminology, we live in a large world.
How do people act in such a world? It is, of course, the world in which we evolved, and evolutionary theorists have developed several insights about it worth recounting here. The first insight concerns the nature of preferences. Some behavior is more or less hardwired, e.g., to drop a hot potato. Other behavior is more contingent, e.g., to gather only certain kinds of root vegetables in certain seasons. It is possible to hardwire contingent responses, but the combinatorics are daunting, and changes in the environment may make the hardwiring obsolete before it can become established via trial-and-error mutations over many generations. As Robson and Samuelson (2011, section 2.3) put it in a recent survey article:

A more effective approach may then be to endow the agent with a goal, such as maximizing caloric intake or simply feeling full, along with the
ability to learn which behavior is most likely to achieve this goal in a given environment. Under this approach, evolution would equip us with a utility function that would provide the goal for our behavior, along with a learning process, perhaps ranging from trial-and-error to information collection and Bayesian updating, that would help us pursue that goal. What sort of risk preferences might evolve? Using an argument parallel to the one we used in Chapter 6, Robson (1996) concludes that revealed Bernoulli
functions over wealth should inherit their shape (presumably concave) from the production
function
of viable offspring.
In a rather abstract setting,
Sinn and Weichenrieder (1993) and Sinn (2002) argue specifically for the original Bernoulli function, u(w) = ln(w). The intuition (although there
are mathematical
subtleties in a stochastic world) is that the log function
maximizes the expected growth rate.5 Evolutionary psychology seeks to explain all sorts of human behavior in terms of what might have produced more offspring in prehistoric times (e.g., Barkow, Cosmides, and Tooby [1992]). One example pertains to our concerns,
and is potentially testable. The prediction is that lower-status male adolescents
will tend to be risk seeking, because playing it safe in ancestral societies would likely result in zero offspring, i.e., the same “fitness” payoff as the downside of a risky bet. The upside of risk-seeking behavior could yield enough resources to greatly improve mating opportunities (see also Rubin and Paul [1979]).
The argument is again analogous to that in Chapter 6, with the individual’s
accumulated wealth as the gross payoff and the "fitness" (expected number of offspring) as the net payoff, which is obtained by the "selfish genes" (Dawkins [2006]) that control revealed preferences.
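A quick numerical check (ours) of the Sinn and Weichenrieder intuition mentioned above: among simple strategies that repeatedly bet a fraction of wealth, the one that maximizes expected log wealth is also the one with the highest long-run growth rate.

# Sketch (ours): betting a fraction f of wealth each round on a 60/40 even-odds gamble.
# Expected log growth per round is 0.6*ln(1 + f) + 0.4*ln(1 - f), maximized at the
# Kelly fraction f = 2p - 1 = 0.2 -- exactly what a log Bernoulli function picks out.
import numpy as np

f = np.linspace(0.01, 0.99, 99)
growth = 0.6 * np.log(1 + f) + 0.4 * np.log(1 - f)

print(round(float(f[growth.argmax()]), 2))   # 0.2
print(round(float(growth.max()), 4))         # ~0.0201 expected log growth per round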
7.3 Heuristics and learning
Preference maximization offers a flexible way to respond to contingencies but, as Robson and Samuelson noted, it must be combined with a learning
process to work well in a large world that changes over time. Indeed, they argue that our preferences are not primarily for biology’s ultimate goal, as many viable offspring as possible, because the long lags and limited number of observations make it too hard to learn how to achieve that goal directly. Instead, our preferences are for intermediate goods such as food, sex, status,
and wealth for which effective actions are easier to learn, due to prompt and frequent feedback. So how do we learn effective actions to increase expected wealth, in Savage’s large world? Decision makers of all sorts (not just humans) have developed cognitive shortcuts — dubbed heuristics by Herbert Simon
and others (e.g.,
Newell, Shaw, and Simon [1957]) — that usually work pretty well. In recent decades, Gerd Gigerenzer has been a prominent advocate of the heuristics approach. Gigerenzer and Gaissmaier (2011, 454) define heuristics as “strat-
egies that ignore information to make decisions faster, more frugally, and/
or more accurately than more complex methods." Heuristics are simple for
humans, not necessarily for computers, because they exploit our brain’s “core
capacities."
For example, consider how to catch a ball sailing hundreds of feet before
it hits the ground. A computer-aided mobile device might try to estimate the initial velocity vector, use Newton’s laws and fluid mechanics to compute the likely point of arrival, and move as quickly as possible to that spot. Humans find that computation quite difficult, but find it easy to use:
... the gaze heuristic, which works in situations where a ball is already high up in the air: Fixate on the ball, start running, and adjust your running speed so that the angle of gaze remains constant [emphasis in the original]. The angle of gaze is the angle between the eye and the ball, relative to the ground. A player who uses this heuristic does not need to measure wind, air resistance, spin, or the other variables that determine a ball's trajectory. He can get away with ignoring every piece of causal information. All the relevant information is contained in one variable: the angle of gaze. Note that a player using the heuristic is not able to compute the point at which the ball will land. But the heuristic carries him to the point at which the ball lands.
(Gigerenzer [2005], 10)

Gigerenzer, drawing on Shaffer et al. (2004), also points out that one way of interpreting the conjecture that people actually calculate trajectories via the laws of physics would predict that they sprint straight to the calculated landing spot — but this is not observed.6
According to the heuristics approach, humans are endowed with core mental capacities (such as object tracking, 3-D visualization, recognition memory, frequency monitoring, and the ability to imitate) together with a collection of heuristics and building blocks that exploit these capacities: our adaptive toolbox (Gigerenzer, Todd, and the ABC Research Group [1999]). Contents of the toolbox may vary slightly across individuals and much more across species. For example, dogs use tracking capacities similar to our own to catch Frisbees, but when the task is instead recognizing people, dogs rely more on smell and we rely more on looking at faces (hopefully). Heuristics can be fast and frugal when they use only core capacities already in place.
The adaptive toolbox is not static (Gigerenzer and Gaissmaier [2011]). Humans can learn from personal experience, or from observing others, about when to deploy or switch between heuristics, or even how to acquire new heuristics. Heuristic selection can be (partly) hardwired by evolution, as in 3-D visualization (Kleffner and Ramachandran [1992]) and in bees' collective decision about the location of a new hive (Seeley [2001]). Memory constrains which heuristics can be used, and the application (or "ecological rationality") imposes further constraints, e.g., face recognition is not helpful in catching a fly ball, but object tracking is. Individual learning often plays a major role in selecting and honing heuristics; see Rieskamp and Otto (2006) for a formal model. Social processes, e.g., imitation and explicit teaching, are sometimes equally important (as in Snook, Taylor, and Bennell [2004]).
Gigerenzer is probably best known for his perspectives on Bayesian reasoning. Gigerenzer and Hoffrage (1995) argue that people are not naturally adept at interpreting information in the "probability format" used in most experiments. A representative example:

The probability of breast cancer is 1% for a woman at age forty who participates in routine screening. If a woman has breast cancer, the probability is 80% that she will get a positive mammography. If a woman does not have breast cancer, the probability is 9.6% that she will also get a positive mammography. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?
(686)

The same problem can be rephrased in "frequency format":

Imagine an old, experienced physician in an illiterate society. She has no books or statistical surveys and therefore must rely solely upon her experience. Her people have been afflicted by a previously unknown and severe disease. Fortunately, the physician has discovered a symptom that signals the disease, although not with certainty. In her lifetime, she has seen 1000 people, 10 of whom had the disease. Of those, 8 showed the symptom; of the 990 not afflicted, 95 did. Now a new patient appears. He has the symptom. What is the probability that he actually has the disease?
(686-687)

Subjects give far more accurate answers to questions posed in the second format. The explanation, say the authors, is that "frequency formats correspond to the sequential way information is acquired in natural sampling" (684). Thus the second format encourages subjects to employ better heuristics.
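For the record, both versions of the problem have the same answer, which a few lines of Python (ours) can confirm:

# Sketch (ours): posterior probability of the disease given a positive test, computed
# from the probability-format numbers via Bayes' rule, and from the frequency-format
# numbers by simply counting cases.
p_cancer, p_pos_cancer, p_pos_healthy = 0.01, 0.80, 0.096
posterior = (p_cancer * p_pos_cancer) / (
    p_cancer * p_pos_cancer + (1 - p_cancer) * p_pos_healthy)
print(round(posterior, 3))        # 0.078

# frequency format: 10 of 1000 have the disease; 8 of them and 95 of the 990 healthy
# people show the symptom, so 8 of the 103 symptomatic people are actually ill.
print(round(8 / (8 + 95), 3))     # 0.078 -- same answer, easier to see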
Adaptive heuristics can produce Bernoulli curves as well as ball players' run paths. Friedman (1989) shows that the S-shaped value function, the central construct of prospect theory, can be derived as the outcome of a process in which scarce attention (or sensitivity to outcome differences) is allo-
cated according to actual or expected gains and losses of each size. The mode supposes that the decision maker has some finite stock. S of attention (or sensitivity, as it is called in the paper) to devote to considering how important are incremental gains at each level of wealth. If she allocates that finite stock optimally (or learns to approximate an optimal allocation), then she will act as if optimizing not her true utility function U but rather a particular
approximation of it, V, that puts attention where it is most useful. The func-
tion V lies between
U and the cumulative distribution function F describing
the wealth levels that might arise from the choice opportunities the decision
maker will face. Indeed, as the attention constraint relaxes from S near zero
to S very large, the corresponding V morphs from a close approximation of F into a close approximation of U. Most of us face opportunities to choose among small increments to wealth more often than large increments. If indeed the distribution of opportunities
is unimodal around the zero wealth increment, then F is S-shaped, e.g., as in the cumulative unit normal distribution. Consequently, if either (a) "true risk preferences" U are approximately linear (risk neutral), or (b) the stock S of attention is small, then the revealed Bernoulli function V will be S-shaped around the status quo (zero wealth increment), as in prospect theory. For
similar results (apparently obtained independently) see Netzer (2009) and Woodford (2012).
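To make the morphing property concrete, here is a minimal numerical sketch. It does not reproduce Friedman's (1989) constrained-optimization argument; for illustration only, it assumes that the revealed function V is a convex combination of U and F whose weight on U rises with the attention stock S (the blending rule and the weight w(S) = S/(1+S) are our assumptions, not the paper's).

```python
# Illustrative sketch only (our construction, not Friedman's 1989 derivation):
# assume V_S = w*U + (1-w)*F with w = S/(1+S) to display the morphing property
# described in the text: V stays close to F when the attention stock S is small
# and close to U when S is large.
import numpy as np
from scipy.stats import norm

x = np.linspace(-3.0, 3.0, 301)              # wealth increments around the status quo
U = (x - x.min()) / (x.max() - x.min())      # linear ("risk neutral") true utility, rescaled to [0, 1]
F = norm.cdf(x)                              # unimodal opportunities around zero give an S-shaped CDF

def revealed_bernoulli(S):
    """Hypothetical revealed function V for attention stock S (stand-in blend)."""
    w = S / (1.0 + S)                        # weight on U rises from 0 toward 1 as S grows
    return w * U + (1.0 - w) * F

for S in (0.05, 1.0, 20.0):
    V = revealed_bernoulli(S)
    print(f"S = {S:5.2f}   max|V - F| = {np.max(np.abs(V - F)):.3f}   "
          f"max|V - U| = {np.max(np.abs(V - U)):.3f}")
```

For small S the printed distances show V hugging the S-shaped F, and for large S hugging the linear U, which is the qualitative pattern the text attributes to the attention-constrained decision maker.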
Adaptive heuristics may well be part of the explanation for framing effects and protocol effects. Consider again the divergent risk estimates obtained from the first-price sealed-bid auction and the selling version of BDM reported in Isaac and James (2000) and in Berg, Dickhaut, and McCabe (2005). Could they arise from the simple heuristic to bid aggressively, so as to force a "win"? Aggressive bidding maps to risk aversion in the FPSB auction, but to risk seeking in the selling version of BDM, consistent with the observed protocol effect. Opportunities to learn to use more profitable heuristics are limited in these experiments. Perhaps many subjects fail to comprehend the value of bidding less to get a better profit margin (albeit less often) in the FPSB auction, or of being paid the robot-bidder's draw in the selling version of BDM rather than the lottery proceeds. To investigate, James and Reagle (2009) took an off-the-shelf learning model, EWA (Camerer and Ho [1999]), and documented how variously parameterized EWA agents would behave across the two institutions. It transpired that agents parameterized for a kind of myopia with respect to opportunity cost (δ ≈ 0) behaved in just the kind of grasping, yet self-defeating manner observed with human subjects.

Why do subjects behave so differently across the different visual representations of the Dutch auction in Cox and James (2012), or between the statistically
aided and not statistically aided versions of the FPSB auction in Dorsey and
Razzolini (2003)? If subjects were using optimal bid functions (or even noisy versions thereof), these things wouldn't matter. But if these different frames
invoke different heuristics, we may have the beginning of an explanation. Friedman (1998, 942) raises an important empirical issue concerning heu-
ristics and learning. After presenting an experiment inspired by the notorious Monty Hall problem,⁷ he argues that economists should distinguish between two fundamentally different sorts of data. Static models of risky choice, such as EUT or CPT, should be tested on data reflecting settled behavior. By
contrast, data obtained during the process of learning and adaptation are
inappropriate for testing static choice models, but can and should be used to test models of the learning or adaptation process itself. Precisely because it invokes maladaptive heuristics (such as "don't get jerked
around” and the “principle of insufficient reason”) and has noisy feedback, the Monty Hall experiment produces data far more useful for testing learning models than for testing static choice models. Indeed, subjects in the experiment exhibit faster learning rates in treatments that give better feedback. The most
effective are the track-record and the comparative-results treatments, which summarize cumulative results in the frequency format treatment favored by Gigerenzer and Hoffrage (1995).
The Monty Hall article elsewhere points out that anomalous behavior seen in the lab, e.g., ambiguity aversion, may not reflect intrinsic preferences. Instead, it may be the result of a protocol that allows little opportunity to learn and that invokes a widely useful heuristic that is maladaptive in the
chosen lab environment. The article concludes by calling for more research
on the adaptation process, e.g., to characterize what features of lab and field environments speed or retard the learning process.

7.4 Mental processes and neuroeconomics
Psychologist Jerome Busemeyer has developed real-time dynamic models
of the choice process. For example, Pleskac and Busemeyer (2010) model a person contemplating a binary decision in terms of a continuous diffusion
process L(t) with nonzero drift. The latent variable L(t) represents the person's internal evaluation of net evidence favoring one of the choices. Given costs of correct and incorrect decisions, an upper and a lower threshold are set optimally for L, and the first (second) choice is made when L(t) crosses the
upper (lower) threshold. Such a model strikes us as a halfway house between economists' familiar static choice and learning models, and emerging neuroscience models of brain activity dynamics. Busemeyer and coauthors have found that the model can nicely fit data on decision times and confidence as well as on the actual choices,
and that is sufficient justification to take the model seriously. However, it also invites a literal interpretation, and perhaps neuroscientists will someday find neural activity that corresponds to L.

In the last few years, the emerging field of neuroeconomics has gained traction. Many of the newly launched research programs touch on risky choice,
often indirectly. Perhaps the most direct point of contact appears in an article
titled “Markowitz in the Brain?” by Preuschoff, Quartz, and Bossaerts (2008),
whose abstract reads as follows:
We review recent brain-scanning (fMRI) evidence that activity in certain sub-cortical structures of the human brain correlate with changes in expected reward, as well as with risk. Risk is measured by variance of payoff, as in Markowitz's theory. The brain structures form part of the
dopamine system. This system had been known to regulate learning of expected rewards. New data show that it is also involved in perception, of expected reward, and of risk. The findings suggest that the brain may perform a higher dimensional analysis of risky gambles, as in standard portfolio theory, whereby risk and expected reward are considered sepa-
rately. That is, the human brain appears to literally record the very inputs
that have become a defining part of modern finance theory.
This conclusion is based on two lines of research. The first is a study by
Fiorillo, Tobler, and Schultz (2003) that reports single-neuron-spiking data in the ventral tegmental area (VTA) of the brains of monkeys. The data
are obtained in a long series of rounds in which the animals are presented with a Pavlovian cue image and then given a juice-squirt with probability corresponding to that cue image. There are five cue images, whose probabilities
of reinforcement are set at 0, 0.25, 0.5, 0.75, and 1.0. Figure 7.1 shows the corresponding neuron activity. Preuschoff, Quartz, and Bossaerts pay particular attention to activity between
the two dashed lines in Figure 7.1, after the initial large spike following presen-
tation of the visual cue, but before the time the cue is removed and the fixed
size juice-squirt is or is not received. They emphasize the ramp-like activation profile for the p = 0.5 condition, its absence for the p = 0 and p = 1.0 conditions, and the intermediate profiles in the p = 0.25 and p = 0.75 conditions. They say, "Figure 5 [our Figure 7.1] provides the evidence. In it, the firing of all neurons under investigation is averaged, across both rewarded and unrewarded
trials. Notice that the delayed response appears to be increasing in risk; it is
maximal when risk is maximal (i.e., when p = 0.5)” (page 86). They measure risk as variance, which in this experiment is maximized in the p = 0.5 condition.
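The variance claim is easy to verify (our gloss): with a fixed juice amount a delivered with probability p, the payoff has

\[
\operatorname{Var} = a^{2}\,p(1-p),
\]

which is maximized at p = 0.5, so of the five cue conditions it is the p = 0.5 cue that carries the greatest payoff variance.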
Single-neuron-firing studies, as remarkable as the medicine and technology involved may be, are not immune to controversy. Heilbronner and Hayden
(2013) dispute the interpretation of some previous studies (McCoy and Platt
[2005], Hayden and Platt [2007], [2009a], [2009b], Hayden, Heilbronner, and Platt [2010], So and Stuphorn [2010], and O'Neill and Schultz [2010]), which had been interpreted as showing that rhesus macaques are robustly risk seeking:
that risk-seekingness is some generalizable preference of rhesus macaques — perhaps distinguishing them from humans and other animals. Here we argue the opposite: rhesus monkeys are not unique among animals, nor are
they even inherently risk-seeking. Instead, we argue that, for practical rea-
sons, the task design elements used by scientists who have studied risk atti-
tudes in monkeys are those most likely to encourage risk-seeking. The most
important of these elements are (1) decisions have very small stakes, a squirt or two of juice, (2) decisions are repeated hundreds or thousands of times with short delays (a few seconds) between trials, and (3) the reward structure of the task is learned through experience, rather than explained through
Figure 7.1 Neuron activity over time (binned and averaged) across the five different
probability of reinforcement conditions.
Source: figure 3 in Fiorillo, Tobler, and Schultz (2003); this figure was brought to
our attention as figure 5 in Preuschoff, Quartz, and Bossaerts (2008).
language. These elements were preserved across studies in several laboratories in large part because they are optimal for neurophysiological recording, and they show up in rhesus macaque studies because monkeys are generally trained to gamble for the purpose of neuronal recording studies. (Heilbronner and Hayden [2013] 1)
This strikes us as a cautionary tale.⁸ If risk seeking in rhesus macaques, thought to be a robust feature of intrinsic preferences, can instead be seen as a protocol effect, then perhaps we should be more cautious in interpreting data from such studies — including the ramp in Figure 7.1. The structure is most pronounced in the p = 0.5 condition, and binomial variance is indeed maximized in that condition. However, Heilbronner and Hayden's third (numbered) point suggests an alternative interpretation.
Consider this: the probabilities from
each urn/cue pair are necessarily communicated to the animals via learning-by-doing. Might the ramp for p = 0.5 represent the animal's brain anticipating a potentially revelatory draw, one that is especially decisive in learning about the cue's meaning? Smaller spikes for p = 0.25 and p = 0.75, and no spikes for p = 0 and p = 1.0, would also make sense, as draws from the associated urns are easier to assess. The animal doesn't know ex ante what the objective odds are, whether there is sampling with replacement or not, or whether the composition of each urn is otherwise changed over time; it could be straining,
readying itself to detect any pattern or advantage in cases such as p = 0.5, despite the experimenter knowing that objectively there is none to be had.
Could the ramp in neuronal spiking represent effort or attention in registering events, increasing in difficulty of learning, say, any tendency or pattern in the data generating process? Or something else again? Preuschoff, Quartz, and Bossaerts (2008) present a second line of research
supporting their Markowitz interpretation. As originally reported in the same authors’ 2006 paper, human subjects participated in the following task.
Ten cards numbered 1 to 10 where [sic] randomly shuffled. Two cards are to be drawn consecutively (and without replacement) from this deck. The
subject is asked to guess whether the first or the second card is going to be the highest. Subjects place their bet. About 3 seconds later, the first card is displayed. About 7 seconds later, the second card is displayed, from which the subject can immediately infer whether (s)he has won. The subject is then asked to confirm whether (s)he won or lost. Twenty-five seconds after the beginning of this trial, a new trial starts. ... Subjects played 3 sessions with 30 trials per session. Subjects start out each session with $25. They earn $1 if they guessed the right card. They lost $1 if they were wrong. If no bet was placed, they lost automatically. They also lost $0.25 if they incorrectly indicated whether they had won/lost or if they did not respond. At the end of the experiment, the subject selected one of the three sessions at random which determined their final payoff. (Preuschoff, Quartz, and Bossaerts [2008] 86-87)
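As an aside (our illustration, not part of the quoted protocol), the posterior winning probability plotted in Figure 7.2 is simple to compute. Supposing the subject bet that the second card would be the higher one, and the first card revealed is c, then

\[
P(\text{win}\mid \text{first card}=c) = \frac{10-c}{9}, \qquad c = 1,\dots,10,
\]

so a first card of 1 or 10 settles the bet immediately (posterior probability 1 or 0), while middling first cards leave the posterior near one half, where the residual variance p(1-p) of the win indicator is largest.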
Figure 7.2 Mean coefficient estimates as a function of posterior winning probability, plotted in separate panels for left and right side of brain. Dots represent binomial variance.
Source: Preuschoff, Quartz, and Bossaerts (2008).
Since humans were the subjects, fMRI was employed rather than the surgically invasive single-neuron approach used with macaques. The researchers generated data which allowed regression of a measure of fMRI signal on independent variables including (objective) probability of winning (structured as a set of dummy variables exhausting the probability space), and dummy variables for time (to capture dynamics in the fMRI signal from one second after the first card was revealed, until the second card was revealed). Figure 7.2 above reproduces
their plot of the mean estimated coefficient of the time variables against the
conditional probability of winning given the value of the first card. The larger beta values for p = 0.5 echo the prominent “ramp” seen in the rhesus macaque study, and the authors interpret the values in the same way, as an encoding of variance. An obvious alternative interpretation runs as follows. Having observed, say, a one or ten as the first card, the subject knows
the result immediately. There is no need to pay attention from that point on, in that trial. Conversely, if the first card was in the middle of the pack, the result is still very much in doubt — and a subject might still have some interest in watching what the person dealing the cards (the experimenter) was doing. Preuschoff, Quartz, and Bossaerts perform a Davidson–MacKinnon test which
rejects (in favor of a smooth quadratic alternative fit) a “jump from 1, drop to 10, plateau in the middle” fit which is one (rather crude) way to implement the alternative just sketched. Another implementation, of course, might look more quadratic. Other alternative explanations also are consistent with a quadratic fit. One alternative focuses on the $0.25 penalty for failing to correctly recall the outcome, the only earnings component that is under the subject’s control; the
other components are exogenous random draws. Is a uniform effort required
to correctly recall the results of the round? If the first card is a one or ten, the subject can immediately and decisively conclude what has happened (as above). But let’s say the first card is a two or a nine. Then the subject doesn’t know the outcome with certainty, but only has to keep an eye out for one card (a one or a
ten, respectively). That would require more attention, and effort, than not hav-
ing to watch for any cards at all, but less than watching for two or three or four or five cards. Possibly differential effort and attention needed to log/recall an event might also generate the parabolic pattern seen in Figure 7.2.

The papers just discussed are part of a larger debate about what the human brain encodes for choice under risk and uncertainty. Louie and Glimcher's (2012) survey concludes that "value coding in a number of brain areas is context dependent, varying as a function of both the current choice set and previously experienced values." A major point of the Preuschoff, Quartz, and Bossaerts articles (among others) is that it seems more efficient for the brain to encode the mean and variance of gambles than to store a (perhaps context dependent) Bernoulli function and compute its expected value. However, Wu, Bossaerts,
and Knutson (2011, 1) find that “positively skewed gambles elicited more positive arousal and negatively skewed gambles elicited more negative arousal than
symmetric gambles equated for expected value and variance.” They interpret this evidence as unfavorable to cumulative prospect theory (and a fortiori, one presumes, unfavorable to expected utility theory), and argue that the brain encodes statistical third moments as well as first and second moments. Another possible interpretation, it seems to us, is that the brain might encode some direct measure of downside risk, perhaps one of those mentioned in Chapter 5. Further work eventually should be able to resolve which interpretations are valid, and we don’t want to prejudge the outcome. However, a general caveat might be in order. In human brains, unlike in real estate, location is not everything. Brains are quite plastic, so the same computations or decision-support activity could take place in different regions in different people or even in the same person at different times. For instance, Bach and Dolan (2012) survey work on brain activity in probabilistic situations and note that “BOLD fMRI studies have implicated more than ten distinct regions as representing out-
come uncertainty in different situations" (582), and "studies report different brain areas that carry uncertainty signals, even between studies using similar experimental manipulations" (583).⁹ Alternatively, the metabolic activity recorded by fMRI may represent inhibition rather than activation: interpretation is often ambiguous. With regard to brain plasticity, the evidence from extreme cases (trauma, etc.) suggests that location of brain activity may not
be conclusive evidence of the kinds of “thought” taking place:
If there are no eyes, there will be no visual cortex at all because that same
region will be cannibalized by other input regions, such as those for hear-
ing or haptic touch in the battle for connectivity that takes place during development.
(Donald [2001] 209)
Gul and Pesendorfer (2005) propose far more sweeping caveats. In their defense of standard neoclassical economics against perceived overreach by neuroeconomists, these economic theorists argue that brain science and economics represent separate intellectual domains and that one has almost nothing to say about the other:

The conceptual separation between probabilities and utilities is very important for expected utility theory. This separation need not have a physiological counterpart. Even if it did, mapping that process into the physiology of the brain and seeing if it amounts to "one [process] for guessing how likely one is to win and lose, and another for evaluating
the hedonic pleasure and pain of winning and losing-and another brain region which combines probability and hedonic sensations” is a problem for neuroscience, not economics. Since expected utility theory makes predictions only about choice behavior, its validity can be assessed only through choice evidence. If economic evidence leads us to the conclusion that expected utility theory is appropriate in a particular set of applications, then the inability to match this theory to the physiology of the brain might be considered puzzling. But this puzzle is a concern for neuroscientists, not economists.
... Evidence of the sort cited in neuroeco-
nomics may inspire economists to write different models but it cannot reject economic models ...
(25)
In our view, inspiring the construction of better models would be a major contribution. Indeed, we hope that brain imaging and other neuroeconomic studies will yield real insight into the mental processes that underlie risky choice, and will lead to scientifically useful models. As we see it, the jury is still out, and probably won't return very soon. In passing, Gul and Pesendorfer suggest a modeling approach that could be used right away:

The concepts of a preference, a choice function, demand function, GDP, utility, etc. have proven to be useful abstractions in economics. The fact that they are less useful for the analysis of the brain does not mean that they are bad abstractions in economics.
... Framing effects can be
addressed in a similar fashion. Experimenters can often manipulate the choices of individuals by restating the decision problem in a different (but equivalent) form. Standard theory interprets a framing effect as a change in the subjective constraints (or information) faced by the decision maker.
It may turn out that a sign that alerts the American tourist to “look right”
alters the decision even though such a sign does not change the set of alternatives. The standard model can incorporate this effect by assuming that the sign changes the set of feasible strategies for the tourist and thereby alters the decision. With the help of the sign, the tourist may be able to
implement
the strategy “always look right then go” while without the sign
this strategy may not be feasible for the tourist. (26 and 27, emphasis added)
This strikes us as a recipe for integrating physiological and neural data into economics — but via the opportunity set, and not through ever more complex
preferences. To the extent that these subjective constraints can be made observable, they will advance the research program advocated in Chapter 6.

7.5 Looking back and looking forward
Our main goal in this book is to cast doubt on business as usual. That goal
is met if the first five chapters of this book have created serious doubt in the reader’s mind that economists are on the right empirical track in the modeling of risky choice as the expectation of some curve. The goal would be exceeded, to our delight, if it diverted talented researchers away from constructing ever more ingeniously parameterized Bernoulli curves (or value
curves, or probability weighting curves) and into more productive activities. Likewise, we hope to divert empirical researchers away from seeking ever more contingencies upon which parameters might depend, and from seeking new instruments for eliciting those parameters. These research programs have
begun to exhibit many of the symptoms Lakatos (1970) mentioned in his descriptions of scientifically degenerate research programs. This may be a good juncture to acknowledge the possibility that we are
wrong. The empirical failure so far conceivably might simply reflect a lack of
focus on out-of-sample prediction, and its inherent difficulty. Indeed, ongoing
work by Wilcox (2012) decries the low statistical power of existing studies and
holds out the hope that larger, better designed studies may yet vindicate some variant of EUT. It is possible that someday someone will publish a reasonably general model of how parameters in a Bernoulli function vary with context and personal characteristics, and will demonstrate that it can out-predict
economically naive models¹⁰ in new contexts for new individuals. Should that
happen, we would have to echo Emily Litella (from the old Saturday Night Live show) and say “Never mind.” However, seven decades of empirical failure persuades us that it is not too early to raise doubts. The last few chapters of the book suggest alternatives to business as usual. The end of Chapter 5 suggests several risk measures that might improve on the usual variance-based measures. Chapter 6 suggests a neoclassical approach that shifts the explanatory burden from preferences to constraints. Of course, these are modest proposals that represent at best a partial or interim solution to the empirical problem. This final chapter mentioned some approaches that go beyond the neoclassical, and draw on evolution, learning, and physiology, in the hope that they might eventually help to construct better models of choice under risk and uncertainty. But we have no special insight here, and
frankly have no idea what a scientifically satisfactory theory of risky choice will look like or where it will come from. Our position is similar to that of classical physicists circa 1900, who had no conception of relativity or quantum mechanics but were acutely aware of fatal empirical gaps in contemporary theory. Or, in our preferred metaphor, to that of chemists circa 1770, who perceived the empirical folly of phlogiston but had no concept of molecular reactions. Antoine Lavoisier came along and during the next decade provided a unified account of oxidation and reduction reactions. It was a crucial step in constructing chemistry as we now know it, but Lavoisier himself did not survive to see his theory come to fruition. He offended Robespierre's henchmen during the French Revolution and was guillotined in Paris in 1794, aged 50.
So our closing note is a call for open minds and tolerance of dissent. As yet we have no scientifically validated understanding of behavior in the presence
of risk (or uncertainty), and treating it as a closed problem inhibits curiosity and new thought. We hope that our work will keep young applied econo-
mists from overconfidently applying existing techniques, and will encourage
young theorists to think more broadly about choice under risk and uncer-
tainty. Even more, we hope for a revolution in that realm, but one that lets its
Lavoisiers live.

Notes
1 To our knowledge, the current record holder is Sadiraj (personal communication, 2012), whose Bernoulli function w(x) = ax + sin bx has a countably infinite number of inflection points for a > 1 and b > 0 (its second derivative, −b² sin bx, changes sign at every multiple of π/b).
2 In recent correspondence to us, Shane Frederick suggested the following elicitation procedure. "Pick x = 1, or x = 2, ..., or x = 100 in the following gamble to be played once for real money: Get $100 / x with probability x / 100 and get zero otherwise." Any risk-averse person would choose x = 100, since every option has expected value $1 and only x = 100 delivers that $1 with certainty. Frederick reports a modal choice of x = 20. One might object that this procedure practically enforces "non-risk-averse" answers because there is only one risk-averse option (x = 100). But a similar objection applies — in the opposite direction — to a procedure such as the original Binswanger list.
3 It might be argued, however, that these effects are only important to the extent that one needs to identify an intrinsic Bernoulli function, which of course is unnecessary in the constraint approach.
4 Specifically, Ramsey and De Finetti showed that someone able to predict changes
in a person’s Bernoulli function can exploit intertemporal intransitivities for riskless gain. The exploiter proposes a sequence of currently desirable trades, one per time period, that taken together result in a riskless transfer of money from the vulnerable
person.
5 For a lively debate of the issues of log utility in a stochastic environment, see Samuelson (1979).
6 Rather, players run more slowly, and sometimes in arcs. Might this latter observation inspire a researcher analyzing such data to just add a feature to the conjectured utility function of baseball players that captured their apparent fondness for running in arcs, and leave it at that? We hope not. In this example, the observed nonlinearity is due to constraints arising from the cognitive process, not an exotic utility function.
7 Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then offers you the option of switching your choice to No. 2. Few people accept the option, even though it doubles the probability of winning the car (switching wins whenever the initial pick hid a goat, which happens with probability 2/3, versus 1/3 for staying).
8 Hayden and Platt (2009a) find that applying a similar protocol (juice-squirts deliv-
ered directly to the mouth, urns whose composition is not announced, subject situated in an anechoic chamber, etc.) to human subjects yields data not unlike that seen with macaques.
9 Bach and Dolan (2012, 583) also point out that
A proliferation of BOLD fMRI studies in humans examining uncertainty across all levels of the proposed hierarchy [proposed by Bach and Dolan, and includ-
ing sensory uncertainty, among other things] are beset with interpretive problems ... and inherent difficulties in linking BOLD responses to precise neural mechanisms.
10 Examples might include discriminant analysis or machine-learning models. To be considered economically naive, such models should not include Bernoulli curves
or structures of comparable sophistication. They should operate directly on data describing the choice alternatives, context, and personal characteristics.
Bibliography
Friedman, M., and Savage, L. J. (1948) “The Utility Analysis of Choices Involving
Risk," Journal of Political Economy 56: 279-304.
Gigerenzer, G. (2005) "I Think, Therefore I Err," Social Research 72(1): 195-218.
Gigerenzer, G., and Gaissmaier, W. (2011) "Heuristic Decision Making," Annual
Review of Psychology 62: 451-482.
Gigerenzer, G., and Hoffrage, U. (1995) "How to Improve Bayesian Reasoning Without Instruction: Frequency Formats," Psychological Review 102(4): 684.
Gigerenzer, G., Todd, P. M., and the ABC Research Group (1999) Simple Heuristics
That Make Us Smart. New York: Oxford University Press. Gul, F., and Pesendorfer, W. (2005) “The Case for Mindless Economics,” Princeton University Working Paper. Hayden, B. Y., and Platt, M. L. (2007) “Temporal Discounting Predicts Risk Sensitivity in Rhesus Macaques,” Current Biology 17: 49-53. Hayden, B. Y., and Platt, M. L. (2009a) “Gambling for Gatorade: Risk-Sensitive
Decision Making for Fluid Rewards in Humans," Animal Cognition 12: 201-207.
Hayden, B.Y. and Platt, M.L. (2009b) “The Mean, The Median, and the St. Petersburg
Paradox,” Judgment and Decision Making 4: 256-265.
Hayden, B. Y., Heilbronner, S. R., Nair, A. C., and Platt, M. L. (2008a) "Cognitive
Influences on Risk-Seeking by Rhesus Macaques,” Judgment and Decision Making 3: 389-395.
Hayden, B. Y., Nair, A. C., McCoy, A. N., and Platt, M. L. (2008b) "Posterior Cingulate Cortex Mediates Outcome-Contingent Allocation of Behavior," Neuron 60: 19-25.
Bach, D. R., and Dolan, R. J. (2012) “Knowing How Much You Don’t Know: A Neural Organization of Uncertainty Estimates,” Nature Reviews Neuroscience
13(8): 572-586.
Barkow, J. H., Cosmides, L., and Tooby, J. (1992) The Adapted Mind. Oxford: Oxford
University Press.
Berg, J., Dickhaut, J., and McCabe, K. (2005) “Risk Preference Instability Across Institutions: A Dilemma,” PNAS, 102: 4209-4214.
Camerer, C., and Ho, H. T. (1999) “Experience Weighted Attraction Learning in Normal Form Games,” Econometrica 67(4): 827-874. Cox, J. C., and James, D. (2012) “Clocks and Trees: Isomorphic Dutch Auctions and Centipede Games,” Econometrica 80(2): 883-903.
Dawkins, R. (2006) The Selfish Gene. Originally published 1976. Oxford: Oxford University Press.
De Finetti, B. (1931) "Sul significato soggettivo della probabilità," Fundamenta Mathematicae 17: 298-329.
Donald, M. (2001) A Mind So Rare. New York: W. W. Norton & Co.
Dorsey, R., and Razzolini, L. (2003) "Explaining Overbidding in First Price Auctions Using Controlled Lotteries," Experimental Economics 6(2): 123-140.
Fiorillo, C. D., Tobler, P. N., and Schultz, W. (2003) "Discrete Coding of Reward
Probability and Uncertainty by Dopamine Neurons,” Science 299: 1898-1902.
Fishburn, P. C., and Kochenberger, G. A. (1979) “Two Piece Von Neumann Morgenstern Utility Functions,” Decision Sciences 10(4): 503-518. Friedman, D. (1989) “The S-Shaped Value Function as a Constrained Optimum,”
American Economic Review 79(5): 1243-1248. Friedman, D. (1998) “Monty Hall’s Three Doors: Construction and Deconstruction of a Choice Anomaly,” American Economic Review 88(4): 933-946.
Hayden, B. Y., Heilbronner, S. R., and Platt, M. L. (2010) "Ambiguity Aversion in Rhesus Macaques," Neuroscience 4: 166.
Heilbronner, S. R., and Hayden, B. Y. (2013) "Contextual Factors Explain Risk-Seeking Preferences in Rhesus Monkeys," Frontiers in Neuroscience 7(7): 1-7.
Isaac, R. M., and James, D. (2000) “Just Who Are You Calling Risk Averse?” Journal of Risk and Uncertainty 20(2): 177-187. James, D., and Reagle, D. (2009) “Experience Weighted Attraction in the First Price Auction and Becker DeGroot Marschak,” in 18th World IMACS Congress and
MODSIM09. http://www.mssanz.org.au/modsim09/D8/james.pdf (accessed June 19, 2013).
Kleffner, D. A., and Ramachandran, V. S. (1992) “On the Perception of Shape From Shading,” Perception and Psychophysics 52: 18-36.
Lakatos, I. (1970) "Falsification and the Methodology of Scientific Research Programmes," in I. Lakatos and A. Musgrave (eds.) Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press.
Louie, K., and Glimcher, P. W. (2012) "Efficient Coding and the Neural Representation of Value," Annals of the New York Academy of Sciences 1251(1): 13-32.
McCoy, A. N., and Platt, M. L. (2005) "Risk-Sensitive Neurons in Macaque Posterior
Cingulate Cortex," Nature Neuroscience 8: 1220-1227.
Markowitz, H. (1952) “Portfolio Selection,” Journal of Finance 7: 77-91. Netzer, N. (2009) “Evolution of Time Preferences and Attitudes Toward Risk,” The American Economic Review 99(3)(June): 937-955. Newell, A. J., Shaw, C., and Simon, H. A. (1957) “Empirical Explorations of the Logic
Theory Machine: A Case Study in Heuristic,” Papers presented at the February 26-28, 1957 Western Joint Computer Conference: Techniques for Reliability. ACM.
O’Neill, M., and Schultz, W. (2010) “Coding of Reward Risk by Orbitofrontal Neurons Is Mostly Distinct From Coding of Reward Value,” Neuron 68: 789-800.
Pleskac, T. J., and Busemeyer, J. R. (2010) "Two-Stage Dynamic Signal Detection: A Theory of Choice, Decision Time, and Confidence," Psychological Review 117(3)(July): 864-901.
Preuschoff, K., Quartz, S., and Bossaerts, P. (2008) "Markowitz in the Brain?" Revue d'Economie Politique 118(1): 75-95.
Ramsey, F. P. (1931) "Truth and Probability," in F. P. Ramsey and R. B. Braithwaite
(co-authors) The Foundations of Mathematics and Other Logical Essays, reprinted
2000, London: Routledge. Rieskamp, J., and Otto, P. (2006) “SSL: A Theory of How
People Learn to Select
Strategies,” Journal of Experimental Psychology: General 135: 207-236. Robson, A. J. (1996) “A Biological Basis for Expected and Non-Expected Utility,”
Journal of Economic Theory 68(2): 397-424. Robson, A. J, and Samuelson, L. (2011) “The Evolutionary Foundations of Preferences,” in J. Benhabib, A. Bisin, and M. O. Jackson (eds.) Handbook of Social
Economics. Amsterdam: North Holland.
Rubin, P. H., and Paul, C. W, Jr. (1979) “An Evolutionary Model of Taste for Risk,” Economic Inquiry 17(4): 585-596.
Samuelson, P. A. (1979) “Why We Should Not Make Mean Log Big Though Years to Act Are Long,” Journal of Banking and Finance 3: 305-307. Savage, L. (1951) The Foundations of Statistics. New York: Wiley. Seeley, T. D. (2001) “Decision Making in Superorganisms: How Collective Wisdom
Arises From the Poorly Informed Masses," in G. Gigerenzer and R. Selten (eds.)
Bounded Rationality: The Adaptive Toolbox. Cambridge, MA: MIT Press. Shaffer, D. M., Krauchunas, S. M., Eddy, M., and McBeath, M. K. (2004) “How Dogs
Navigate to Catch Frisbees," Psychological Science 15(7): 437-441.
Sinn, H. (2002) "Weber's Law and the Biological Evolution of Risk Preferences: The Selective Dominance of the Logarithmic Utility Function," The Geneva Papers on Risk and Insurance Theory 28(2): 87-100.
Sinn, H., and Weichenrieder, A. (1993) “Biological Selection of Risk Preferences,” in
Bayerische Riick (ed.) Risk Is a Construct: Perceptions of Risk Perception. Munich: Knesebeck, pp. 67-83.
Snook, B., Taylor, P. J., and Bennell, C. (2004) “Geographic Profiling: The Fast, Frugal, and Accurate Way,” Applied Cognitive Psychology 18: 105-121.
So, N. Y., and Stuphorn, V. (2010) "Supplementary Eye Field Encodes Option and Action Value for Saccades With Variable Reward," Journal of Neurophysiology 104: 2634-2653.
Wilcox, N. T. (2012) "Predicting Risky Choices Out-of-Context: A Monte Carlo Study," Working Paper, Chapman University.
Woodford, M. (2012) “Prospect Theory as Efficient Perceptual Distortion,” American
Economic Review 102(3): 41-46.
Wu, C. C., Bossaerts, P., and Knutson, B. (2011) "The Affective Impact of Financial Skewness on Neural Activity and Choice," PLoS ONE 6(2).

Index
Abdellaoui, M. 84 absolute risk aversion 16-17, 20, 83,
88, 104 Against the Gods 10 aggregate-level evidence 54-73; aggregate model calibrations 71-2; bond markets 62-3; engineering 59-60; equity premium puzzle 68-70; gambling 56-9; health,
medicine, sports, and illicit drugs
54-6; insurance 60-1; interest parity puzzle 67-8; real estate 61-2; stock markets 63-7 Aguirreamalloa, J. 69 Altman, E. 63 Anderson, L. 55
Apicella, C. 30-1
Armantier, O. 43 Arrow-Pratt 11, 17, 61 auctions; BDM vs auctions 34-5; estimating risk aversion 25
Australia 66
Avendano, L. 69 Azrieli, Y. 33 Bailion, A. 84
Barsky, R. 55
Bayes’ Rule 25 Becker, G. 97, 101--3, 104
Becker-DeGroot—Marschak procedure
25-6, 39; auctions, and 34-5; differences 26 behavioural economics 8, 74f Berg, J. 45-6 Bentham, J. 85 Bernstein, P. 10 Bernoulli, Daniel; bond markets 62-3; consequences of 3-4; context, and 101; diminishing marginal utility, and
85-6; Expected Utility Theory, and 83; gambling, and 57-8; Grayson, and 98; health, medicine, sports, and illicit drugs, and 55; Holt–Laury 26-7; insurance 60-1; lessons of 47-8;
limitations of existing approaches,
and 115-16; logarithmic model 5-6; “lotteries,” and 6; measurement of risk 90; mixed cases, and 107-8;
negative attention of theory 6; new
wave of theory 21; net versus gross payoffs 103-4; Phlogiston, and 82; risk preferences, and 86; shape of utility function 10; solution to problem 7; St. Petersburg 108-9; stock market 2, 12,
16, 66
Binswanger, H. 21-3, 39-41 Black, F. 24, 64-5
Blaug, M. 7, 8, 11, 18
bond markets 62-3; default rate 62-3 Bosch-Domenech, A. 27 Brazil 68-9 Brealey, R. 65 Brown, K. 29-30 Brown, S. 39 Brownian motion 54, 110 Bulan, L. 61-2 Busemeyer, Jerome 121 Caballero, R. 62
capital asset pricing model (CAPM) 63-6, 69
Caraco, T. 100 Castillo, M. 40 certainty equivalent 21, 23, 25, 26, 34, 56 Chambers, C. 33 Chan, L. 65-6
Chetty, R. 72, 74, 108 Chew, S. 83-4
Coates, J. 31-2
Cognitive Reflection Test 38 computational experimental
macroeconomics 71-2 constant absolute risk aversion (CARA)
16, 21, 83
constant relative risk aversion (CRRA)
16, 21, 24-5, 34-5, 48, 49, 55, 69-73, 74, 90, 108 contingency planning 60 counterexamples of risk preference 34-8;
BDM vs. auctions 34-5; large vs. small stakes 35-6
Cox, J. 24, 25, 28, 32-3, 35-8, 43, 45, 48, 120
CRRA see constant relative risk aversion Cumulative Prospect Theory 1, 36,
Fama, Eugene 64-5 Fernandez, P. 69 Fishburn, P. 60, 115 Fisher, L. 63 Fitch 62 Frank, S. 55 Frederick, S. 38-42, 129 French, Kenneth 64-5 Friedman, D. 85, 92, 105, 119-20 Friedman, Milton 8, 9, 10, 12, 20, 56, 85-6, 93, 103, 107, 115, 120-1 free parameters 3, 4, 73, 74, 82-4 Froot, K. 67 Gabaix, X. 70, 71, 73 Gaissmaier, W. 117-18
gambling 56-9; Bernoulli 57; close
metagame, and 46; frequency format 119; Gigerenzer, Gerd, and 117-21; individual learning, and 119;
simulations with automated agents 45;
subjects use of 46-7 Hey, J. 29, 32, 41 Holderness, C. 66 Holt, C. 26~7, 30, 39, 43 Holt—Laury, 26-7, 55; elicitation procedures 27; procedure 26-7 Huber, J. 33
impatience 56 India 21, 66 individual risk preferences 20-48;
auctions 24-5; counterexamples 34-8; elicitation procedures 27; Holt-Laury procedure 26-7; lottery menu 21-3; payment methods 32-4; pie-chart
measurements 29-32 injury 55 insurance 60-1; contractual, legal
Dickhaut, J. 34, 45-6
outcomes 56-7; economists, and 57-8; financial benefits of 58-9; other approaches to 58; psychological literature, and 58, 74; quantitative methodologies, and 58 Games and Decisions 12; Experimental Determinations of Utility 12 gaming industry revenue 56 Garbarino, E. 31
diversification 66-7
Germany 66, 69
Jacobson, S. 39
Dorsey, R. 42 drug addiction 54
Gigerenzer, Gerd 117-21; Bayesian
93, 126
Dasgupta, U. 35-8
Decisions Under Uncertainty: Drilling
Decisions by Oil and Gas Operators 13; lotteries, and 13 Diamond, P. 6, 20
Dillon, J. 22-3
Dohman, T. 39, 55-6
Eckel, C. 31
Edwards, W. 12, 83 Einav, L. 61
empirical failure 128-9
engineering risk analysis 59 entertainment value 57-8
Epstein, L. 70, 73, 74 Epstein, S. 28
equity of premium puzzle 68-70;
identification of 69; Handbook of
Equity Premium 69
Escobal, J. 29, 33
Expected Utility Theory 82-4;
alternative to 83-4; Bernoulli, and 83;
laboratory testing of 3; limitations of existing approaches 115; proposition
of 2; new proposals 83; published variations of 84; value function, and
10, 12, 14-15, 17, 18, 83, 86, 112
expected loss 89-91 Exposition of a New Theory on the Measurement of Risk 5
general risk 55
Ghoshray, A. 67-8
reasoning, and 119
Gitman, L. 65 Gode, D. 45 Goeree, J. 43 Grayson,C. 5, 13, 16, 20, 98-9;
opportunities, and 98
Grenadier, S. 61 Grether, D. 32 Grossman, P. 31 Guiso, L. 66
Gul, F. 127-8
Haliassos, M. 66
Handbook of Equity Premium 69
Harlow, W. 39-40 Harrison, G. 26 health, medicine, sports, and illicit drugs 54-6; Bernoulli 55; recent economic literature, and 55 Healy, P. 33 heart disease 55 heuristics 45—7, 117-21; computers, and 117-18; distribution of opportunities, and 120; Dutch auction/centipede
procedures 28-9; physiological
requirements 61; loss contingency 60;
marketing 61; social context 60 Isaac, R. M. 34-5, 49, 107, 120
James, D. 26, 34-5, 39, 41, 43, 45-6, 107, 120 Japan 67-8 Jappelli, T. 66 Jeanne, O. 72 Jevons, W. 6-7, 18 Johnson, Don 96 Journal of Economic Theory 6 Kachelmeier, S. 26 Kahneman, D.13, 82~3 Kagel, J. 25 Knight, Frank 1-2, 4 Kochenberger, G. 60, 115 Kothari, S. 65
Koszegi—-Rabin model 84 Kydland, F. 71-2
Lakonishok, J. 64-6
large worlds 116—17 Laszlo, S. 29
Laury, S. 26-7
Lavoisier, A. 81 Levin, D. 25 Levy-Garboua, L. 27 Li, D. 67-8
Lintner, J. 64 Lithuania 69 Ljunggqvist, L. 71
loss 2, 4, 9-10, 30, 36, 39, 48, 49, 54, 59-60, 62-3, 67, 71, 74, 83-4,
89-92, 93, 96, 104, 115, 119; aversion 83; expected 89-91; function 25;
probability 88; risk-averse in 13 lotteries: compound 15, 32; expected
value 17; format changing 45; FPSB
auction 43; independent variable
23; institutional design 42; lottery
menu 21-3; lottery treatment payoffs
40; money payoffs 21-2; perception of the institutions 42-5; pie-chart procedures 28-9; reduced 15; regressions of risk preference 23; risky prospects 2, 14; state 58; subject inconsistency 41; variable ratio form 58; VNM axioms 6 Luce, Duncan 12 Maestripieri, D, 39
Malaysia 68
Markowitz, H. 8-9, 12, 20, 56-7, 66, 87-8; “Markowitz in the Brain?” 121-2 Marshall, A. 7, 18, 57, 85 Marshall, J. M. 56, 108 Masson, R. 108 mathematical or problem-solving ability 38-42; Frederick, and 38-9; lottery treatment payoffs 40; results of 47-8; skill tests 39 Mayer, C. 62 McCabe, K. 45-6 McKenzie, A. 82 measurement of risk 88-92; counterexamples 34-7; data
from risk-aversion experiments
38-47; elicitation instruments 21-34; expected loss versus standard deviation 90-2; indirect attempt to
measure 89; list of 88; probability
distributions of 90-1; set of 88-9; VNM theory 20-1
Megginson, W. 65
Mehra, R. 69-70, 71 Mellor, J. 55 Menger, C. 7
Modigliani—Miller Theorem 98-100
Monsteller, F. 12 Moody’s 62-3; Global Long-Term Rating Scale 62-3
Morgenstern, Oskar 5, 6, 8, 12, 15, 20, 73 Morley, B. 67-8 Mosteller, F. 12, 16, 20 Myers, S. 65 National Research Council 56 Neocardinalism 18f Netherlands 66
net versus gross payoffs 103-9; bailouts 108; concave cases 104-5; convex cases 105-7; mixed cases 107-8; St.
Petersburg 108-9
neuroeconomics 121-8; different
regions of brain 126~7; emergence
of field 121-2; further work into 126; neoclassical economics, and 127-8; new wave of theory 20~1; research into 122; single-neuron-firing studies 122-4; studies with humans 124-6; studies with monkeys 122-4; value
encoding 126
New York Stock Exchange 70 Nogee, P. 12, 16, 20
opportunities 96-111; Acme Resource Exploration Corporation, and 111;
context, and 101-3; embedded real options 109-11; Grayson, and 98-100;
net versus gross payoffs 103-9; risky alternatives 96-7; shopping decisions, and 110; Smith & Stultz, and 98-100
option value 59 ordinalism 11
Orme, C. 29, 32, 41 parameter estimates 27, 29, 34, 39,
84, 115
Paté-Cornell, M. 59 payment methods 32-4; arguments 32;
lotteries 33
Payne, J. 33 Pearson, M. 31 Pesendorfer, W. 127-8
Petrie, R. 23-4, 27, 39-41 phlogiston 81-2, 129 physiological measurements 29-32;
2D-4D 30-1; hormones 30; payment methods 32-4; studies of 29-30;
research programmes 31-2; pie-chart procedures 28-9; salivary hormone, and 30~1; variability in hormone levels 31
Picone, G. 55 Placido, L. 84 Plott, C. 32, 35, 39, 46 Pratt, J. 6, 10
Silvestre, M, 27 Skinnerian conditioning 58 Sloan, R. 65 Slonim, R. 31 Smart, S. 65
Prescott, E. 69~70, 71 probability weighting 4, 8, 36, 38, 74, 83-4, 93, 102, 128 prospect theory 1, 2, 36, 60, 70, 74, 82-4,
Smith, C. 24, 98-102
social insurance 72, 74 Somerville, C. 62 standard deviation 88, 90-2; conditional 68 St. Petersburg Paradox 5, 7, 10, 108, 109 Standard & Poor’s 62 Stigler, G. 97, 101-4
93, 112, 119-20, 126 prudence 89
put option 60 Puto, C. 33
Rabin, M. 35-6, 84 Raiffa, Howard 12-13 Ranciére, R. 72 Razzolini, L. 42-3 Reagle, D. 45-6 real options 4, 61-2, 98, 102, 109-11 relative risk aversion 16 returns 5, 10, 63-6, 69, 71, 92 risk-free rate 64-5 risk: alternative to EUT 83-4;
Stiglitz, J. 6, 10
stock markets 63-7, 68, 72; Australia, and 66-7; Bernoulli, and 66; burden of proof 65; capital asset pricing model 64; “Is Beta Dead?” 65; linear relationships of 64-5; United States,
and 66 Stulz, R. 98-102, 106
diminishing marginal utility 85-6;
dispersion of outcomes 2, 10, 54-5, 59-63, 66-71, 81-2, 87-92; harm or injury 2, 4, 10, 54, 59-60, 81, 87, 89-91, 93; intrinsic nature of 86; measurement of 88-92; neutrality 61;
perception of 55, 87-8; preferences
81-93; tail 88 risk premium 63-4, 68-9, 71, 73, 109 Roberson, B. 24-5 Robson, A. 116-17 Rothschild, M. 6, 10 Russia 68 Sadiraj, V. 32-8
Sagi, J. 83-4
Samuelson, L. 11, 18, 116-17 Sargent, T. 71 Savage, Leonard 8, 9, 10, 12, 20, 56, 85-6, 93, 103, 107, 115-16 Scandizzo, P. 22~3
Schipper, B. 30
Schmidt, U. 32-3, 36-8 Scholes, M. 24 Selten, R. 42 Shaffer, D. 118 Shanken, J. 65 Shapley, L. 8 Sharpe, William 64, 65 Shehata 26
Theory of Games and Economic Behaviour 5, 6,8
The Rich Domain of Uncertainty 84 Torero, M. 40 Treich, N. 43 Tropicana Resort & Casino 96
Tversky, A. 13, 36, 82-3, 93, 112 United Kingdom 66, 68-9 United States 56, 58, 60, 63-6, 69-70, 107 Value at Risk (VaR) 88 Varian, H. 11-12 variance 2, 10, 17, 49, 61, 64-6, 87, 89, 92-3, 110, 121-2, 124-6, 128; co- 87; lower 47; semi- 89 Vogt, B. 35-8 Von Neumann, John 5, 6, 8, 12, 15, 20, 73 VNM paradigm 11-12
Sunder, S. 45, 105 Sutter, M. 56 Sydnor, J. 31
Wakker, PP. 84
Taylor, D. 27, 55, 90-1 Thailand 68 Thaler, R. 67 temperance 17, 89-90, 112
yield 63-4, 67, 69, 87
Worthington, Andrew 66 Yaari, M. 36
Zin, S. 70, 73, 74 Zingales, L. 29