Rational Responses to Risks
Rational Responses to Risks

PAUL WEIRICH

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America

© Oxford University Press 2020

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Names: Weirich, Paul, 1946– author.
Title: Rational responses to risks / Paul Weirich, Philosophy Department, University of Missouri.
Description: New York : Oxford University Press, 2020. | Includes bibliographical references and index.
Identifiers: LCCN 2020009356 (print) | LCCN 2020009357 (ebook) | ISBN 9780190089412 (hardback) | ISBN 9780190089436 (epub)
Subjects: LCSH: Certainty. | Risk. | Decision making. | Reason.
Classification: LCC BD171 .W375 2020 (print) | LCC BD171 (ebook) | DDC 121/.63—dc23
LC record available at https://lccn.loc.gov/2020009356
LC ebook record available at https://lccn.loc.gov/2020009357

1 3 5 7 9 8 6 4 2
Printed by Integrated Books International, United States of America
To Michèle
Contents

Preface
Acknowledgments
Introduction

I. RISKS AND ATTITUDES TO THEM
1. Types of Risk
2. Attitudes
3. Rational Attitudes toward Risks

II. ACTS AFFECTING RISKS
4. Evaluation of an Act
5. Rational Management of Risks
6. Combinations of Acts

III. ILLUSTRATIONS AND GENERALIZATIONS
7. Return-Risk Evaluation of Investments
8. Advice about Decisions
9. Regulation of Risks
10. Rolling Back Idealizations

Conclusion
References
Index
Preface

Risk is a prominent topic that many disciplines address. The natural sciences discover and assess risks. The social and behavioral sciences investigate our methods of responding to risks. The practical disciplines, such as finance and public affairs, present advice about handling risks. Philosophy contributes a theoretical perspective on risk. It not only characterizes risks and identifies normatively significant types of risk but also advances general principles of rationality that govern responses to risk, including attitudes to risks and acts to change risks. This book presents a philosophical account of risk that characterizes rational responses to risks and explains why these responses are rational. To accomplish this, it takes attitudes to risk as mental states that cause acts and, when rational, cause acts that are rational because of their normative relations to the attitudes. The explanations it offers arise within a normative model with idealizing assumptions about agents and their decision problems. The model belongs to a systematic research program for relaxing idealizations and generalizing principles of rationality to gain realism.
Acknowledgments

I thank the University of Missouri for research leave during the academic year 2017–2018 and thank the Sorbonne University for office space and a congenial environment during the spring semester of that year. For helpful conversations, I am grateful to Mikael Cozic, Franz Dietrich, Igor Douven, Philippe Mongin, Cédric Paternotte, Wlodek Rabinowicz, Nils-Eric Sahlin, Jean Marc Tallon, and Peter Vallentyne. I received valuable comments during presentations of my work to the Science, Norms, and Decision (SND) research group at the Sorbonne University; the Philosophy Department at the University of Lund; the Medical Ethics Department at the University of Lund; the 2018 conference in Paris on Decision—Theory, Experiments, and Applications (D-TEA); the 2018 meeting in Nantes of the Society for Philosophy of Science (SPS); and the seminar on Decision, Rationality and Interaction/Collective Attitude Formation (séminaire DRI/ColAForm) at the Ecole Normale Supérieure. Two readers for the Press provided detailed comments that prevented several faux pas.
Introduction

Students adopt educational plans to reduce the risk of unemployment. Physicians advise their patients to reduce risky behavior, such as smoking. On behalf of the public, the Food and Drug Administration blocks the marketing of new drugs until tests show that the drugs are safe as well as effective. Individuals and societies try to manage risks rationally. Although a risk is bad in itself, taking a risk sometimes yields good consequences. We want to know which risks are rational to take. This book characterizes risks and explains how to respond rationally to them. Rational responses to risks, although they do not guarantee safety, improve expectations. Understanding risks, so that we can respond rationally to them, raises our prospects for good lives.

The following chapters distinguish types of risk and present principles of rationality for attitudes toward risks and for acts that affect risks. On these foundations, they build a method of using expert information to help others address risks. This chapter outlines the book’s account of risks and rational responses to them. It explains how a philosophical account of risks enriches accounts that other disciplines offer.
I.1. A Philosophical Account of Risks

Some questions about risks are theoretical. A complete theory of risk should say what risk is and how to handle it. A philosophical theory of risk addresses such questions with precision and rigor, and deepens our understanding of risk. It defines the various kinds of risk that rationality distinguishes and advances principles of rationality governing responses to risks according to their kinds. Its definitions of kinds of risk and its principles of rationality are general rather than limited to special cases. The principles for responses to risks cover attitudes to risks and acts affecting risks. These principles explain the rationality of responses to risks.

Rescher (1983), Coleman (1992), and Lewens (2007) survey topics concerning risk, especially morality’s requirements concerning distributions of risks among people in a society, such as the risks that construction of a nuclear power plant imposes on people living near it. The philosophical literature on risk often targets moral issues, but I treat only the rationality of responses to risks. Although I do not specify morality’s demands, I acknowledge their priority and treat just cases in which rational responses to risks are also moral, such as cases in which a home owner in normal circumstances purchases fire insurance.

The interdisciplinary literature on risk mathematically characterizes attitudes to risks, advances measures of risks, and investigates how people respond to risks. I treat risk taken in its ordinary, nontechnical sense, but also taken in a technical sense common in the literature on risk. Rational responses to risks in these senses yield rationality’s evaluation of the options in a decision problem.

Because I characterize rational responses to risks rather than actual responses to risks, I use philosophical methods rather than the empirical methods of the behavioral sciences. I draw on mathematics and formal methods when they help establish philosophical points; formal or abstract methods may prevent distraction by inessential, concrete features of particular cases. A priori methods—applied with philosophical rigor—check definitions, principles, and justifications of principles.

The book’s philosophical perspective on risk complements treatments of risk in the social and behavioral sciences. Philosophy constructs conceptual foundations that ground methods of risk management crafted in disciplines such as economics, psychology, finance, and public affairs. Its taxonomy of normatively significant kinds of risk and its explanation of rationality’s treatment of risks according to kind assist other disciplines in their accounts of risk. I review philosophical work on risk and also nonphilosophical work with philosophical implications, taking insights from the literature to construct a systematic explanatory account of rational risk management. To illustrate the account’s implications for treatment of risks, I address stylized versions of problems that other disciplines treat, such as the problem of justifying government regulations to reduce risks. Understanding risks and rationality’s requirements concerning them improves a free society’s means of publicly justifying its responses to risks to win the public’s support for these responses. Effective public discussion of responses to risks—such as risks from new biotechnology, new financial instruments, and new means of energy production—requires a means of explaining the rationality of a response to a risk.
I.2. Risk

An account of risk should consider risk’s theoretical role. A good account for a philosophical theory may differ from good accounts for theories in other disciplines. I characterize risk for a normative, philosophical theory of risk. This section characterizes risks briefly; the next chapter characterizes them more thoroughly.

A hazardous staircase and a dangerous explosive are sources of risks rather than risks themselves. Also, being at risk is a consequence of a risk and not a risk itself. A risk, in its ordinary sense, is a chance of a bad event. Suppose that smoking doubles the risk of lung cancer. Then because the risk is a chance of lung cancer, smoking doubles this chance. Quantitative chances involve probabilities. A quantitative risk is a probability of a bad event, if probability is understood to be not a number but a state the number represents. A chance of a bad event need not be quantitative, however. The chance may be imprecise, and so not a probability, but just a possibility.

I distinguish (1) risk as a chance of a bad event and (2) risk as variability in the utilities of an act’s possible consequences, that is, the act’s exposure to chance. An act produces just one risk of the latter type, and I call it an act’s risk, or an option’s risk when the act, perhaps not performed, is an option in a decision problem. A bet’s risk arises from the variability in the utilities of the bet’s possible consequences—the possibility of winning money combined with the possibility of losing money.1

Nature creates risks that people identify and mitigate if possible. Nature may impose the risk of an earthquake and the risk of a tornado. A volcano may threaten a nearby village, and the inhabitants may mitigate the risk by moving the village farther from the volcano. People commonly take steps to alleviate risks that nature imposes. People also act in ways that create risks, and they consider whether accompaniments of the risks they create compensate for the risks. Driving on icy roads creates a risk of injury, but a driver’s prospect of reaching her destination may justify the risk. A college graduate may start a new business and thereby risk her savings in hope of success; the prospect of success may compensate for the risk of lost savings.

1 Friedman, Isaac, James, and Sunder (2014), who criticize expected-utility theory as an empirical theory, draw attention to the distinction between a risk in the ordinary sense of a chance of a bad event and an option’s risk in the technical sense of the variability of the utility of an option’s possible outcomes.
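A minimal sketch can make the two senses concrete. The numbers below are invented, utility 0 marks indifference, and standard deviation serves only as one convenient gauge of variability in utilities; none of these choices is the book's official measure.

```python
# Illustrative sketch: two senses of "risk" for a $1 bet on heads.
# Utilities and probabilities are hypothetical; utility 0 marks indifference.
outcomes = {"win $1": (0.5, 1.0), "lose $1": (0.5, -1.0)}  # name: (prob, utility)

# Sense (1): a risk as a chance of a bad event -- here, the chance of losing.
chance_of_bad = sum(p for p, u in outcomes.values() if u < 0)

# Sense (2): the act's risk as variability in the utilities of its possible
# consequences (its exposure to chance), gauged here by standard deviation.
mean_u = sum(p * u for p, u in outcomes.values())
variance = sum(p * (u - mean_u) ** 2 for p, u in outcomes.values())
exposure = variance ** 0.5

print(f"chance of the bad event: {chance_of_bad:.2f}")  # 0.50
print(f"act's exposure to chance: {exposure:.2f}")      # 1.00
```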
To compare tolerating a risky situation with mitigating the situation’s risk, an agent may compare the expected utilities of the two acts. Such comparisons apply alike to risks that nature creates and to risks that humans create, so I generally do not distinguish risks according to their origins.

The risk that I will lose a dollar if I bet it on heads is a risk that arises given a condition. A theory of risk handles such conditional risks. The theory’s treatment of conditional risks may build on probability theory’s treatment of conditional probabilities. In some cases, a conditional risk may involve the probability that a conditional proposition is true, and then its treatment may draw on an account of the truth conditions of conditional propositions.
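Such a comparison reduces to two probability-weighted sums. The sketch below runs one for the volcano case above; every probability and utility in it is hypothetical.

```python
# Hypothetical comparison: tolerate a volcano's risk vs. move the village.
# Each act maps possible outcomes to (probability, utility) pairs.
def expected_utility(act):
    return sum(p * u for p, u in act.values())

tolerate = {"eruption destroys village": (0.10, -100.0),
            "no eruption":                (0.90,    0.0)}
mitigate = {"village moved, eruption":    (0.10,  -20.0),   # cost of moving, lives safe
            "village moved, no eruption": (0.90,  -20.0)}   # cost of moving only

for name, act in [("tolerate", tolerate), ("mitigate", mitigate)]:
    print(name, expected_utility(act))
# tolerate -10.0, mitigate -20.0: with these numbers, tolerating wins;
# raise the eruption probability to 0.3 and mitigating wins instead.
```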
I.3. Responses to Risks

People respond to risks in various ways. Risks commonly cause fear and inhibit action. The risk of death in battle may terrify a soldier and stop his advance toward the enemy. In contrast, mountain climbers may find a thrill in the risk of a fall; the risk may stimulate the flow of adrenalin, sharpen the senses, and generate clarity of purpose. For these reasons, alpinists may seek the risk of climbing.

Responses to risks may be attitudes to risks or actions attempting to change risks. Attitudes to risks may be aversions or attractions. These attitudes divide into those that have narrow scope and those that have wide scope. I distinguish intrinsic and extrinsic aversions to risk according to scope.2 An intrinsic aversion to a risk attends to just the risk’s intrinsic features, whereas an extrinsic aversion to the risk attends also to the risk’s consequences. A risk’s intrinsic utility indicates the strength of an intrinsic aversion to the risk, whereas its extrinsic utility (commonly called its utility, tout court) indicates the strength of an all-things-considered aversion, assuming it exists, to the risk. As Chapter 3 explains, rationality requires intrinsic aversions to risks but permits, and, in some cases, may require, extrinsic attractions to risks. Among intrinsic aversions to risks, some are basic and some derive from other intrinsic aversions. I often assume a basic intrinsic aversion to a risk.
2 Social and behavioral scientists, such as Bénabou and Tirole (2003), distinguish intrinsic and extrinsic motivations in some contexts. For example, they consider whether providing a worker an extrinsic motivation for doing a task well may diminish the worker’s intrinsic motivation for doing the task well.
Risks may exist for nonhuman animals and even for machines, granting that death is bad for an animal and that malfunction is bad for a machine. Risk presumes that some events count as bad, but not necessarily bad with respect to human interests. Nonhuman agents may respond to risks of events bad for them. Mentally mediated responses to risks arise in agents with cognition. Animals are agents of this type and act for reasons of their own. An animal takes steps to avoid risks; it may stop moving to avoid the risk of detection by a predator. A garage door opener, although it acts, lacks cognition. Despite being subject to a risk of malfunction, it does not take mentally mediated steps to reduce this risk; it does not act for reasons of its own. This book’s theory of risk targets any agent for whom some events are bad, who has some autonomous cognitive capacity for evaluating and controlling its acts so that principles of rationality govern its attitudes and acts, and who faces decision problems with options that affect risks. Humans are the prime examples of agents of this type, and so the book’s theory of risk targets humans especially.

Cognitive psychology and behavioral economics describe human responses to risks. Theorists in these fields, for example Kahneman and Tversky (1979), suggest that people systematically violate norms of rationality because of biases or reliance on imperfect heuristics for making choices. Psychology and sociology, as Fischhoff and Kadvany (2011) explain, describe behavioral patterns and social institutions that risks prompt. A theory of rationality, in contrast, evaluates, and does not just describe, a person’s and a society’s responses to risks.
I.4. Rationality

Philosophical treatments of morality and rationality are both normative, that is, both consider how people ought to act. Morality sometimes demands more than does rationality. For example, morality, but not rationality, may demand that a person contribute to charity. However, rationality sometimes demands more than does morality. For example, rationality, but not morality, may demand taking the shortest route home. Because norms of rationality and morality differ, a normative account of risk should identify the type of norm it advances. As mentioned, I advance norms of rationality.

Although we often force adults to meet their moral obligations, say, not to murder, we do not often force adults to meet rationality’s obligations, say, to maximize utility.
Justice requires enforcing certain moral obligations but does not require enforcing rationality’s obligations. Violating rationality’s obligations commonly carries its own punishment. The absence of social mechanisms for enforcing norms of rationality indicates that a person’s interests provide sufficient reasons to comply with the norms.

Aristotle’s famous definition of man as the rational animal takes rationality to be a mental capacity. However, the norms of rationality I treat are evaluative standards for assessing the products of a person’s mental capacity, such as the person’s beliefs, goals, and acts. I treat rationality in its evaluative role. The standards of evaluation I formulate assess responses to risks. Although people often respond rationally to risks, a gambler may irrationally risk a fortune on a hunch that after a string of reds, a roulette wheel will now yield black.

Standards of rationality take account of an agent’s abilities and circumstances. Rationality demands more of an adult than of a child, and more of an adult with time for reflection than of one hurried and distracted. Rationality in its ordinary normative sense imposes standards on free agents. To be rational, a free act must meet its standards. The standards are sensitive to an agent’s abilities and circumstances because failing to meet the standards, and so being irrational, is blameworthy. Limited abilities and difficult circumstances create excuses for not meeting standards that apply to ideal agents in ideal circumstances, such as the standard of utility maximization. However, an agent of any type has no excuse for not meeting standards that apply to him considering his abilities and circumstances.

Some treatments of rationality give it a technical definition to facilitate proving theorems about rationality. Principles that adopt a technical definition of rationality have to establish their normative credentials. They do not inherit the normative force of rationality in its ordinary sense. If a theory defines rationality as maximization of utility, then, according to it, that rationality requires maximizing utility holds by definition and is not a normative principle. This book treats rationality in its ordinary sense so that the book’s claims concerning rationality are normative claims. Its principles of rationality have immediate normative force, because of the normative force of rationality in its ordinary sense, and do not require a demonstration of the normative significance of some technical sense of rationality.3

3 Allingham (2002: Chap. 2) defines rationality as utility maximization, taking a utility function as a representation of a preference ordering of options in a decision problem. He maintains that utility does not explain choices and that diminishing marginal utility is meaningless. To build a theory of rationality in the ordinary sense, Chapter 2 characterizes utility so that it explains choices and so that diminishing marginal utility is meaningful. Broome (2013), without describing the nature of rationality, advances some requirements of rationality, in particular, requirements of coherence. The requirements he advances are not meant to be exhaustive. Gilboa, Postlewaite, Samuelson, and Schmeidler (2017) define a person’s behaving rationally (in a subjective sense) as the person’s endorsing the behavior after an analysis of the behavior. This definition fails to cover some cases. Although reasons robustly support rational behavior, a person may endorse his irrational behavior after analysis because he turns a deaf ear to criticism.

Some theories of rational choice aim for an account of a decision’s rationality given the agent’s doxastic and conative attitudes, as expressed in probability and utility assignments. Because these accounts tolerate irrational attitudes, they give an account of only a decision’s conditional rationality, namely, its rationality granting the agent’s attitudes. A decision is not fully, or nonconditionally, rational if it springs from irrational attitudes. This book explains the rationality of a choice by establishing the rationality of the attitudes that produced the choice. Its theory of rational decisions incorporates an account of rational attitudes concerning risk and assumes rational attitudes to other possible consequences of the options in a decision problem so that it can explain a decision’s nonconditional rationality. Although its theory does not fully specify standards of rationality for doxastic and conative attitudes, its theory advances some standards for conative attitudes concerning risks, and not just standards for acts that affect risks.

I.5. Normative Models

An explanation of the rationality of responses to risks, to simplify, may use a normative model. A model is a set of assumptions, or a set of possible worlds meeting the assumptions, or a single world trimmed of all but features the assumptions specify. Taking the model as a possible world, events such as acts occur in the model. A typical model includes idealizing assumptions about agents and their circumstances, because rationality’s requirements for responses to risks depend on an agent’s abilities and circumstances. Its assumption that agents are cognitively ideal and in ideal circumstances for responding to risks controls for factors that affect rationality’s demands, allowing the model to display the effect of other factors on these demands, in particular, the effect of an agent’s probability and utility assignments. Rational attitudes and acts in the model meet relatively simple standards of rationality that do not extend straightforwardly to realistic cases. However, some realistic cases approximate the ideal cases that a model treats so that results in the model are also approximately accurate in these realistic cases.

A model’s assumptions ground precise principles of rationality for the agents in the model. The principles are a priori and hold in all worlds but have as conditions the assumptions of the model. An account of rationality in a model treats rationality given assumptions but, if the model is well designed, constitutes a step toward an account of rationality without the assumptions. The model draws attention to factors that influence rationality’s requirements for people in ordinary circumstances. After identifying norms for agents in a model, I sometimes relax the model’s idealizations and generalize the norms to move closer to treating real agents in ordinary circumstances. For instance, a simple model assumes that an agent in a decision problem with multiple options has precise probabilities and utilities for each option’s possible outcomes, and a more general model relaxes this idealization to allow for probabilities and utilities that are imprecise because of deficiencies in evidence or experience.
I.6. Rationality in a Normative Model

In a normative model, principles of rationality regulate attitudes to risks and regulate acts that change risks. I attend to types of risk that general principles of rationality distinguish, in particular, the risk of a bad event and the risk that an act’s exposure to chance constitutes. Different general principles of rationality govern responses to these two types of risk. As Chapter 3 explains, rationality imposes stricter requirements on an attitude to a risk of a bad event than on an attitude to an act’s exposure to chance.

The traditional principle of expected-utility maximization, taken as a standard of rationality, makes assumptions about agents, their circumstances, and their decision problems; it operates within a normative model with idealizations and restrictions that Chapters 4 and 5 specify. The principle surveys risks in the sense of chances of bad events that an option creates and uses them, along with prospects of good events that the option creates, to evaluate the option; the option’s expected utility is a probability-weighted sum of the utilities of the option’s possible outcomes. The principle handles risk in the sense of exposure to chance by counting it as a risky option’s consequence and part of a risky option’s outcome. Maximizing expected utility is not necessary for rationality unless an option’s possible outcomes include everything that matters to an agent, including the option’s risk in the sense of its exposure to chance.
Despite widespread endorsement of expected-utility maximization, theorists have not settled on a philosophically rigorous explanation of the rationality of choosing according to options’ expected utilities, so Chapters 4 and 5 offer an explanation of the rationality of this behavior.

When risks occur in combinations, principles of rationality consider their interaction. A combination of risks, each justified in isolation, may together constitute an unjustified risk. For example, although one bet of a dollar on a number at the roulette table may be okay, one hundred bets of a dollar on the same number on a single spin may be too risky. Conversely, a combination of bets not justified in isolation may be justified in combination. An agent may undertake a combination of risks selected so that some risks hedge others and make the overall risk less than any single risk taken in isolation. Although no single risk is justified, the combination of risks may be justified. The risks forming a combination may be undertaken simultaneously or in sequence. Principles of rationality distinguish the two cases despite their similarities. The principles evaluate simultaneous risks as a unit but evaluate sequences of risks by evaluating each risk in the sequence, as Chapter 6 explains. The rationality of a sequence of adoptions of risks arises from following principles governing each risk’s adoption, as these principles take account of the effect of one risk on another.
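The roulette point lends itself to a small computation. The sketch below is a toy model: the bankroll is invented, the payout is made favorable so that a single bet passes the test, and a logarithmic utility function stands in for aversion to exposure, a shortcut the book itself replaces by treating exposure as a consequence. Under these assumptions, each dollar bet raises expected utility while one hundred identical bets on a single spin lower it.

```python
from math import log

WHEEL = 37          # single-zero roulette: 37 equally likely slots
WEALTH = 200.0      # hypothetical bankroll
PAYOUT = 50.0       # hypothetical favorable net payout per $1 (real roulette pays 35)

def eu_of_betting(n_dollars):
    """Expected log-utility of betting n dollars on one number, one spin."""
    p_win = 1 / WHEEL
    win = WEALTH + n_dollars * PAYOUT
    lose = WEALTH - n_dollars
    return p_win * log(win) + (1 - p_win) * log(lose)

status_quo = log(WEALTH)
print(eu_of_betting(1) > status_quo)    # True: one bet is acceptable
print(eu_of_betting(100) > status_quo)  # False: 100 bets on one spin are not
```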
Principles of rationality also distinguish a combination of risks undertaken by an individual from a combination of risks undertaken by individuals in a group. Perhaps one bank’s investment in financial derivatives poses no significant risk to society, whereas all banks investing in these derivatives risks collapse of the entire banking system. The group of banks may create the risk of collapse because of its failure to coordinate the members’ activities. If inadequate communication excuses this failure to coordinate, then the group’s creation of the risk is not irrational, despite being a mistake. Principles of rationality are more demanding for individuals, with unified minds, than for groups, without unified minds. As a result, in some cases a group rationally creates a risk, through a combination of its members’ acts, although an individual does not rationally create a similar risk through a combination of the individual’s acts.

Although rationality’s standards for a group’s acts are less demanding than its standards for an individual’s acts, in some cases an act of a group, such as a government’s starting a war, is weightier than an act of an individual. In these cases, rationality requires a greater effort from the group than from the individual to meet its standards for an act. This point about procedures for meeting standards for acts is compatible with the standards for acts being less demanding for groups than for individuals. Rationality requires of both a group and an individual greater effort in reaching a rational act when the act is consequential than when it is not consequential. Even if groups more often than individuals perform consequential acts, and thus more often face rationality’s strict standards for consequential acts, assuming that an individual’s act and a group’s act are equally consequential, rationality’s standards for the individual’s act are more demanding than for the group’s act because the individual has a unified mind.
I.7. Understanding Risk

A normative theory of risk explains rationality’s requirements for responses to risks, such as attitudes to risks and acts that affect risks. It deepens understanding of risk, its types, and rationality’s requirements concerning its types. It justifies principles for responses to risks and uses the principles to explain what makes a response to a risk rational.

The operationalist method, which some decision theorists employ, treats an agent’s acts and does not reveal an agent’s way of framing a decision problem. An agent’s acts do not reveal the agent’s deliberations, for example, the options the agent reviewed. Operationalism puts aside the mental states that provide the means of explaining the rationality of choices. An explanatory decision theory acknowledges these mental states even if they do not have precisely accurate operational definitions.

A common approach to rational choice stops with an account of consistent choices. A normative theory justifies standards of rational choice that extend beyond standards of consistency. The explanation of a single choice’s rationality depends on the agent’s doxastic and conative attitudes as well as on its consistency with other choices. Binmore (2009: 4, 20) states that economics uses consistency of choices as the foundation of its evaluation of choices to avoid controversies surrounding additional requirements of rationality, even though this limits economics’ power to explain the rationality of choices. This book addresses controversies about rational choice so that it may explain what makes choices rational.
One approach to rational choice focuses on the representation of preferences among the options in a decision problem as following expected utilities, taking their representability this way as a type of consistency. Given that preferences among options meet various conditions, representation theorems from measurement theory show how to represent the preferences as following expected utilities and then how to infer from the preferences probabilities and utilities of options’ possible outcomes. This representational approach derives attitudes toward risk from an agent’s preferences among risky options. It does not explain the rationality of attitudes to risk and their effect on the utilities of acts. It does not explain the rationality of choices that maximize expected utilities, in particular, the rationality of choices among options that involve risk.4

A preference ranking of options may have many representations. Consider representing, for a state S that may hold, an agent’s indifference between a gamble on S and, at the same stakes, a gamble on ~S. One representation uses a credence function that assigns 0.5 to both S and ~S. Another uses an irrational credence function that assigns 0.4 to both S and ~S. As explanations, the two representations compete. Perhaps only the first representation explains the agent’s indifference, because only it accurately represents the agent’s credences. Mental states that are hard to access publicly, and thus perhaps an unsatisfactory basis for science, may explain the agent’s indifference.5

4 Hampton (1994), Zynda (2000), and Meacham and Weisberg (2011) make this point about explanation.

5 Gilboa, Postlewaite, Samuelson, and Schmeidler (2017) note that an expected-utility representation of an agent’s preferences may represent an agent as having probability assignments that do not represent the agent’s degrees of belief.
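That the two representations fit the same preferences can be checked directly. In this sketch, with arbitrary illustrative stakes, the coherent 0.5/0.5 credence function and the incoherent 0.4/0.4 function assign the two gambles equal expected utility, so each representation fits the indifference; only facts about the agent's actual credences favor one as the explanation.

```python
# Two credence functions that both fit indifference between a gamble on S
# and a gamble on ~S at the same (arbitrary, illustrative) stakes.
WIN, LOSE = 10.0, -10.0   # utility of winning/losing the stake

def eu_gamble(p_event, p_other):
    """EU of a gamble that wins if the event holds and loses otherwise."""
    return p_event * WIN + p_other * LOSE

for cr_S, cr_notS in [(0.5, 0.5), (0.4, 0.4)]:   # second function is incoherent
    on_S = eu_gamble(cr_S, cr_notS)
    on_notS = eu_gamble(cr_notS, cr_S)
    print(f"cr(S)={cr_S}: EU(gamble on S)={on_S}, EU(gamble on ~S)={on_notS}")
# Both lines show equal expected utilities, so both functions represent
# the indifference, though only one may match the agent's credences.
```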
Because my treatment of risk has an explanatory objective, it does not stop with a representation of preferences among risky options. I use attitudes to risks to explain the rationality of preferences and do not treat the attitudes just as artifacts of the representation of preferences. I treat an option’s risk as a consequence of the option and show how rationality governs attitudes to this consequence. I show how rational assessments of an option’s risk and its other consequences combine to yield a rational assessment of a risky option, and then a rational decision concerning the option’s adoption.

Chapter 4 advances a normatively strong version of the principle of expected-utility maximization that does not say just that choices should be “as if” maximizing expected utility—that is, representable as maximizing expected utility using probability and utility functions constructed for the purpose—but also says that choices should actually maximize expected utility, as computed using probability and utility functions that represent, respectively, an agent’s degrees of belief and degrees of desire. A maximization principle with this strong normative interpretation explains the rationality of a choice and not just its consistency with other choices.

In some decision problems concerning risks, a rational agent may not form relevant probabilities and utilities, even with unlimited reflection. To accommodate these decision problems, I generalize expected-utility maximization to obtain a principle of rationality governing choices that rest on imprecise probabilities and utilities. The principle deems rational any option that maximizes expected utility according to some evidentially and experientially admissible pair of a probability assignment and a utility assignment for the possible outcomes of options.6

6 Good (1952: 114) proposes this sort of generalization of the principle of expected-utility maximization.
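Good's generalization admits a simple sketch. The rule below deems an option rational just in case it maximizes expected utility under at least one admissible probability-utility pair; the pairs used are hypothetical, and which pairs evidence and experience admit is a question the book, not the code, addresses.

```python
# Sketch of the generalized rule: an option is rational iff it maximizes
# expected utility under SOME evidentially admissible (probability, utility)
# pair. The admissible pairs below are hypothetical.
def eu(option, prob, util):
    return sum(prob[s] * util[(option, s)] for s in prob)

options = ["a", "b"]
util = {("a", "s1"): 10, ("a", "s2"): -5, ("b", "s1"): 2, ("b", "s2"): 2}
admissible_pairs = [({"s1": 0.3, "s2": 0.7}, util),   # pessimistic credences
                    ({"s1": 0.6, "s2": 0.4}, util)]   # optimistic credences

def rational_options(options, pairs):
    rational = set()
    for prob, u in pairs:
        best = max(eu(o, prob, u) for o in options)
        rational |= {o for o in options if eu(o, prob, u) == best}
    return rational

print(rational_options(options, admissible_pairs))
# {'a', 'b'}: each option maximizes expected utility under some admissible pair.
```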
A normative account of risk explains why certain responses to risks are rational. I show that the prevalence of aversion to risk is not a coincidence but the result of a requirement of rationality. I explain the rationality of the utilities an agent attaches to a choice’s possible outcomes in a calculation of its expected utility and explain the rationality of combining these utilities to evaluate the choice. I use introspectively accessible mental attitudes to obtain an evaluation of an option so that an agent can do the same, and do not simply infer the option’s evaluation from the agent’s preferences among options. I ground principles for responding to risks in psychological attitudes toward risk defined independently of choices, rather than in probability and utility functions constructed from choices, and thereby increase the normative strength of principles of risk management. Psychological attitudes defined independently of choices, and independently of fundamental principles of rationality governing choices, ground probabilities and utilities that justify choices maximizing expected utility.

Although I extract parts of my account of risk from the literature, as references indicate, I unify the parts and fill gaps to create a systematic theory of risk. I endorse familiar points about risk, such as the rationality of maximizing expected utility, but improve the formulation of, and argumentation for, these points to deepen their justification. As Hansson (2014) observes, the disciplines treating risk differ about the nature of risk and rational responses to it. I reconcile their perspectives and resolve disputes. To form a theory of rational responses to risks, I articulate and defend precise, general principles concerning the nature of risk, the rationality of attitudes toward risk, and rational action to alter risks.
I.8. Orientation

To locate my theory of risk in the landscape of extant theories, I sketch (and in the following chapters elaborate) the features of my theory that distinguish it from its rivals.

First, I treat risk as a consequence of an option in a calculation of the option’s expected utility. This departs from some prominent theories of rational choice. Savage ([1954] 1972) omits risk from an act’s consequences, and so from its possible outcomes. He counts on the utilities of other consequences, and the formula for an act’s expected utility, to handle any rational aversion to risk. Arrow (1965, 1970) and Pratt (1964) note that attitudes to risk may shape the utility curve for consequences such as money—aversion may make the curve concave when viewed from below—and Savage may rely on such effects to accommodate rational aversion to risk. Allais (1953) argues forcefully that Savage’s approach does not adequately handle rational aversion to risk. In response to the objection, Buchak (2013) formulates risk-weighted expected-utility theory, which modifies the expected-utility formula for an act’s evaluation to include risk weights for probabilities of possible outcomes, but she does not put an act’s risk among its consequences.

Second, I adopt a version of the expected-utility principle that calculates expected utilities using probability and utility assignments defined independently of preferences among options. Savage and Buchak adopt a representational version of the expected-utility principle, with Buchak adding risk weights. Their principle holds that rationality requires preferences among options in a decision problem to be as if following the expected utilities of options, perhaps adjusted by risk weights. Jeffrey ([1965] 1990) also adopts a representational version of the expected-utility principle, while putting an option’s risk among its consequences. Such representational versions of the expected-utility principle are easier to defend than stronger, literal versions, such as mine, that require preferences among options to follow expected utilities defined independently of preferences among options. According to a representational version, preferences explain probabilities and utilities of possible outcomes; whereas according to a literal version, probabilities and utilities explain preferences.
Table I.1 Comparison of Theories of Rational Choice

                                    Savage            Jeffrey           Buchak            RRR
Expected-utility principle          representational  representational  representational  literal
An act's risk as its consequence    no                yes               no                yes
[Figure I.1 Consequences Taken Narrowly and Taken Broadly. Two trees analyze the gamble, with possible consequences at terminal nodes: one tree, with monetary consequences only, has terminal nodes $1,200 and –$1,000; the other, with risk included as a consequence, has terminal nodes “$1,200, the gamble’s risk, other consequences” and “–$1,000, the gamble’s risk, other consequences.”]
Table I.1 displays the distinctive features of the theories mentioned, with RRR standing for my theory.7 To illustrate differences among the theories, suppose that an agent must decide whether to take (1) a gamble that yields either a loss of $1,000 or a gain of $1,200, each with a probability of 50%, or (2) $100 for sure. The expected monetary value of the gamble is $100, but the agent might prefer taking the $100 for sure because the gamble creates a risk of a loss. Arrow and Pratt handle an agent’s aversion to risk through the agent’s assignment of utilities to gains and losses of money, and Savage may do the same. Buchak handles aversion to risk through risk weights on probabilities that downgrade the evaluation of the gamble. My theory evaluates the gamble considering its consequences if it wins and its consequences if it loses. Each set of consequences includes the gamble’s risk, a consequence to which a typical agent is averse. The trees in Figure I.1, putting possible consequences at terminal nodes, show the difference in analyses of the gamble. According to Savage, the agent’s preference for the sure $100 to the gamble, and other preferences, make the magnitude of the utility of gaining $1,200 less than the magnitude of the utility of losing $1,000.

7 Bell and Raiffa (1988) present a view similar to mine. They treat risk as a feature of an option to which an agent may have an intrinsic aversion.
According to Buchak, the preference, and other preferences, make the risk-weighted probability of winning less than the risk-weighted probability of losing. These results make expected utility, and risk-weighted expected utility, less for the gamble than for the sure $100. However, simplifying the gamble’s consequences comes with the cost of constraining utilities of possible consequences, or complicating the expected-utility principle, and requires a weak, representational version of the expected-utility principle. On the other hand, putting risk in the gamble’s consequences simplifies the expected-utility principle and permits a strong, literal version of the principle. Later chapters elaborate and support these claims.
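A toy calculation shows how placing the gamble's risk among its consequences can reverse the ranking. The disutility assigned to the gamble's exposure below is invented; the book's later chapters, not this sketch, say how a rational agent sets such a value.

```python
# The gamble: lose $1,000 or gain $1,200, each with probability 0.5,
# versus $100 for sure. Utilities here are linear in dollars (illustrative),
# and the gamble's exposure to chance carries an invented disutility.
RISK_DISUTILITY = 150.0   # hypothetical intrinsic aversion to the exposure

def eu_gamble(include_risk):
    penalty = RISK_DISUTILITY if include_risk else 0.0
    return 0.5 * (1200 - penalty) + 0.5 * (-1000 - penalty)

print(eu_gamble(include_risk=False))  # 100.0: ties the sure $100
print(eu_gamble(include_risk=True))   # -50.0: the sure $100 now wins
```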
I.9. Chapters to Come

Chapters 1 and 2 provide the conceptual foundations for a normative theory of risk. Chapters 3 through 6 build the theory, and Chapters 7 through 9 illustrate it. Chapter 10 sketches directions for the theory’s generalization.

Chapter 1 characterizes risk and its types, and reviews causes and effects of risks, for example, the way a paucity of evidence concerning an option’s possible consequences augments the option’s risk. Chapter 2 distinguishes types of attitude to risk, and Chapter 3 formulates, within a normative model, rationality’s requirements for attitudes to risks according to their type. Chapter 4 presents structural relations among rational attitudes toward risks and shows how these structural relations simplify choices; it uses these relations to justify expected-utility and mean-risk evaluations of options in decision problems. Chapter 5 explains rational choice in a decision problem with options involving risks. Chapter 6 evaluates combinations of choices that generate risks, including sequences of choices, using points about the interaction of risks. In some cases, as in hedging bets, addition of a risk lowers total risk. Chapter 7 presents and then justifies financial management’s risk-return method of evaluating an investment by establishing, under common assumptions, a way in which the intrinsic utility of an investment’s risk is independent of the expected utility of the investment’s return. Chapter 8 uses the preceding chapters’ theory of risk to guide advice a member of a profession gives a client, and Chapter 9 uses the theory to guide a government’s regulation of risks. Chapter 10 explores ways of generalizing the theory’s normative model by removing idealizations.
I.10. Summary of the Introduction

Rationality regulates attitudes toward risks, evaluations of risky options, and ways of managing risks. Justifications of rationality’s requirements distinguish types of risk because a rational response to a risk depends on the risk’s type. A philosophical account of risk explains what makes a response to risk rational. It deepens understanding of rationality’s requirements for responses to risks.
PART I
RISKS AND ATTITUDES TO THEM

Does a risk, such as the risk of failing a test, have a propositional representation so that it may be the object of a propositional attitude? This part argues that it does and consequently that an aversion to a risk resembles other aversions and may affect evaluations of acts as other aversions do. The main topics are what risk is, what attitudes an agent may have to a risk, and what attitudes are rational to have.
1
Types of Risk

This chapter characterizes two types of risk. The first type of risk is a chance of a bad event, and the second type is an act’s exposure to chance. Within a theory of rationality, the two types of risk are normatively significant because, as later chapters explain, different general principles of rationality govern responses to them.
1.1. A Chance of a Bad Event

As ordinarily understood, a risk is a chance of a bad event. If the chance is quantitative, the risk is the existence of a probability of the bad event, in particular, a probability strictly between zero and one, that is, the bad event’s having a probability in that range. Calling the risk a probability of a bad event, rather than the existence of a probability of a bad event, takes the probability to be, not a number, but a state of affairs that a number represents. Probability has a dual usage. Saying that the probability of rain is 50% may, according to context, take the probability to be either a number or the meteorological state of affairs that the number represents. Calling a risk a probability creates a context that adopts the second usage, as a risk is not a number but a state of affairs.

This section fills out the definition of risk with supplementary accounts of chance, badness, and an event. It starts with badness.
1.1.1. Badness

A risk threatens a bad event and so presumes that some event is bad; it presumes an evaluation of events. Standards of rational choice use an agent’s evaluation of events. So, I take badness to be relative to an agent and take an event to be bad for an agent if the agent is averse to the event. An event, if an agent has an aversion to it of a precise intensity, receives a utility assignment from the agent. Its subjective utility records the strength of the agent’s aversion.
A bad event has a negative utility for the agent, assuming that indifference is the zero point for the utility scale. Sometimes the status quo serves as the zero point for a utility scale. Gains with respect to it receive positive utilities, and losses receive negative utilities. Although bad events typically are bad compared to the status quo, in some contexts the status quo itself may count as bad, and an improvement with respect to it may still count as bad. For example, a war’s continuing with fewer casualties may count as bad even if it is an improvement with respect to the status quo, including the current pace of the war. In such a case, a bad event that has a negative utility on a scale using indifference as a zero point may have a positive utility on a scale that uses the status quo as a zero point.

Other accounts of an event’s badness for an agent are possible, for example, being contrary to the agent’s interests. These other senses of an event’s badness lead to other senses of risk. For an agent, a risk of a bad event is prudential if the event is bad because it is contrary to the agent’s interests. It is personal if the event is bad because the agent is personally averse to the event. I treat personal risks, but within a model that assumes that agents are fully rational and so have aversions appropriately responsive to prudential considerations in their ken.

A completely objective risk involves an objective chance of an objectively bad event. In some contexts, for example, when evaluating a government’s regulation to reduce risks, an appropriate evaluation may consider only objective risks of harm to agents rather than subjective risks of realizing agents’ aversions. To reduce a subjective risk of realizing an agent’s aversion, the right tactic may be the agent’s eliminating the aversion rather than the government’s preventing the aversion’s realization. To prevent fright that bats cause, it may be better for people to overcome their fears than for the government to reduce the number of bats.
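The relation between the two scales is a simple translation. In the sketch below, with invented utilities for the war example, an event scores negative on the indifference-zero scale yet positive on the status-quo-zero scale.

```python
# Rescaling utilities between two zero points. Numbers are invented.
u_indifference = {               # utilities with indifference as zero
    "status quo (war at current pace)": -8.0,
    "war continues, fewer casualties":  -5.0,
    "peace":                            10.0,
}

# On the status-quo scale, subtract the status quo's utility from each event's.
u0 = u_indifference["status quo (war at current pace)"]
u_status_quo = {event: u - u0 for event, u in u_indifference.items()}

print(u_status_quo["war continues, fewer casualties"])  # 3.0: positive here,
# although the event's utility is negative on the indifference scale.
```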
1.1.2. Chance

A chance of a bad event may arise from either a possibility or a probability of the event. In Chapter 10, I treat risks that are possibilities of bad events, but elsewhere I treat risks that are probabilities of bad events, including risks involving probabilities that are imprecise.

Probability theory, as in Walley (1991: Chap. 1), recognizes two types of probability, physical probability and evidential probability, that is, probability relative to evidence.
During a courtroom trial, the physical probability that the defendant is guilty is either 0% or 100% because past events settle innocence or guilt. The physical probability is not sensitive to information. In contrast, the evidential probability for a juror that the defendant is guilty varies with information presented during the trial. At an early stage of the trial, the probability may be low. At the end of the trial, it may be high. A nonextreme probability that a court will punish an innocent person is evidential.

Nonextreme physical probabilities, with either precise or imprecise values, exist in an indeterministic world. Decay of an atom of U235 during a year has a nonextreme physical probability according to physical theories that take the process of decay to be indeterministic. A nonextreme physical probability may exist even in a deterministic world if it is taken as an evidential probability that remains nonextreme after learning all that is humanly possible. The probability of heads on a coin toss may be physical in this sense.

An evidential probability that is relative to a person’s evidence is personal. A cognitively ideal person has access to her personal, evidential probabilities because she knows the evidence she possesses. An evidential probability is impersonal when, instead of being relative to a person’s evidence, it is relative to a body of evidence, which may be collected from multiple sources. The evidence may then be inaccessible, even for an ideal person, because of obstacles to surveying the whole body of evidence. A smoker’s risk of lung cancer, taken as relative to all the evidence linking smoking and lung cancer, may be relative to evidence that is inaccessible to the smoker.

Frequentist interpretations of probability target physical probabilities, and Bayesian interpretations of probability target evidential probabilities. A known physical risk yields an evidential risk of the same size, according to a principle of direct inference linking physical and evidential probabilities. The Principal Principle that Lewis ([1980] 1986) formulates is such a principle of direct inference, but a principle of direct inference need not adopt Lewis’s view that physical probabilities reduce to constraints on doxastic attitudes; they may be irreducibly physical in an indeterministic world.

A probability of an event is a chance of the event. The probability may be physical or evidential, so a chance may be physical or evidential. Some theorists reserve the word “chance” for a physical probability, but I allow for a chance that is an evidential probability.1

1 The chance that in a poker game the next card dealt is an ace is an epistemic chance that arises from ignorance of the order of cards in the deck. Salmón (2019) uses chance in an evidential, and so epistemic, sense.
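In a common simplified rendering (not the book's own formulation), the Principal Principle links a rational initial credence function cr to physical chance ch: conditional on the chance of A being x, together with any admissible evidence E, credence in A should be x:

$\mathrm{cr}(A \mid \mathrm{ch}(A) = x \;\&\; E) = x$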
A chance in the ordinary sense that I use is any probability, including an evidential probability, and not just a physical probability. A physical chance, if known, creates an evidential chance, but an evidential chance may exist without a corresponding physical chance. A rational agent may believe a danger exists that is not physically present.

A probability of a bad event creates a risk, so corresponding to the two types of probability are two types of risk. One arises from physical probabilities and is independent of information; the other arises from evidential probabilities and is information-sensitive. A risk that is a physical chance of a bad event is partly but not entirely physical. The physical chance belongs to physics, but not the classification of the event as bad. Nonetheless, I classify risks as either physical or evidential according to whether the chances they involve are physical or evidential.2

Science reveals physical risks, such as the risk of cancer, which nature creates. In contrast, human uncertainty is a source of evidential risks. To identify evidential risks, an agent reviews epistemic possibilities. When you eat a wild mushroom, you run no physical risk if the mushroom belongs to a safe species, but you still run an evidential risk if you do not know that the mushroom is safe. Rides in amusement parks create the illusion of risk. One seems to be at risk, and this typically creates a thrill during the ride, although in fact one is not at risk. One faces neither a physical nor an evidential risk of injury. Although an evidential risk is relative to one’s evidence, it is relative to all one’s evidence and not just relative to a salient fragment of the evidence, such as the experience of a roller coaster’s speed. Suppose that a physician tells a patient that, because the patient’s brother has contracted colon cancer, the patient’s risk of colon cancer has increased. The physician has in mind evidential risk because the brother’s condition does not affect the patient’s physical risk of cancer.
2 Sahlin and Persson (2004) use the term “epistemic risk” instead of the term “evidential risk.” I favor the term “evidential risk” because being relative to evidence is a particular way for a probability to be epistemic.
People sometimes speak of a perceived risk when they mean an evidential risk rather than a physical risk, but other times they mean that a risk appears to be, but may not be, a physical risk. In ordinary usage, a perception that a risk exists is a belief that a risk exists. Accordingly, a perceived risk entails a belief about a type of risk and is not a new type of risk. A technical definition of a perceived risk may make it a type of risk different from an evidential risk and also different from a physical risk, but the theory of risk I construct treats only risks in the ordinary sense, and so just evidential and physical risks.3

3 Slovic (2016) offers several studies of the perception of risk.

Principles of rationality treat differently physical and evidential risks. If the risk of a bad event involves a physical probability, and so is a physical risk, then changing the environment to reduce the risk is often an appropriate step. However, if the risk involves an evidential probability, and so is an evidential risk, then gathering evidence bearing on the bad event’s occurrence may be an appropriate step. Given that the risk of a meltdown at a nuclear power plant is physical, an appropriate course of action may introduce safety measures, such as containment walls, to reduce the risk. Given that a patient’s risk of leukemia from a new genetic therapy is evidential, an appropriate course of action may be gathering data concerning the therapy’s consequences. Changing physical risks generally requires new procedures that alter the physical environment. However, simply collecting data may change an information-sensitive, evidential probability and so may change an evidential risk involving the probability.

An evidential probability depends on evidence, and I assume that evidence completely settles an ideal agent’s doxastic state, leaving no room for the exercise of epistemic tastes, so that when evidence is sufficient for a probability assignment to an event, it settles a unique probability assignment.4 An event’s evidential probability does not differ for two agents with the same total evidence, including experiential evidence. However, evidence when sparse need not settle a precise evidential probability for a proposition but may instead settle an interval of probability assignments to the proposition. Evidential probability, taken as rational degree of belief, is Bayesian in the sense of conforming to the standard axioms of probability and the principle of conditionalization, but is not a type of subjective or personal probability that may vary among agents with the same evidence. Neither is it an objective Bayesian probability, taken as a probability settled by the principle of maximum entropy, a version of the principle of indifference. An evidential probability represents an objective response to evidence following as yet unformulated principles of rationality. Because the formulation of these principles requires solving hard problems in inductive logic, my characterization of evidential probability lacks a general method of specifying a proposition’s evidential probability relative to a body of evidence. However, such a general method is not necessary for my points about risk.

4 White (2005) argues that permissiveness about a doxastic response to evidence has implausible implications.

When evidence is scant or mixed, an evidential probability may not have a sharp value. A set of probability assignments may then represent a rational ideal agent’s doxastic response to the evidence; they exhibit the constraints that the evidence imposes on rational degrees of belief. A representation of an evidential probability without a sharp value is the set of values assigned to it by a set of probability assignments that the evidence admits. The literature calls such an evidential probability an imprecise probability.5

5 Kim (2016) argues that the theory of imprecise probability lacks a good method of updating imprecise probabilities when the immediate effect of a new experience is an imprecise change in the probability of a proposition. A full theory of evidential probability may need principles of updating besides the principle of conditionalization.

Utility assignments for bets respond to evidence concerning states that make the bets win and states that make the bets lose. Imprecise utilities for the bets arise from imprecise probabilities for the states. In general, an imprecise utility for an act may arise from imprecise probabilities for the act’s possible outcomes. Also, when experience is sparse, and adequate information does not replace experience, a utility assignment to an act’s possible outcomes, no matter how finely specified, may be imprecise. Then a set of utility assignments to maximally specified possible outcomes may represent a rational agent’s conative response to his experiences. In the general case, a set of pairs of (1) a probability assignment and (2) a utility assignment represents an agent’s doxastic and conative attitudes, which arise from the agent’s evidence and experience. The representation may use multiple pairs because sparse evidence for probability assignments and limited experience for utility assignments generate imprecise probabilities and utilities. These imprecise probabilities and utilities may in turn generate imprecise evidential risks.
5 Kim (2016) argues that the theory of imprecise probability lacks a good method of updating imprecise probabilities when the immediate effect of a new experience is an imprecise change in the probability of a proposition. A full theory of evidential probability may need principles of updating besides the principle of conditionalization.
Types of Risk 25 simply in the representation’s failing to use a single probability. An analogous point applies to an imprecise utility. Principles of rational choice use personal, evidential probabilities, that is, probabilities relative to a person’s evidence. I take an evidential probability as a cognitively ideal agent’s rational degree of belief. Rationality imposes structure on the agent’s degrees of belief, in particular, as mentioned, conformity with the standard laws of probability. When using utility to represent an agent’s conative attitudes, such as desires and aversions, I assume that the agent is rational and ideal so that the agent’s conative attitudes have a structure that permits their representation by a utility function that conforms with an expected-utility principle asserting that an act’s utility equals its expected utility. Chapter 5 justifies this principle. A conditional risk is a risk relative to a condition. Conditional risks involve conditional probabilities, and conditional probabilities come in a variety of types. To supplement standard conditional probability, causal decision theory introduces a causal type of conditional probability. A risk of a bad event given an option in a decision problem may involve the event’s probability image with respect to the option, which Joyce (1999) explains. Weirich (2001a: 123–34) uses for an option’s evaluation a type of conditional probability more general than a probability image. The various types of conditional probability generate various types of conditional risk. I specify the type of conditional probability when necessary for understanding a point about a conditional risk.
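To make the set-of-assignments representation concrete, here is a minimal Python sketch; it is an illustration, not the book’s formalism. The three admissible priors and the propositions A and E are invented, and each prior is updated by standard conditionalization, so the evidential probability of A given E is represented by an interval rather than a sharp value.

```python
# An imprecise evidential probability represented as a set of probability
# assignments over the four conjunctions of a proposition A and evidence E.
# The priors are invented; each conforms to the standard probability axioms.
priors = [
    {"A&E": 0.20, "A&~E": 0.30, "~A&E": 0.10, "~A&~E": 0.40},
    {"A&E": 0.30, "A&~E": 0.20, "~A&E": 0.20, "~A&~E": 0.30},
    {"A&E": 0.25, "A&~E": 0.25, "~A&E": 0.15, "~A&~E": 0.35},
]

def posterior_of_a_given_e(p):
    """Update by conditionalization: P(A | E) = P(A & E) / P(E)."""
    return p["A&E"] / (p["A&E"] + p["~A&E"])

# Updating every admissible assignment yields a set of posterior values whose
# spread represents the imprecision that sparse evidence leaves.
posteriors = [posterior_of_a_given_e(p) for p in priors]
print(f"P(A | E) lies in [{min(posteriors):.3f}, {max(posteriors):.3f}]")
# P(A | E) lies in [0.600, 0.667]
```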
1.1.3. Events A proposition is a basic bearer of a truth-value with a structure that its sentential expression indicates. An agent understands a proposition a certain way, say, by means of a sentence expressing the proposition. An event, as I take it, has a propositional representation. For an agent, an event’s evidential probability and subjective utility depend on a way of understanding the proposition representing the event. An agent’s attitude to a risk of the event therefore also depends on a way of understanding the proposition. A rational ideal agent who does not know that Peter Hempel is Carl Hempel may have different attitudes to the risk of losing a letter from Peter Hempel and to the risk of losing a letter from Carl Hempel, although the events, propositionally individuated, are the same. The different attitudes arise from
different expressions of, and so different understandings of, the proposition representing the event, as Section 10.2 explains. An agent’s attitude to a risk may depend on how the agent understands the risk, and this understanding may depend on the risk’s expression. Adoption of a normative model that regiments expression of propositions staves off inconsistency in attitudes to the same risk expressed different ways. I use a normative model that adopts the simplification, elaborated in Weirich (2010b, 2010c), that all propositions to which an agent assigns a probability or a utility are expressed with sentential names that the agent fully understands. Within the model, no cases arise in which a rational ideal agent assigns a proposition two probabilities or two utilities.

Dietrich and List (2013, 2016a) treat preferences among options that an agent assesses using an option’s salient properties in its context; accordingly, a bundle of properties represents an option in a context, and a preference between property bundles represents a preference between options in the context. They take an option’s properties as the reasons for an agent’s attitude to the option.6 Distinctions among propositions match distinctions among contextually dependent property bundles. Suppose that the number and size of the apples on offer influence the apple a polite person chooses; he prefers taking the second-biggest apple. Such contextual features also distinguish the proposition that the person takes a certain apple when there is another bigger apple available from the proposition that the person takes the apple when it is the biggest apple available.

A rational agent uses reasons for acts to evaluate acts and also to form preferences among acts. Propositions may present reasons. Suppose that a person drinks to quench her thirst. That her thirst will be quenched is part of the outcome of drinking and so among her reasons to drink. So that probabilities and utilities alike attach to propositions, I take a utility to attach to a proposition. Reasons for a proposition’s utility assignment involve the propositions that would be true if the proposition were true. Because utilities

6 Dietrich and List (2016a) adopt the project of using an agent’s ranking of bundles of properties to explain, and not just to represent, an agent’s preferences among options. Their explanation of preferences among options can accommodate properties of options that are lotteries, as Dietrich and List (2017: App. D) note. Among the motivationally salient properties of an option that is a lottery may be the option’s being risky. This provides a way of explaining the preferences that constitute Allais’s paradox. Their project, although explanatory, differs from mine. They aim to explain preferences, whereas I aim to explain the rationality of preferences. Also, they do not explain the agent’s ranking of bundles of properties using the agent’s attitudes to properties, whereas I explain an agent’s ranking of outcomes of options using the agent’s attitudes to features of the outcomes.
Types of Risk 27 represent preferences, I also take a preference to compare a pair of propositions. Taking preferences among options as preferences among propositions that represent the options simplifies an account of the doxastic and conative attitudes that principles of rational choice employ.
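As a rough illustration of how context-dependent property bundles line up with distinctions among propositions, the toy sketch below represents taking a given apple by the properties the act has in its context; the property names and the politeness ranking are hypothetical stand-ins for the Dietrich and List apparatus, not their formal theory.

```python
# An option (taking an apple of a given size) represented by the bundle of
# contextual properties it realizes; preference compares bundles.
def bundle(apple, sizes_on_offer):
    ranked = sorted(sizes_on_offer, reverse=True)
    props = set()
    if apple == ranked[0]:
        props.add("biggest available")
    elif len(ranked) > 1 and apple == ranked[1]:
        props.add("second-biggest available")
    return frozenset(props)

def politeness_rank(props):
    # The polite person most prefers taking the second-biggest apple.
    if "second-biggest available" in props:
        return 2
    return 1 if "biggest available" in props else 0

# The same apple (size 3) realizes different propositions in different
# contexts because its contextual properties differ.
print(bundle(3, [3, 2]))  # frozenset({'biggest available'})
print(bundle(3, [3, 4]))  # frozenset({'second-biggest available'})
print(politeness_rank(bundle(3, [3, 4])))  # 2: preferred in this context
```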
1.1.4. Technical Extensions Evaluation of an option in a decision problem treats together chances for good events and chances for bad events. The expected-utility principle, for example, weighs similarly chances of both good and bad possible outcomes. The distinction between chances for good events and chances for bad events is therefore not normatively significant. A normative theory of choice may take risk in a general, technical sense to arise from chances of desirable events as well as from chances of undesirable events. Generalized, a risk is any chance that a good or bad event occurs. Given this technical definition, aversion to risk is aversion to chance. Analogies between aversion to chance and aversion to risk in the ordinary sense motivate the technical sense of risk. As rational people prefer certainty that a bad event will not occur to a chance that it will occur, they also prefer certainty that a good event will occur to a chance that it will not occur. Both preferences manifest aversion to chance and so aversion to risk in the technical sense. An option’s evaluation depends on the risks and prospects that the option generates and features of their combination. Calculation of an option’s expected utility may divide the option’s possible outcomes into those that are good all things considered and those that are bad all things considered. It may calculate the probability-weighted average of the utilities of the bad possible outcomes and the probability-weighted average of the utilities of the good possible outcomes. The sum of the two weighted averages equals the option’s expected utility. The probability-weighted average of the utilities of the bad possible outcomes evaluates the combination of the risks the option generates. However, a theory of rational choice does not need this evaluation. The separation of possible outcomes into those that are bad and those that are good produces no interesting benefit for a theory of rational choice. By adopting the generalized, technical sense of risk, principles concerning risk may dispense with a benchmark separating the desirable from the undesirable. Gaining $10 and gaining $10 by chance have the same monetary consequences, but gaining the money by chance results from running the
28 Rational Responses to Risks risk of not gaining it. Not gaining it involves an outcome bad using gaining $10 as a benchmark. Generalizing so that a risk may be a chance for a gain, as well as a chance for a loss, the somewhat arbitrary benchmark separating gain and loss has no significance. Taking a risk as a chance of a good or bad event therefore simplifies a normative decision theory’s treatment of risk. Another technical extension of the meaning of a risk in its ordinary sense allows a risk to involve a chance with an extreme value. A chance of a bad event may increase until the bad event is certain, and, for the sake of continuity, a theory of risk may call the 100% chance of the bad event a risk. Although chances for bad events, or risks, and chances for good events, or prospects, suggest uncertainty, principles for evaluating acts, to simplify, may count a 100% chance of a bad event as a risk and a 100% chance of a good event as a prospect. I generally take a risk of an event as a nonextreme chance of a bad event. However, occasionally, for convenience, I adopt the technical extension that includes chances for good events or the technical extension that includes extreme chances.
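The division of an option’s possible outcomes into good and bad, and the recovery of expected utility from the two probability-weighted averages, can be checked with a small computation; the probabilities and utilities below are invented.

```python
# Possible outcomes as (probability, utility) pairs, with negative utilities
# marking outcomes bad all things considered. The numbers are invented.
outcomes = [(0.2, -50.0), (0.1, -10.0), (0.4, 5.0), (0.3, 40.0)]

bad_part = sum(p * u for p, u in outcomes if u < 0)    # evaluates the risks
good_part = sum(p * u for p, u in outcomes if u >= 0)  # evaluates the prospects

print(bad_part, good_part)   # -11.0 14.0
print(bad_part + good_part)  # 3.0: the sum of the two weighted averages
print(sum(p * u for p, u in outcomes))  # 3.0: the expected utility directly
```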
1.2. An Act’s Risk

Volatility is risk in a technical sense common in finance. An investment is risky in this sense if it has possible outcomes of varying utility. Its volatility is roughly the variance of the probability distribution of the utilities of its possible outcomes, which is the expected value of the squared deviation of the outcome’s utility from the mean utility of the possible outcomes, that is, E((U(o) − µ)²). In more detail, given n possible outcomes with a mean utility of µ, the variance is the probability-weighted average of the squares of the differences between the utilities of the possible outcomes and µ, that is, ∑_i P(o_i)(U(o_i) − µ)². Although an investment that produces a chance of a loss also produces volatility, its volatility differs from its chance of a loss. First, its volatility arises from chances of gains as well as chances of losses. Second, for an investor focused on an investment’s monetary outcome, an investment’s expected return evaluates its chances of gains and losses, but not its volatility, because expected return may be constant while volatility varies. This section takes risk in the sense of volatility as exposure to chance. It explains how exposure to chance may be an act’s consequence and how it
emerges from an equilibrium involving the act’s other possible consequences. Lastly, it examines the sources of exposure to chance.
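Before turning to those topics, the variance measure just defined can be computed directly; the distribution below is an invented example.

```python
# Volatility as the variance of the probability distribution of the utilities
# of possible outcomes: the sum over i of P(o_i) * (U(o_i) - mu)**2.
outcomes = [(0.5, 0.0), (0.5, 200.0)]  # (P(o_i), U(o_i)); numbers invented

mu = sum(p * u for p, u in outcomes)                    # mean utility
variance = sum(p * (u - mu) ** 2 for p, u in outcomes)  # expected squared deviation

print(mu, variance, variance ** 0.5)  # 100.0 10000.0 100.0
```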
1.2.1. Exposure to Chance A patient’s treatment for an illness, given variation in possible outcomes, creates for the patient an exposure to chance. Being at risk entails that chance settles one’s fate; it entails exposure to chance. An act’s risk in the technical sense of volatility is the exposure to chance the act creates. The exposure to chance may be evidential. An act may generate several evidential chances for bad events that together with evidential chances for good events produce the act’s exposure to chance in an epistemic sense that makes the exposure relative to evidence. An act’s risk in this sense arises from variation in its possible outcomes, taking a possible outcome to be an epistemic possibility, that is, an outcome that may hold for all the agent knows.7 An act creates an exposure to evidential chance because of the distribution of the evidential probabilities of its possible outcomes. I treat mainly exposure to chance in this evidential sense because of its role in rational choice. An act’s exposure to chance arises from chances of good events, which I call prospects, as well as from chances of bad events, or risks. An act that may produce various good events, but no bad events, still creates an exposure to chance and so brings a risk in the technical sense of exposure to chance. An act’s risks and prospects may interact in ways that affect the act’s exposure to chance. For example, two risks may together produce a sure-thing. If an act produces a risk of losing a dollar given heads on a coin toss and also a risk of losing a dollar given tails on the same coin toss, the result is a sure loss of a dollar, and so no exposure to chance. An act may generate multiple risks in the sense of chances of bad events and may also generate multiple prospects, or chances of good events. The chances of good and bad events, and their interactions, generate the act’s exposure to chance. An act’s exposure to chance arises from (1) the risks of bad events that the act generates together with any interaction of these risks, (2) the prospects of good events that the act generates together with any interaction of these prospects, and (3) the interaction of all the risks and prospects that the act generates. 7 Kment (2017: Sec.1) surveys accounts of epistemic possibility.
30 Rational Responses to Risks Although an act may produce multiple chances of bad consequences, it produces just one exposure to chance. The owner of a new business risks failure from various sources, such as rising costs of the business’s operation and declining demand for its products. The owner also risks sleepless nights and inordinate demands for hours of work. All these risks contribute to the risk of starting the business. A probability distribution of utilities of possible comprehensive outcomes of an act represents the act’s combination of risks, prospects, and their interaction. To a first approximation, the act’s risk is the distribution’s variance.
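The coin-toss interaction described above is easy to verify: although the act generates two risks, the combined distribution has no spread and so no exposure to chance. A minimal sketch, with invented stakes:

```python
# Two risks on the same coin toss that together produce a sure loss.
states = {"heads": 0.5, "tails": 0.5}
bet_a = {"heads": -1.0, "tails": 0.0}  # risk: lose a dollar given heads
bet_b = {"heads": 0.0, "tails": -1.0}  # risk: lose a dollar given tails

combined = {s: bet_a[s] + bet_b[s] for s in states}
mu = sum(states[s] * combined[s] for s in states)
variance = sum(states[s] * (combined[s] - mu) ** 2 for s in states)

print(combined)  # {'heads': -1.0, 'tails': -1.0}: a sure loss of a dollar
print(variance)  # 0.0: no exposure to chance despite the two risks
```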
1.2.2. Consequences Some economists, such as Samet and Schmeidler (2018), adopt a technical sense of an act’s consequence, according to which an act and state together yield a consequence, and a consequence is a possible world that resolves all risks. Savage ([1954] 1972), for the sake of practicality, takes an act’s consequence as a small world, with some uncertainties that only a grand world resolves. However, taken strictly, a consequence in the technical sense is a grand world. I put aside this technical sense of an act’s consequence, and use instead the ordinary sense, made precise by Gibbard and Harper ([1978] 1981) in their formulation of causal decision theory. According to their definition, an event is an act’s consequence if and only if the event would occur if the act were realized and, for some other available act, it is not the case that the event would occur if the other act were realized. An act’s risk comes with the act’s realization and is preventable by not performing the act. It counts as an act’s consequence. Moreover, an act’s exposure to chance counts as a risk although it comes with certainty given the act’s realization. It is a consequence of the act with a 100% chance of occurring given the act’s realization. A consequence need not resolve all relevant uncertainties. For instance, if someone buys a lottery ticket, a consequence is possession of a ticket with an uncertain value—a high value if the ticket is a winner, and a low value if it is a loser. In the sense I adopt, a consequence of an act may have as a representation a set of possible worlds rather than a single possible world. In a decision problem, aversion to an option’s risk may seem to be just a feature of a representation of preferences among options. However, some
Types of Risk 31 options are riskier than others. An option’s risk and a rival option’s risk explain the difference in the options’ riskiness. An option’s risk, along with the option’s other consequences, explains the option’s evaluation. Because an option’s risk has these explanatory roles, it is a feature of the option and not just a feature of a representation of preferences among options. Many representation theorems, such as Savage’s ([1954] 1972) and Buchak’s (2013), do not take an act’s risk, in the sense of its exposure to chance, as a consequence of the act. However, taking an act’s risk as the act’s consequence has many advantages. For example, it sets the stage for investigation of attitudes toward risks, as in Chapter 2, and their effect on attitudes toward an act’s possible outcomes, as in Chapters 3 and 4. A theory of rationality may introduce constraints on attitudes to risk and on risk’s effect on the utility of an act’s outcome. Taking all risks an act creates as consequences of the act, including the act’s overall exposure to chance, improves a systematic theory of rational decisions involving risk. The benefits include (1) a justification of the expected-utility principle, (2) a method of assessing risks and deciding whether to reduce or tolerate them, (3) an account of rational sequences of choices, and (4) an evaluation of one agent’s acting for another agent, such as a professional’s acting for a client and, acknowledging collective agents, a government regulatory agency’s acting for the public. I treat agents for whom an act’s risk is separable from the act’s other consequences. The assumption of separability is plausible for an idealized agent who, for example, cares only about risk and expected level of wealth. An act’s risk, in the sense of its exposure to chance, attaches to each possible outcome of the act, and the possible outcomes can be assessed omitting the act’s risk. The act’s utility is then the sum of its risk’s intrinsic utility, which Section 2.6 explains, and the expected utility of its other consequences. Its expected utility putting aside its risk equals the expected utility of the act’s monetary consequences. Chapter 4 establishes the separability, or independence, of an act’s risk from the act’s other consequences given that an agent has a basic intrinsic aversion to the act’s risk.
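The Gibbard and Harper test for consequences can be mimicked in a toy deterministic model that assumes each available act settles which events occur; the acts and events below are hypothetical.

```python
# For each available act, the events that would occur were it realized.
would_occur = {
    "buy ticket": {"holds ticket": True, "pays entry fee": True},
    "decline":    {"holds ticket": False, "pays entry fee": True},
}

def is_consequence(event, act):
    """The event is the act's consequence iff it would occur were the act
    realized and, for some other available act, it would not occur."""
    if not would_occur[act][event]:
        return False
    return any(not events[event]
               for other, events in would_occur.items() if other != act)

print(is_consequence("holds ticket", "buy ticket"))    # True: preventable
print(is_consequence("pays entry fee", "buy ticket"))  # False: occurs either way
```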
1.2.3. Equilibrium Given an aversion to chance, an option’s risk, in the sense of its exposure to chance, is a bad consequence, so that including it in the option’s possible outcomes decreases their utilities. By decreasing the utilities of possible
outcomes, including it may augment exposure to chance. Because an option’s exposure to chance depends on the option’s probability distribution of utilities of possible outcomes including exposure to chance, an option’s exposure to chance depends on an equilibrium involving it and utilities of the option’s possible outcomes. An act’s risk in the sense of its exposure to chance is among the act’s consequences and affects the probability distribution of the utilities of possible outcomes, which in turn affects the act’s risk. The act’s risk is the risk-component of its possible outcomes when the risk-component needs no adjustment in response to the probability distribution of utilities of possible outcomes. The risk-component of possible outcomes has an equilibrium value when calculating the risk-component using the possible outcomes yields the risk-component; that is, the equilibrium value of the risk-component is a fixed point of a function yielding the risk-component. Letting {o_i} be the set of possible outcomes indexed by i = 1, 2, 3, . . . , letting r stand for the risk-component in a possible outcome o_i, and letting U stand for the utility of o_i including r and so discounted according to the value of r (assuming an aversion to risk in the sense of exposure to chance), the risk-component computed from D, the probability distribution of U(o_i), using the function f equals r. Assuming that the risk-component is separable from the other components of a possible outcome and discounts the utility of each possible outcome by the same amount IU(r), the intrinsic utility of r, and letting {o_i^r−} stand for the set of possible outcomes ignoring risk, r has an equilibrium value when f(D{U(o_i^r−) − IU(r)}) = r.

If in a special case variance is an adequate measure of risk, an equilibrium emerges immediately by subtracting a constant from the utilities of an act’s possible outcomes in response to the act’s risk. Subtracting a constant from the utilities of all possible outcomes does not change variance, so adjusting for exposure to chance does not change exposure to chance, and, without a change in exposure to chance, utilities of outcomes do not change. The equilibrium comes in one step if the risk-adjustment, given the expected utility of the option ignoring risk, is a percentage of the variance. Start with the probability distribution for utilities of possible outcomes ignoring risk. Then to accommodate risk, subtract the percentage of the variance from the utilities of possible outcomes. Because the subtraction does not change the variance, the discounted utilities of possible outcomes make the adjustment for risk that their probability distribution requires. The adjustment for risk creates for a possible outcome a risk-component that needs no adjustment. Thus, using variance v as the function f computing an option’s risk r from possible outcomes ignoring risk, and so from possible monetary outcomes assuming that the agent cares only about money and risk, r = v(D{U(o_i^r−)}) = v(D{U(o_i^r−) − IU(r)}) = v(D{U(o_i)}).

If in a case variance is the wrong measure of risk, then the risk-adjustment of utilities of possible outcomes may change the distribution’s risk. However, as long as the risk-adjustment is small compared to the act’s utility ignoring risk, the next risk-adjustment will be smaller. Adjustments will become smaller and smaller and will go to zero in the limit so that adjustments for risk return the same utilities for possible outcomes, and the utilities of possible outcomes reach an equilibrium. It is a fixed point at which adjusting the exposure to chance considering the exposure’s inclusion in possible outcomes yields the same exposure, and adjusting the utilities of possible outcomes considering their inclusion of the option’s exposure to chance yields the same utilities. Adjusting utilities for risk, and adjusting risk for utilities, returns each to the same point. Given plausible assumptions about risk and its evaluation that settle the dynamics for risk-adjustments to the utilities of the option’s possible outcomes, an equilibrium emerges in the limit from repeated adjustments. For example, adjustment reaches an equilibrium in the limit if for a certain percentage, each adjustment in the series of adjustments is smaller than its predecessor by the percentage.
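The fixed-point dynamics may be simulated as follows. In this sketch the risk measure, the discount IU(r) taken as a fixed percentage of r, and the numbers are all assumptions for illustration; with variance as the measure, the iteration stabilizes after a single adjustment because subtracting a constant leaves variance unchanged.

```python
# Repeated risk-adjustment of the utilities of possible outcomes until the
# risk-component r is a fixed point: f(D{U(o_i^r-) - IU(r)}) = r.
def variance(dist):
    mu = sum(p * u for p, u in dist)
    return sum(p * (u - mu) ** 2 for p, u in dist)

def equilibrium_risk(dist_ignoring_risk, f, iu, tolerance=1e-9):
    """dist_ignoring_risk: (probability, utility) pairs ignoring risk;
    f: the risk measure; iu: the intrinsic-utility discount for a given r."""
    r = 0.0
    while True:
        adjusted = [(p, u - iu(r)) for p, u in dist_ignoring_risk]
        new_r = f(adjusted)
        if abs(new_r - r) < tolerance:
            return new_r
        r = new_r

dist = [(0.5, 0.0), (0.5, 20.0)]  # invented utilities ignoring risk
iu = lambda r: 0.01 * r           # assumed discount: one percent of r
print(equilibrium_risk(dist, variance, iu))  # 100.0: one adjustment suffices
```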
1.2.4. Sources of Exposure to Chance

An option’s exposure to chance depends on the probability distribution of utilities of the option’s possible outcomes and also depends on the evidence and experience grounding the distribution. An act’s risk taken as its exposure to chance comprehends risk arising from weak evidence for the probability distribution, so that the probability distribution may easily change as new evidence arrives.8 The literature on risk distinguishes aversion to risk from aversion to uncertainty, or ambiguity, with the former being, roughly, aversion to variance in the probability distribution of utilities of possible outcomes, and the latter being aversion to having weak evidence for the probability distribution, or having no probability distribution to guide choice. Agents with an

8 Hansson (2009) proposes measuring uncertainty using lack of robustness with respect to new evidence.
34 Rational Responses to Risks aversion to ambiguity prefer, other things being equal, options with probability distributions supported by strong evidence to options with probability distributions supported by weak evidence, and prefer, other things being equal, options with precise probability distributions to options with imprecise probability distributions.9 The decision principles I advance treat aversion to uncertainty as a type of aversion to risk in the sense of exposure to chance. Although aversion to risk and aversion to uncertainty differ, both these aversions affect the utility of an option through the utility of the option’s possible comprehensive outcomes. An option’s exposure to chance, taken to comprehend the distribution of utilities of possible outcomes and weak evidential support for the distribution, lowers the option’s utility, assuming an aversion to exposure to chance. Because the decision principles treat the same way exposure to chance arising from the distribution and from the distribution’s evidential basis, I combine exposures to chance arising from these two sources. With respect to the decision principle of expected-utility maximization, exposures to chance arising from the two sources do not form distinct normatively significant kinds of risk. Rationality does not treat differently exposures to chance coming from the two sources. The remainder of this section elaborates this point, drawing on a brief treatment of aversion to risk. It considers intrinsic aversion to risk and intrinsic aversion to a particular risk that an act’s exposure to chance constitutes. Section 2.3 treats aversion to risk more generally. Intrinsic aversions may be underived from other intrinsic attitudes and so basic. In a rational ideal agent, an intrinsic aversion to an act’s risk may be basic. It is a response to an act’s probability distribution of utilities of possible outcomes but need not derive from the agent’s attitudes to the chances for bad events and for good events that the act offers. The agent’s attitudes to these chances create the act’s risk but need not ground the agent’s intrinsic attitude to the act’s risk. Also, an aversion to exposure to evidential chance is a form of aversion to ignorance but need not derive from this aversion. Reflection on the nature of exposure to evidential chance may by itself generate an intrinsic aversion to it.
9 Within the Choquet expected-utility framework that Schmeidler (1989) constructs, convexity of the capacity, a nonadditive probability measure, offers a characterization of aversion to uncertainty, or weak evidence. This convexity differs from the concavity of the utility function for amounts of a commodity that offers a characterization of aversion to risk.
Types of Risk 35 An agent’s intrinsic aversion to an act’s risk does not depend on the agent’s information about the world. New a posteriori information does not alter the considerations prompting the agent’s intrinsic aversion toward the risk. The aversion does not depend on the world’s being such that realizing the aversion is a means of realizing another aversion. Consequently, a rationally held intrinsic aversion to risk, and the aversion’s strength, are relatively insensitive to changes in information. New information and experience do not provide reasons to change the aversion. Although an intrinsic aversion to risk in general is insensitive to new information, an intrinsic aversion to a particular evidential risk is sensitive to new information, not because new information may affect the reasons for the intrinsic aversion, but because new information may affect the risk itself. If new information dispels a particular evidential risk, then an intrinsic aversion to it also vanishes. For example, learning that a gun is not loaded removes the evidential risk of its causing an injury and ends an intrinsic aversion to the risk. An agent’s intrinsic aversion to risk in general is an attitude to the proposition that the agent experiences risk, rather than to a proposition specifying a particular risk. An intrinsic aversion to risk in general need not ground an intrinsic aversion to a particular risk. As a sign of this, notice that the strength of an intrinsic aversion to risk in general and the strength of an intrinsic aversion to a particular risk need not match. The strengths of two intrinsic aversions to two particular risks may differ, and then cannot both match the strength of an intrinsic aversion to risk in general. Also, an intrinsic aversion to risk in general may not have a precise strength, even though an intrinsic aversion to some particular risk does. For psychological purposes, one may wish to divide an intrinsic aversion to exposure to chance into an intrinsic aversion to variance and an intrinsic aversion to having weak evidence as a ground of probabilities. Suppose that an agent has a basic, underived intrinsic aversion to exposure to chance. The agent may also have a basic, underived intrinsic aversion to exposure to chance coming from variance and also coming from weak evidence. For an intrinsic aversion to qualify as basic, the agent must not have another intrinsic attitude as a reason for the aversion. An exposure to chance arising from weak evidence grounding a probability distribution entails an exposure to chance in the general sense. However, an intrinsic aversion to exposure to chance is not thereby a reason for an intrinsic aversion to the exposure arising from weak evidence. The entailment provides a reason for an extrinsic aversion to the exposure to chance arising from weak evidence because such
36 Rational Responses to Risks an exposure to chance creates an exposure to chance in the general sense. However, the intrinsic aversion to exposure to chance need not be a reason for an intrinsic aversion to exposure to chance due to weak evidence. An agent’s intrinsic aversion to exposure to chance due to weak evidence may hold independently because of the agent’s reflection on the nature of such exposure to chance. For comparison, consider an intrinsic aversion to a pain you now feel and an intrinsic aversion to pain. You may have a basic intrinsic aversion to each. That the pain you now feel entails pain makes an aversion to pain provide a reason for an extrinsic aversion to the pain you now feel, but it need not be a reason for your intrinsic aversion to the pain you now feel. The intrinsic aversion to the pain you now feel may arise independently, considering just the nature of the pain you now feel. Entailment relations among intrinsic aversions to exposures to chance from various sources do not prevent intrinsic aversion to uncertainty from being a type of intrinsic aversion to risk. Moreover, they do not prevent an agent’s having a basic intrinsic aversion to an act’s risk in the sense of the act’s exposure to chance.
1.3. Measures of Risk Measures of risk are useful for evaluating options in a decision problem because principles of proportionality call for attitudes to risks that are proportional to a risk’s size and because attitudes to the risks that an option generates contribute to the option’s evaluation. The literature proposes measures of both a risk in the ordinary sense of a chance of a bad event and a risk in the technical sense of an option’s exposure to chance.
1.3.1. A Measure of a Chance of a Bad Event The size of a risk in the sense of a chance of a bad event may be the size of the chance, or the severity of the risk as given by the event’s probability-utility product. For evaluation of options, the severity of the risk is the more useful measure of the risk, so I adopt it. A principle of proportionality requires that the intrinsic utility that an agent assigns to a risk, an evaluation of the risk according to its a priori implications, be proportional to the risk’s size taken
Types of Risk 37 as its probability-utility product. The value of the probability-utility product for the event depends on the scale for the event’s ordinary, comprehensive utility, and so the measure of a risk is relative to this utility scale. The scale for intrinsic utility may coordinate with the scale for comprehensive utility so that the intrinsic utility of realizing an event characterized as having a particular comprehensive utility equals the event’s comprehensive utility. Then the intrinsic utility of the risk equals the risk’s size taken as its probability-utility product. Analogous points apply to a prospect of a good event, that is, a chance of the good event. The probability-utility product for a prospect provides a measure of the size of the prospect that depends on the utility scale, and rationality requires that the intrinsic utility of the prospect be proportional to the size of the prospect, its probability-utility product. The prospect’s intrinsic utility equals its size according to a scale for intrinsic utility that coordinates, as described, with the scale for comprehensive utility. An option’s expected utility evaluates the combination of risks and prospects that the option generates. For each risk and for each prospect, its probability-utility product equals its intrinsic utility, and the sum of these products equals the option’s expected utility. As Chapter 4 argues, an option’s utility equals its expected utility. Hence an evaluation of an option may move from sizes of risks and prospects to their intrinsic utilities and then to the option’s utility.
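Assuming scales coordinated as just described, the passage’s route from the sizes of risks and prospects to an option’s expected utility can be traced numerically; the events and values below are invented.

```python
# Chances the option generates, as (probability, comprehensive utility) pairs;
# negative utilities mark bad events (risks), positive mark good (prospects).
chances = [(0.05, -100.0), (0.15, -20.0), (0.50, 10.0), (0.30, 30.0)]

# With coordinated scales, each chance's intrinsic utility equals its size,
# the probability-utility product.
intrinsic_utilities = [p * u for p, u in chances]

print([iu for iu in intrinsic_utilities if iu < 0])   # [-5.0, -3.0]: the risks
print([iu for iu in intrinsic_utilities if iu >= 0])  # [5.0, 9.0]: the prospects
print(sum(intrinsic_utilities))  # 6.0: the option's expected utility
```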
1.3.2. A Measure of an Act’s Exposure to Chance In a decision problem, a common measure of the size of an option’s risk, in the sense of its exposure to chance, is the variance of the probability distribution of the utilities of the option’s possible outcomes. However, variance is only an approximate general measure of an option’s risk. Suppose that an agent cares only about money, ways of obtaining money, and risk. For such an agent, options with the same expected return and the same variance may differ in risk taken as exposure to chance. This happens if, for one option, variation occurs among possible gains and, for the other option, similar variation occurs among possible losses; variation among losses produces the riskier option. Given aversion to risk, the first option has greater utility than the second option, even though the two options have the same expected return and variance. An option’s risk in the sense of its exposure to chance is
38 Rational Responses to Risks sensitive to features of its probability distribution besides variance, such as skew. It is also sensitive to weakness in the grounds for its probability distribution, such as a scarcity of evidence grounding probability assignments and a shortage of experience grounding utility assignments. A distribution’s variance is the square of the distribution’s standard deviation. The standard deviation offers a rival measure of the size of an option’s risk. As a distribution spreads, the variance increases faster than does the standard deviation. Given a principle of proportionality, the two measures lead to different intrinsic utilities for an option’s risk. No strong argument favors one measure of an option’s risk over the other measure. Selection of variance is somewhat arbitrary. Willingness-to-pay produces a measure of risk in the sense of exposure to chance. A typical agent prefers $100 to a gamble with an expected return of $100, such as a gamble on a fair coin toss that pays $0 if heads and pays $200 if tails. The difference between $100 and the largest amount that the agent will pay for a gamble with an expected return of $100 is the agent’s risk premium for the gamble. In general, the risk premium for a gamble is the difference between the gamble’s expected monetary value and the largest amount the agent will pay for the gamble. For an agent who cares only about money, ways of obtaining it, and risk, the risk premium for a gamble provides a measure of the gamble’s risk, taken as its exposure to chance.10 For generality, a measure of risk along these lines, for rational ideal agents, may rely on utilities instead of amounts of money, given two assumptions about rationality. First, assume that rationality requires the intrinsic utility of an option’s risk to be proportional to the size of the option’s risk. Second, assume that rationality prescribes a mean-risk evaluation of an option. This evaluation of an option, which Chapter 4 justifies, combines an appraisal of the option ignoring its risk with an appraisal of the option’s risk. It obtains an option’s utility by adding (1) the option’s expected, or mean, utility ignoring the option’s risk and (2) the intrinsic utility of the option’s risk. Under the foregoing assumptions, the difference between an option’s utility and its expected utility ignoring its risk equals the utility of the risk premium for the option. It also equals the intrinsic utility of the option’s risk, assuming a mean-risk analysis of an option’s utility and coordinated scales for intrinsic utility and comprehensive utility. One may infer the size of the
10 Okasha (2007) adopts this common measure of an agent’s aversion to an option’s risk.
Types of Risk 39 option’s risk from its intrinsic utility, using the constant of proportionality for the agent that governs their relation. Preferences among options in other decision problems may reveal the constant. This method of inferring the size of an option’s risk, although useful, does not define the size of an option’s risk, and, in particular, does not define it in a way that allows an agent to use the size of an option’s risk to construct the intrinsic utility of the option’s risk, the expected utility of the option, and then preferences among options. It does not offer a definition that permits the size of an option’s risk to explain the option’s expected utility. Another measure begins with a comparison of acts according to riskiness. It notes that some acts are riskier than are other acts. A bet on an event with large stakes is riskier than a bet on the same event with smaller stakes. Also, of two bets with the same stake, the bet with the greater probability of losing is riskier. In roulette, betting $100 on number 27 is riskier than betting $100 on red because the first bet creates a greater probability of loss than does the second bet. One act is riskier than another if and only if its risk is larger than the other act’s risk. For rational ideal agents, comparisons of the riskiness of acts ground a measure of an act’s risk in the sense of its exposure to chance. The measure represents comparisons of acts with respect to risk. A method of inferring the size of an act’s risk from comparisons of acts with respect to risk, although useful, does not define the size of an act’s risk, and, in particular, does not define it in a way that permits the sizes of acts’ risks to explain why one act is riskier than another. It does not specify the size of an act’s risk in a way that explains a rational ideal agent’s evaluation of risky acts. In a decision problem, because the size of an option’s risk depends on many features of the option, I do not advance a general account of the size of an option’s risk that defines it independently of the intrinsic utility of the option’s risk and independently of comparisons of options’ riskiness. In the general case, I use only the intrinsic utility of an option’s risk, and not also the risk’s size, to explain the option’s utility. This is sufficient for an evaluation of an individual’s choice. A measure of an option’s risk is not necessary because an agent’s attitude to the option’s risk directly affects the utilities of the option’s possible outcomes, given that they include the option’s risk, and these utilities yield the option’s expected utility according to a method an agent can follow to evaluate options and form preferences among them.
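A small computation supports the claim that variance is only an approximate measure: the two invented options below share an expected return and a variance yet differ in skew, the second concentrating its variation among possible losses.

```python
# Two options with identical mean and variance but mirrored skew.
option_a = [(0.9, 1.0), (0.1, 21.0)]   # variation among possible gains
option_b = [(0.9, 5.0), (0.1, -15.0)]  # similar variation among possible losses

def moments(dist):
    mu = sum(p * u for p, u in dist)
    var = sum(p * (u - mu) ** 2 for p, u in dist)
    skew = sum(p * (u - mu) ** 3 for p, u in dist)  # third central moment
    return round(mu, 6), round(var, 6), round(skew, 6)

print(moments(option_a))  # (3.0, 36.0, 576.0)
print(moments(option_b))  # (3.0, 36.0, -576.0): the riskier, given aversion
```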
1.4. Summary A theory of rationality distinguishes some types of risk by advancing different principles for them. Two types of risk it treats differently are a chance of a bad event and an option’s exposure to chance, which arises from the option’s creation of chances for good outcomes and for bad outcomes. A chance of a bad event is physical if it involves a physical probability and is evidential if it involves an evidential probability. Similarly, an option’s exposure to chance may be physical or evidential depending on the type of chance. Rationality also treats differently physical and evidential risks. An adequate measure of a risk in the sense of a chance of a bad event is the probability-utility product for the event. Variance, a common measure of a risk in the sense of an exposure to chance, is adequate only in special cases.
2 Attitudes Many people are averse to risks. In a decision problem with several options a person often favors safe over risky options because of an aversion to risk. Responses to risks include both attitudes, such as aversion, and acts, such as avoidance of risks. Principles of rationality govern both types of response to risks. Attitudes responding to risks differ in their targets and in their scopes. Probability assignments represent doxastic attitudes, and utility assignments represent conative attitudes; different types of utility represent conative attitudes of varying scope. I give probabilities and utilities a representational interpretation but use them to represent attitudes rather than preferences among acts. Probabilities represent strengths of belief, and utilities represent strengths of desire. Taking probabilities and utilities this way yields evaluations of the options in a decision problem that explain the rationality of maximizing expected utility. To explain fully the rationality of an act, one must not only explain why it serves the agent’s goals but also why the agent’s goals are rational. Hence, a full explanation of the rationality of an act responding to a risk explains the rationality of the agent’s attitude to the risk. Rational attitudes to risks ground rational acts that reduce risks. This chapter describes attitudes to risk, and the next chapter presents rationality’s requirements concerning such attitudes. Later chapters present rationality’s requirements concerning acts that affect risks.
2.1. Attitudes to Risk

Desires and aversions are conative attitudes with objects that propositions represent. A woman’s desire for health is the woman’s desire that she be healthy. The attitudes are propositional because they apply to propositions. A risk exists independently of attitudes toward it. Even an evidential risk grounded in an event’s evidential probability for an agent exists
42 Rational Responses to Risks independently of an aversion to it; the agent may not be averse to the risk. Because a risk exists independently of attitudes toward it, an agent may have an attitude targeting the risk. An agent’s attitude to a risk targets the proposition that the agent faces the risk. A patient’s aversion to an operation’s risk is the patient’s not wanting that he undergo the risk. The patient may have an attitude to the proposition that is psychologically independent of his preferences among acts. Although a common account of attitudes to risk defines them using preferences among acts, such an account cannot use these attitudes to explain preferences among acts. I use a rational ideal agent’s attitudes to risks to explain the agent’s preferences concerning risky acts and so do not define attitudes toward risk using preferences among acts.
2.2. The Scope of Attitudes An event has a propositional representation and in a broad sense is a proposition’s realization. A desire that an event occur, and so that a proposition hold, may be intrinsic or extrinsic. The distinction in type of desire emerges from the reasons for the desire. If the desire considers just the event’s intrinsic features, taken as its a priori implications, then the desire is intrinsic. People typically want pleasure for its own sake and not just as a means to something else, and so have an intrinsic desire for pleasure. If a desire that an event occur also considers, using available information, the event’s extrinsic features, taken as its a posteriori or empirical implications, including its consequences, then the desire is extrinsic. People typically want income, not for its own sake, but as a means to other things and so have an extrinsic desire for income. A similar distinction separates intrinsic and extrinsic aversions. Some aversions are intrinsic, and some are extrinsic. Typically, a person has an intrinsic aversion to pain, because of pain’s nature, but has an extrinsic aversion to smoking, because of its bad consequences for health and not because of its nature. Attitudes of indifference are also either intrinsic or extrinsic, but I do not treat attitudes of indifference, except to use indifference as a zero point for a utility scale. Whether an attitude is intrinsic or extrinsic depends on the propositional specification of the attitude’s object. An agent may have an intrinsic aversion to a bad event but an extrinsic aversion to losing a dollar, although the bad event is losing a dollar. That the bad event occurs and that one loses a dollar
Attitudes 43 are different propositions, even if they have the same physical realization. The difference in propositional object explains the difference in attitude. Some intrinsic attitudes are reasons for other intrinsic attitudes. For example, a person with intrinsic desires for health and for wisdom has reasons for an intrinsic desire for the combination of health and wisdom. An intrinsic attitude is basic if no other intrinsic attitude is a reason for it. A basic intrinsic attitude does not derive from other intrinsic attitudes. Later chapters assume that an agent has a basic intrinsic aversion to risk. In this case, the aversion is independent of other intrinsic attitudes. Intrinsic desires and aversions are less sensitive to changes of information than are extrinsic desires and aversions. Because an intrinsic attitude toward a proposition considers only the proposition’s a priori implications, acquiring information does not change the considerations relevant to the attitude. It illuminates only the proposition’s a posteriori implications. Intrinsic attitudes typically change only if information triggers a change in basic intrinsic attitudes; new information does not by itself provide a reason for the change. A person’s intrinsic aversion to pain typically persists throughout the person’s life despite large changes in information. In contrast, information easily affects a person’s extrinsic attitude to smoking by affecting views about its consequences. A person may at first want to smoke because of smoking’s soothing effect and then, after learning of smoking’s tendency to cause lung cancer, not want to smoke. Sometimes a person desires a situation but then, when the situation obtains, desires it no longer. Experiencing the situation removes its attractiveness. A person may desire tasting an eye-catching dessert but then discover, after a bite, that its taste is unappealing. Experienced goals are more stable than are unexperienced goals. Intrinsic attitudes have a type of stability that derives from the reasons for them and does not require direct acquaintance with their objects. Someone may have a stable intrinsic desire for wisdom without having experienced being wise. Intrinsic and extrinsic attitudes are not exclusive. Having an intrinsic attitude to a proposition does not rule out also having an extrinsic attitude to the same proposition. Typically, a person has both an intrinsic and an extrinsic desire for health, desiring health for its own sake and also as a means of attaining other goals such as productivity. A person may have an intrinsic aversion to pain that is stable with respect to information, and also an extrinsic attitude to pain that varies with information. An athlete who learns the training dictum, “No pain, no gain,” may acquire an extrinsic attraction
44 Rational Responses to Risks to some pain during training as a means to better performance during competition. This may happen without a change in the athlete’s intrinsic aversion to pain. Suppose that a person has an intrinsic aversion to pain. The person may also be averse to pain because it is a distraction. The aversion to distraction, even if intrinsic, is a reason for an extrinsic aversion to pain, not a reason for an intrinsic aversion to pain. The intrinsic aversion to pain may be basic, and so independent of other intrinsic aversions, even if the extrinsic aversion to pain derives from an intrinsic aversion to distraction. Classifying a desire as intrinsic or extrinsic requires attending to its scope. A desire may evaluate some but not all features of an event. An agent may intrinsically desire an event’s realization because of the event’s intrinsic features. However, the agent’s all-things-considered desire concerning the event is extrinsic. The strength of the agent’s all-things-considered desire depends on the event’s extrinsic features as well as its intrinsic features. Suppose that an agent has an attraction to the risk fast driving creates. Typically, the attraction is to the thrill that comes with the risk and is not to the risk itself. The attraction to the risk is extrinsic, not intrinsic. Also, a victim of a stock market crash who loses his life savings may begin to contemplate suicide. Typically, it is his extrinsic attitude to death that changes. He comes to view death as a means of ending his despair. His intrinsic aversion to death persists. Learning of his losses does not provide reasons to change his intrinsic aversion to death. I treat mainly intrinsic and extrinsic conative attitudes. However, intermediate attitudes evaluate a proposition under restrictions on the scope of considerations without limiting them to intrinsic features. A proposition’s evaluation may consider only some events accompanying the proposition’s realization. An evaluation of a skiing injury may consider only the accompanying pain, or perhaps also the inconvenience of hobbling on crutches. Evaluation of a proposition, rather than considering all that would be true if the proposition were true, may consider just the a priori implications, or salient features, or immediate consequences, or all consequences of the proposition’s realization.
2.3. Aversion to Risk

Classic work of Arrow (1965, 1970) and Pratt (1964), which Allingham (2002) summarizes, takes aversion to risk to be a feature of utilities attaching
Attitudes 45 to options and their possible outcomes, as these utilities are derived from preferences among acts, including gambles. In particular, the account derives a rational agent’s attitude toward risk, in the sense of exposure to chance, from the agent’s preferences among risky options having as possible outcomes amounts of a commodity. According to the account, a concave utility-curve for amounts of the commodity indicates an agent’s aversion to risk, as in Figure 2.1. The concave shape indicates that the agent prefers to have for sure an amount x of the commodity rather than a gamble that yields an expected amount of the commodity equal to the amount x. For example, an agent with a concave utility-curve for money prefers $1 to a gamble that pays $2 if a toss of a fair coin lands heads and pays $0 if the toss lands tails, for an expected gain of $1. This account of aversion to risk does not formulate principles of rationality governing attitudes toward risk. It does not prescribe either aversion to risk, taken as concavity of the utility curve for a commodity, or neutrality toward risk, taken as the straightness of the curve. Nor does it prohibit attraction to risk, taken as convexity of the curve. It does not claim that any attitude to risk is a requirement of rationality. The account just classifies an agent’s attitude to risk using the shape of the agent’s utility curve for a commodity, given that the curve is either concave, a straight line, or convex. The concavity of an agent’s utility curve for a commodity is a manifestation of aversion to risk in the sense of exposure to chance. However, this concavity, although useful in economics as a technical definition of aversion to risk, is not accurate as a definition of the aversion in its ordinary sense. The concavity may also be a manifestation of diminishing marginal utility, with utilities taken as degrees of desire that generate preferences among acts. The shape of the utility curve for amounts of a commodity, with utilities derived from preferences among gambles, does not distinguish aversion to risk from
[Figure 2.1 An Expression of Aversion to Risk: a concave utility curve, with utility on the vertical axis and money on the horizontal axis.]
diminishing marginal utility. Although aversion to risk for a commodity and the diminishing marginal utility of the commodity may each produce a concave utility-curve representing preferences concerning amounts of the commodity, they are different phenomena. Using concavity as the definition of aversion to risk, therefore, conflates the aversion and the diminishing attraction of increases in the commodity. The definition incorrectly attributes risk aversion to a risk-neutral agent with diminishing marginal utility.1

Suppose that an agent’s utility curve for a commodity is constructed to represent the agent’s preferences among gambles that with varying probabilities yield amounts of the commodity. Various effects on preferences among gambles are confounded in the shape of this curve, including the effect of attitudes toward risks and the effect of attitudes toward increases in amounts of the commodity. Aversion to risk influences preferences among gambles but is not reducible to these preferences. Constructing an agent’s utility curve for a commodity using degrees of desire for amounts of the commodity reduces confounding but does not produce utility curves that reliably display attitudes to risk. If money has constant marginal utility for an agent, the agent’s utility curve for money, if it uses utilities that represent degrees of desire, is a straight line rather than a concave curve. However, the agent need not be neutral to risk.

Using the shape of the utility curve for a commodity to represent an agent’s attitude to risk offers an incomplete account of risk. It offers no definition of risk, no distinctions among types of risk, and no treatment of combinations of risks that an act or a sequence of acts produces. It targets only aversion to an option’s risk in the sense of exposure to chance, and not aversion to an option’s chances for bad events.

Although the utility-curve account of attitudes to risk does not take a normative stand, consider the view that an agent may be either averse to, attracted to, or indifferent to risk, as the account explicates these attitudes. Because treating an attitude to risk as a feature of a utility curve makes the attitude relative to a commodity, this permissive view loosens consistency constraints on attitudes to risks. For each commodity, it permits concavity or convexity of any degree, and so permits extreme variation in attitudes toward

1 As Franz Dietrich pointed out to me in conversation, aversion to risk and addiction to more and more of a commodity may cancel each other to produce a linear utility function for the commodity. Also, attraction to risk and diminishing marginal utility of a commodity may cancel each other to produce a linear utility function. The linear utility function in these two cases misrepresents an agent as neutral toward risk; the agent in the first case has an aversion to it and in the second case has an attraction to it.
For an agent, the view allows aversion to risk concerning Fuji apples but attraction to risk concerning Gala apples, although this combination of attitudes is inconsistent if the agent is indifferent between Fuji and Gala apples, and consequently is indifferent between one Fuji apple and one Gala apple, between two Fuji apples and two Gala apples, and so on. If the agent wants Fuji apples just as much as Gala apples, then risks with the same probabilities of losing Fuji apples and of losing Gala apples are equivalent. A 50% chance of losing a Fuji apple should elicit the same conative attitude as a 50% chance of losing a Gala apple, given that a Fuji apple and a Gala apple have the same utility, taken as degree of desire. A single aversion to risk should explain aversions to risk for the two commodities if they are equally desired.

Moreover, the agent's variation in attitudes to gambles involving apples may lead to an incoherent preference-ranking of gambles. If the agent is averse to risks involving Fuji apples and attracted to risks involving Gala apples, despite being indifferent between Fuji and Gala apples, then he prefers a gamble that offers a 50% chance of a Gala apple to a gamble that offers a 50% chance of a Fuji apple, although the gambles differ only in prizes that are equivalent. This preference violates coherence requirements. In addition, the agent may prefer a Gala apple if and only if heads comes up on a coin toss to a Fuji apple if and only if heads comes up on the same toss, and also prefer a Gala apple if and only if tails comes up on the same toss to a Fuji apple if and only if tails comes up on the same toss. Accordingly, he prefers a combination of gambles that guarantees a Gala apple to a combination that guarantees a Fuji apple, although he is indifferent between a Gala apple and a Fuji apple.2

This chapter puts aside economics' technical account of attitudes to risk. It takes a risk as a chance of a bad event, or as an act's risk in the sense of its exposure to chance, so that a risk may be a consequence of an act and may have a propositional representation. A rational ideal agent's attitudes to risks explain the agent's preferences among options in a decision problem and do not vary with the commodity the preferences involve if the commodities are equivalent. This stability of attitudes to risks with respect to variation in commodity increases the psychological significance of an agent's attitudes to risks.
2 In general, a representation of a set of preferences is relative to the set. For an agent, the probability and utility functions that yield an expected-utility representation of one set of preferences may be inconsistent with the probability and utility functions that represent another set of preferences. This does not happen for a rational ideal agent, but may happen for other types of agent.
To illustrate the increase in psychological significance, consider the explanation of a risk-averse agent's willingness to pay no more for a gamble than a particular amount less than the gamble's expected monetary value. The difference between the expected monetary value and the highest amount the agent is willing to pay equals the agent's risk premium for the gamble, as Section 1.3.2 explains. A concave utility curve for amounts of money implies a willingness to forfeit both a gamble and the gamble's risk premium in order to have for sure an amount of money equal to the gamble's expected monetary value. Suppose that the utility scale for money assigns a utility of 1 to a unit of money. Given concavity of the utility curve, the expected utility of a gamble then is less than the gamble's expected monetary value. Consequently, the amount of money with a utility equal to the gamble's expected utility is less than the gamble's expected monetary value.

Figure 2.2 illustrates these points for a gamble with a 50% chance of winning and a 50% chance of losing. The upper endpoint of the straight line represents winning, and the lower endpoint represents losing. Each point has a pair of coordinates (m, u) specifying an amount of money and an amount of utility. The straight line's midpoint has a first coordinate that specifies the gamble's expected monetary value and a second coordinate that specifies the gamble's expected utility. In the figure these equal m2 and u1, respectively. The amount of money m1 has a utility equal to the gamble's expected utility u1, so m2 – m1 equals the risk premium. The utility u2 attaches to the amount of money m2 and is more than the gamble's expected utility u1. I call u2 – u1 the certainty bonus that comes from having for sure the amount of money equal to the gamble's expected monetary value.
Figure 2.2 Risk Premium m2 – m1 and Certainty Bonus u2 – u1 (a concave utility curve plotting utility against money, marking m1 and m2 on the money axis and u1 and u2 on the utility axis)
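The quantities in Figure 2.2 can be computed in a short sketch. The square-root curve and the $100 stake are illustrative assumptions; with them, the certainty equivalent m1, the risk premium m2 – m1, and the certainty bonus u2 – u1 all fall out of the definitions above.

```python
import math

def u(m):
    """Illustrative concave utility curve (an assumption for the sketch)."""
    return math.sqrt(m)

def u_inverse(util):
    """Inverse of the square-root curve."""
    return util ** 2

# A hypothetical 50-50 gamble between winning $100 and winning $0.
m2 = 0.5 * 100 + 0.5 * 0        # expected monetary value
u1 = 0.5 * u(100) + 0.5 * u(0)  # expected utility of the gamble
m1 = u_inverse(u1)              # amount of money whose utility equals u1
u2 = u(m2)                      # utility of the expected monetary value

print("risk premium m2 - m1:", m2 - m1)     # 50 - 25 = 25
print("certainty bonus u2 - u1:", u2 - u1)  # about 7.07 - 5 = 2.07
```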
If the utilities of amounts of money are extracted from preferences among gambles, then the shape of the utility curve for these amounts does not explain a willingness to pay the risk premium to replace a gamble with its expected monetary value. The willingness to pay the risk premium is just an implication of the curve. It is a feature of the preferences among gambles that the utility curve for amounts of money represents. The curve represents but does not generate the preferences.

An explanation of an agent's risk premium uses an option's expected utility, as opposed to its expected monetary value. A gamble's expected monetary value, in contrast with its expected utility, reviews only possible monetary outcomes and ignores risk. An explanation of an agent's willingness to pay a gamble's risk premium to exchange the gamble for its expected monetary value holds that a gamble's outcome involves not just money but also a risk in the sense of an exposure to chance that the agent dislikes. The expected utility of the gamble, using its comprehensive possible outcomes, is less than the gamble's expected monetary value because the gamble's risk comes with each possible monetary outcome. An agent's dislike of the gamble's risk, when included in the gamble's possible outcomes, lowers the gamble's expected utility to make it equal the utility of an amount of money less than the gamble's expected monetary value.

Peterson (2012: 179–83) lists three accounts of aversion to risk: (1) concavity of the utility curve for a commodity, (2) a reason not to maximize expected utility, and (3) a reason, which the literature calls ambiguity aversion, to prefer gambling on an event of known probability to gambling on an event of unknown probability.3 I have stated theoretical limitations of the first account. The second account treats aversion to risk in the sense of an option's exposure to chance but does not take an option's risk as a consequence of the option when it objects to maximizing expected utility. Building an option's risk into the option's consequences removes the objection, as Section 5.4 argues. The third account of aversion to risk also treats an option's risk in the sense of its exposure to chance and uses the difference in two gambles' risks to ground a preference between them despite their having the same expected utilities ignoring risk.

3 Halevy (2007) surveys attitudes to ambiguity. Machina (2009) treats ambiguity's role in rank-dependent utility theory. Al-Najjar and Weinstein (2009) claim that choices that exhibit ambiguity aversion are irrational and that standard normative decision theory, as in Savage ([1954] 1972), should not be amended to accommodate such choices. I allow for the rationality of choices that exhibit ambiguity aversion and the rationality of ambiguity aversion itself. By taking ambiguity as a type of risk that figures in an option's consequences, I preserve the traditional version of the principle to maximize expected utility that Section 5.3 reviews.
Its objection to comparing gambles according to their expected utilities disappears if a gamble's consequences include the gamble's risk, as they do according to Chapter 5's account of a gamble's consequences.4

The theory of rational choice I present (1) treats risk as a consequence of a risky option, (2) introduces probability and utility so that they are not defined in terms of preferences among options but rather explain these preferences, (3) takes the expected-utility principle as normative and not just a conventional feature of representation of preferences among options, and (4) uses features of possible outcomes, such as risk, to explain their utilities.5
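A toy rendering suggests how point (1) works. Entering an option's risk as a consequence, here with a simple additive disutility that merely anticipates Chapter 4's mean-risk analysis, lets the expected-utility principle register aversion to risk without amendment; the numbers are hypothetical.

```python
# Two options with equal expected monetary value; the additive risk
# penalty below is an assumption made only for illustration.

def expected_utility(outcomes, risk_disutility):
    """`outcomes` pairs probabilities with monetary utilities; the option's
    risk is entered as a consequence of every possible outcome."""
    return sum(p * (util + risk_disutility) for p, util in outcomes)

safe = [(1.0, 10.0)]                # $10 for sure: no exposure to chance
gamble = [(0.5, 20.0), (0.5, 0.0)]  # 50-50 between $20 and $0

# Ignoring risk, the options tie.
print(expected_utility(safe, 0.0), expected_utility(gamble, 0.0))   # 10.0 10.0

# An intrinsic aversion to the gamble's risk, entered as a consequence,
# breaks the tie without amending the expected-utility principle.
print(expected_utility(safe, 0.0), expected_utility(gamble, -2.0))  # 10.0 8.0
```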
2.4. Quantities

The representational account of quantities, which I adopt, does not require that quantities represent only comparisons, such as preferences among acts. They may represent magnitudes, such as the strengths of an agent's beliefs. I take probabilities and utilities as representations of magnitudes that are features of an agent's attitudes.6

The history of temperature, as Chang (2004) presents it, shows that scientists conceived of temperature as a theoretical quantity useful for science. They adopted various measures of temperature, some taken to be more accurate than others. Magnitudes, such as heat, manifest themselves imperfectly in real measurements and perfectly in ideal measurements. However, scientists did not take temperature, a representation of heat, to be defined by a single method of measurement, even assuming ideal conditions for applying the method of measurement. A theoretical quantity need not be defined by an accurate method of measuring it.
4 A fourth account of aversion to risk comes from Luce's (2010a, 2010b) treatment of p-additive utility functions. Davis-Stober and Brown (2013) explain Luce's account of aversion to risk and use his account to classify agents as attracted to risk, indifferent to risk, or averse to risk. Stefánsson and Bradley (2019) hold that attitudes to risk are desires about how chances are distributed. Holt and Laury (2002) describe patterns of risk aversion found in empirical studies.

5 Dietrich and List (2016b) argue that economic theories of choice are committed to mental states that explain choices. Briefly, they argue that representing choices to be "as if" maximizing expected utility carries a commitment to taking the representation's probability function and utility function as yielding, respectively, degrees of belief and degrees of desire. The representation carries a commitment to mental states, they say, because it uses probabilities and utilities that occupy the role of mental states. The representation obtains only by coincidence unless mentalist probabilities and utilities generate the preferences.

6 Heilman (2015) explains the representational theory of measurement and some objections to it, and presents a nonempirical account of the theory that sidesteps the objections.
When multiple accurate methods exist, using each as a definition produces inconsistent definitions. Length is another theoretical quantity that we measure in multiple ways, without any having definitional priority. A measurement of length is accurate when it faithfully represents the magnitude of extension. Theoretical magnitudes need not be directly observable, whereas measurements of the magnitudes are directly observable. They yield quantities that represent magnitudes of objects.

A measure of length may use a concatenation operation, say, putting together two sticks end to end in a straight line. We assign lengths to objects so that summation of the lengths of two objects represents the length of the concatenation of the two objects. Concatenating rulers along a side of a stick measures the stick's length. We adopt the assignment of lengths obtained using the concatenation operation because we believe that these lengths faithfully represent magnitudes of extension. The extensions of objects exist independently of the concatenation operation for length, and beliefs about their extensions justify selection of a concatenation operation. A stick extends from one end to the other. Its extension is a property it possesses.

The properties that quantities represent explain relations among objects with these quantities. The lengths of two sticks explain why one stick extends farther than the other. One stick may be longer than another stick because the magnitude its length represents is greater than the magnitude the other stick's length represents. An object's length not only explains the object's comparisons with other objects but also explains how much gravitational attraction makes it sag when supported by its ends and how long sound takes to travel from one end of the object to the other end.

A quantity assigned to an object uses a number to represent a magnitude of the object. Relations among the numbers assigned to objects also represent relations among the magnitudes that the objects possess and relations the objects have because of relations their magnitudes have. A quantity belongs to a representation, but the magnitude of an object that the quantity represents belongs to the object.

A method of measuring a quantity, such as length, and the magnitude the quantity represents, such as an object's extension, explain the result of measuring the quantity. The magnitude explains the approximate agreement of different, imperfect methods of measuring the quantity. Ockham's razor prohibits introducing a property of an object if it explains nothing, but by explaining the results of measurements, and properties of objects and relations among objects, magnitudes earn their keep.
2.5. Probability

Some preferences are basic and not derived from other preferences. For example, preferences between flavors of ice cream are typically basic. However, preferences between options are typically derivative and rest on evaluations of each option's chances for possible outcomes. An explanation of a preference's rationality may apply a principle of rationality and, to go deeper, may also explain why rationality requires conformity with the principle. I take probability and utility as quantities that a theory of rationality uses to explain the rationality of preferences among options. This section treats probability, and the next section treats utility.

Conative attitudes to risks depend on the probabilities the risks involve, so an account of these attitudes incorporates an account of probabilities. As Section 1.2 notes, probabilities come in two kinds, evidential and physical. Science discovers physical probabilities. Principles of epistemology settle the evidential probabilities that an agent's evidence warrants. This section treats evidential probabilities, as general principles for identifying rational choices use them.

Evidential probabilities are rational degrees of belief of a cognitively ideal agent. Such an agent has access to all logical, mathematical, and a priori truths. Given reflection on a truth of these types, the agent knows it. Rationality requires an ideal agent to have degrees of belief that conform to the laws of probability, for every probability model that the agent employs.

Strength of belief is a magnitude measurable in various ways with more or less accuracy. Suppose that a doxastic attitude toward a proposition has a strength sufficient to motivate paying 80 cents, and no more, for a gamble that pays a dollar if the proposition is true and otherwise pays nothing. Then on a scale from 0 to 1, the attitude's strength is 0.8 according to a method of measurement using willingness to gamble. Degrees of belief are numbers that represent strengths of belief. The numbers for strengths of belief may represent one belief's being twice as strong as another belief. An attitude's strength is a property it possesses. The unequal strengths of two doxastic attitudes explain why one attitude is stronger than the other. Strengths of belief explain an agent's betting quotients and preferences between bets formed according to expected utilities.
They are doxastic attitudes definitionally independent of choices and may explain the rationality of choices. The distinction between degrees of belief and strengths of belief arises because a strength of belief is a feature of an attitude that affects other attitudes and behavior. It is not just a number figuring in a representation of an agent's preferences among options, such as the value of a function used in a representation of these preferences. A function with doxastic attitudes as arguments may represent comparative relations among the attitudes, and the value of the function for an attitude may represent its strength. The number the function then yields is a degree of belief, and it represents the attitude's strength.

Given an ideal agent's evidence, rationality requires a unique doxastic attitude to a proposition, but the attitude's representation may be a set of probability assignments rather than a unique probability assignment if the doxastic attitude is not sharp. Rationality is permissive concerning choice if a set of probability assignments represents an agent's rational doxastic attitude. The permissiveness concerning choice appears to be permissiveness concerning doxastic attitude, from the perspective of accounts of probability that extract probabilities from choices. However, permissivism in choice differs from permissivism in doxastic attitude. Rationality may be permissive about choice given an imprecise doxastic attitude, although it strictly regulates an agent's doxastic attitude so that the attitude fits the agent's evidence.

The subjectivity that some Bayesians advocate may be tolerance of agents with the same evidence holding divergent opinions rather than permission for the agents to hold divergent opinions. Current inductive logic does not specify in general the doxastic attitude that an agent's evidence settles. Out of tolerance, a Bayesian may condone each agent's responding personally to evidence, until logicians know better how evidence settles an appropriate doxastic attitude. Subjectivism as tolerance may join the view that rationality settles the same doxastic attitude for all ideal agents with the same evidence.
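The measurement of strength of belief by willingness to gamble admits a small sketch. The linear-utility assumption for small stakes is an idealization, and the prices are the section's example.

```python
def degree_of_belief(max_price, stake=1.0):
    """Infer a degree of belief from the most an agent will pay for a
    gamble paying `stake` if the proposition is true and nothing otherwise.
    Assumes utility is linear in such small amounts."""
    return max_price / stake

# The section's example: paying 80 cents, and no more, on a $1 stake.
p = degree_of_belief(0.80)
print(p)  # 0.8

# A coherence check: degrees of belief in a proposition and its negation
# should sum to 1, as the laws of probability require.
assert abs(p + degree_of_belief(0.20) - 1.0) < 1e-9
```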
2.6. Utility

As with probability, I introduce utility for a cognitively ideal agent who is rational. An agent's utility assignment to a proposition represents the extent to which the agent wants the proposition's realization, more precisely, the agent's strength of desire, all things considered, that the proposition be realized.
The proposition's utility is the agent's degree of desire that the proposition hold, taking degree of desire to have a technical sense covering aversion as well as desire. Letting the zero point for a utility scale represent indifference, positive utilities arise from desires and negative utilities arise from aversions. In a decision problem, an option's utility evaluates the option for formation of preferences among options.

Different types of conative attitude yield different types of utility. Intrinsic attitudes yield intrinsic utilities, and extrinsic attitudes yield extrinsic, or comprehensive, utilities, usually called utilities tout court. Conative attitudes to a proposition, and corresponding types of utility, differ in scope of evaluation. Intrinsic utility evaluates a proposition considering only the proposition's a priori implications. Comprehensive utility evaluates a proposition considering all that would be true if the proposition were true. The comprehensive utility of an option's consequences in a world does not evaluate just the option's consequences; it evaluates all that goes with the consequences and so evaluates the option's world, just as the option's comprehensive utility does. However, a third type of utility, causal utility, evaluates a proposition considering just the consequences of the proposition's realization. An option's causal utility narrows the scope of the option's evaluation to its consequences. In a decision problem, an agent's evaluation of an option, propositionally represented, may produce the option's intrinsic utility, causal utility, or comprehensive utility, depending on the evaluation's scope.

A utility of restricted scope differs in definition from a utility given a condition, that is, a utility under an assumption. A conditional utility may have unrestricted scope. A bet's comprehensive utility evaluates possible outcomes of the bet, that is, possible worlds with the bet. A bet's comprehensive utility, given that it wins, evaluates only possible worlds with the bet's winning. Evaluation of the bet and evaluation of the bet given that it wins are both comprehensive, and so review entire possible worlds, but the conditional evaluation reviews just possible worlds meeting the assumption that the bet wins.7

A desire that an event occur may obtain all things considered or considering only some things. All-things-considered desires generate comprehensive utilities. If an agent desires that an event occur, all things considered, then the event's utility for the agent is positive. In general, the utility for an agent of an event, which a proposition represents, arises from the agent's all-things-considered conative attitude to the event.

7 The utility of an act given a condition is not the utility of the act and the condition. Switching the positions of act and condition may affect the conditional utility but not the utility of the conjunction, as Weirich (1980) observes.
However, an agent's utility assignment to a proposition may evaluate the proposition's realization ignoring some of its features. For example, a patient may evaluate taking a medicine ignoring its bad taste. A utility assignment that ignores some considerations differs in definition from a conditional utility assignment, that is, an assignment under an assumption, even if the two assignments have the same value for a proposition. The utility of taking a medicine ignoring its bad taste may equal the utility of taking the medicine given that it lacks a bad taste, but the first, restricted utility has a different definition than does the second, conditional utility. It ignores a feature of the medicine rather than makes an assumption about the feature. Chapter 4 treats an option's utility ignoring its risk.

An agent's desire for a 40% chance of a dollar may be twice as strong as the agent's desire for a 20% chance of a dollar. Strengths of desire yield ratio comparisons of desires taking indifference as a zero point. The agent's utility assignment represents strengths of desire for the chances and thereby also represents the first desire's being twice as strong as the second desire.

The zero point for strength of desire may be a conative attitude or an object of a conative attitude. An alternative to taking the attitude of indifference as a zero point for strength of desire is taking an object of a conative attitude, such as a proposition expressing the status quo, as a zero point. Although an agent may be indifferent to the status quo, an agent may also desire the status quo or have an aversion to the status quo. For generality, I use indifference, or any proposition to which the agent is indifferent, to serve as the zero point.

The rational degrees of desire of an ideal agent meet various structural constraints so that they generate a utility function that represents conative attitudes and relations that hold because of them. A human agent does not have a utility function, even if rational, if the agent does not comply with rationality's requirements for an ideal agent. However, an application of utility theory may take the utility function of a rational ideal agent as a human agent's utility function if, for the purpose of the application, the human agent sufficiently resembles the rational ideal agent.

Two propositions may have different utilities although they represent events with the same physical realization. An agent may desire to read Mark Twain without desiring to read Samuel Clemens, believing them to be different individuals, even though, because they are the same, reading one amounts to reading the other.
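The ratio comparison above admits a quick check. Weighting the desire for a chance by its probability is an assumption adopted only for illustration; the sketch also shows why moving the zero point would destroy ratio comparisons.

```python
# Strengths of desire on a ratio scale with indifference as the zero point.
# Weighting the desire for a chance by its probability is an assumption
# adopted only to illustrate the ratio comparison.

u_dollar = 1.0               # utility of gaining a dollar
desire_40 = 0.40 * u_dollar  # strength of desire for a 40% chance of a dollar
desire_20 = 0.20 * u_dollar  # strength of desire for a 20% chance of a dollar

print(desire_40 / desire_20)  # 2.0: the first desire is twice as strong

# Moving the zero point destroys ratio comparisons; only rescaling by a
# positive constant preserves them.
def shifted(x):
    return x + 5.0

print(shifted(desire_40) / shifted(desire_20))  # about 1.04, no longer 2.0
```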
A proposition may not represent an event as bad but may just represent an event that in the agent's eyes happens to be bad. Take the proposition that it rains. This proposition does not have negative intrinsic utility because that it rains does not entail that a bad event obtains. Rain is a bad event if the agent is averse to rain, but this aversion is an empirical fact. That rain is the realization of an aversion is not an intrinsic feature of rain.

Some points about the relation between conative attitudes to an event and to a chance of the event clarify the relation between the utility of the event and the intrinsic utility of a chance of the event. An agent's extrinsic attitude to a chance of a bad event takes account of all accompaniments of the chance's realization. A patient may be intrinsically averse to a risk of a bad side-effect from taking a medication but nonetheless extrinsically desire taking the medication, and undergoing the risk, for the sake of health. That a chance has a negative intrinsic utility does not imply that it has a negative utility.

A 100% chance of a bad event is not equivalent to the bad event because the chance, if evidential, is relative to evidence, whereas the bad event is not relative to evidence. An agent has an intrinsic aversion to a 100% chance of a bad event, so characterized, that derives from an aversion to the bad event. That a nonextreme chance of a bad event exists entails that a risk exists, and so has a negative intrinsic utility for an agent with an intrinsic aversion to risk, even if the event's badness is extrinsic.

An attitude's deriving from an intrinsic attitude does not make it intrinsic. An extrinsic attitude may derive from an intrinsic attitude together with empirical information. Even deriving solely from an intrinsic attitude does not make an attitude intrinsic. An intrinsic attitude to a proposition results from appraisal of the proposition's a priori implications. An attitude to a chance of pain is not intrinsic because of its derivation from an intrinsic aversion to pain but because of the agent's aversion to intrinsic features of the chance, characterized as a chance of realization of an intrinsic aversion.

Consider an ideal agent who has an intrinsic aversion to pain and who evaluates a chance of pain. As Chapter 3 argues, the agent, if rational, has a derivative intrinsic aversion to the equivalent chance of realization of an intrinsic aversion. An ideal agent knows his intrinsic aversions, and so sees pain as the realization of an intrinsic aversion. The agent has an intrinsic aversion to this realization of an intrinsic aversion. Although his aversion to a chance of pain is not intrinsic, because it depends not just on the a priori implications of the chance but also on his knowing his aversion to pain, his aversion to the equivalent chance of realization of an intrinsic aversion is intrinsic because it depends only on the a priori implications of the chance.8

8 The reasons for a rational ideal agent's intrinsic aversion to pain may lead the agent to an intrinsic aversion to a chance of pain. If he negatively evaluates pain considering its a priori implications, then for consistency he negatively evaluates a chance of pain because of its a priori implications. The a priori implications of an epistemic chance of pain, the pertinent sort of chance, include uncertainty concerning an intrinsic attitude's realization. An intrinsic aversion to uncertainty is a reason for an intrinsic aversion to the chance. Suppose that an agent evaluates a proposition using its a priori implications and also the agent's knowledge of his own mind. The agent may know that he is extrinsically averse to a proposition's realization and then be averse to its realization because it is the realization of an aversion. For example, an agent may be averse to losing a dollar because he knows it is the realization of an extrinsic aversion. The aversion arises from an intrinsic aversion to realizing an extrinsic aversion but has wider evaluative scope than has an intrinsic aversion. Although such variants of intrinsic aversion are interesting, I put them aside.
2.7. Measurement

Measurement theory offers methods of inferring probabilities and utilities. Let H stand for an option that yields health, let W stand for an option that yields wisdom, and let U stand for comprehensive utility. Suppose that a rational ideal agent desires option H twice as much as option W. Using a scale that adopts U(W) as a unit, U(H) = 2. The utility's measurement uses the intensity of the agent's preference between the two options to infer strength of desire. The intensity is a utility difference on an interval scale, and a ratio of utilities on a ratio scale that uses indifference as the zero point.

Some preferences reveal intensities of preferences. For a rational ideal agent, the intensity of a preference for a to b is greater than the intensity of a preference for c to d if and only if the agent is willing to pay more to obtain a in exchange for b than to obtain c in exchange for d. Also, the largest amount the agent is willing to pay to choose between two options instead of letting 50–50 randomization select one reveals the intensity of the agent's preference between the two options; the intensity is twice the utility of the amount, as the sketch at the end of this section verifies. Economic theory may not need intensities of preference in addition to preferences but does not mount a case against intensities of preference. The chapter's appendix sketches a formal account of measurement of utilities using intensities of preference.

It is possible to use an agent's preferences among options, including gambles, to discover the agent's utility assignments to options and to their possible outcomes, and the agent's intrinsic-utility assignments to features of their possible outcomes.
Intrinsic utilities of features of possible outcomes ground attitudes to an option's possible outcomes and thus are revealed in utilities of possible outcomes and utilities of options, which are revealed in preferences among options. A method of measuring a rational ideal agent's utilities assumes that the agent's preferences among options follow utilities and assumes that the utility of an option's possible outcome is a sum of the intrinsic utilities of the outcome's realizations of basic intrinsic attitudes, as Weirich (2015a: Chap. 2) argues. The chapter's appendix uses an off-the-shelf representation theorem to show the possibility, for rational ideal agents, of measuring attitudes toward risks using preferences among the options in a decision problem. Preferences among options reveal, but do not define, the intrinsic utility of an option's risk in the sense of its exposure to chance, as measured by the difference between the option's comprehensive utility and the option's comprehensive utility ignoring the option's risk, a measure that Chapter 4 supports.9

9 Cozic and Hill (2015) show how representation theorems and their proofs may be used to define concepts of decision theory such as probability and utility. However, one need not use the theorems and their proofs to define these concepts. Psychological theory and norms of decision theory may introduce the concepts.
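The randomization test mentioned in this section admits a quick arithmetic verification. The utilities below are hypothetical, and utility is assumed additive in the payment.

```python
# The randomization test: the most an agent will pay to choose between two
# options, instead of letting a 50-50 lottery pick one, reveals the
# intensity of the preference.

U_a, U_b = 9.0, 3.0  # hypothetical utilities of options a and b, a preferred

# At the maximum payment c, choosing a and paying c is exactly as good as
# the randomization: U_a - u(c) = 0.5 * U_a + 0.5 * U_b.
u_c = U_a - (0.5 * U_a + 0.5 * U_b)  # utility of the maximum payment

# The intensity of the preference for a over b is twice the utility of c.
assert U_a - U_b == 2 * u_c
print(U_a - U_b, 2 * u_c)  # 6.0 6.0
```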
2.8. Summary

This chapter introduces probability and utility assignments used to characterize and evaluate risks. It describes attitudes to risks, including intrinsic and extrinsic attitudes, and types of utility of various evaluative scopes. The following chapter formulates norms for attitudes toward risk, such as rationality's requirement that an agent have an intrinsic aversion to a risk taken as a chance of a bad event. Later chapters show how rational assessments of an option's risk and its other consequences combine to yield a rational assessment of an option that directs a rational decision concerning the option's adoption.
2.9. Appendix: Representation

I use utility functions that have a representational interpretation. They represent strengths of desire, which may be inferred from preferences and intensities of preferences.
A utility function may directly represent preferences and intensities of preferences and, through their representation, indirectly represent strengths of desire. This appendix in its first part introduces a comprehensive-utility function that represents preferences and intensities of preferences among possible worlds and in its second part introduces an intrinsic-utility function that represents preferences among consequences of options in a decision problem. An option's expected utility comes in the standard way from comprehensive utilities of possible worlds, and the expected intrinsic-utility of an option's consequences comes in the standard way from the intrinsic utilities of possible composites of consequences.
2.9.1. Preferences and Intensities of Preferences

An account of quantities as representations of relations chooses objects to treat, such as options and their possible outcomes, taken either comprehensively or selectively. It decides what relations among objects to represent, such as preferences among options and, perhaps also, the intensities of these preferences. Representations have many purposes, and some may represent attitudes to an option's possible outcomes as well as preferences among options. Also, an account decides how to represent the relations, say, to represent preferences among options as forming an order, as maximizing expected utility, or, deviantly, as minimizing expected utility.

By design, a representation of preferences may use functions that represent introspectively accessible mental states that an agent can use to form preferences. The explanatory principle of expected-utility maximization uses quantities that represent mental states to which the agent has introspective access, namely, strengths of belief and strengths of desire, because these mental states motivate a rational ideal agent. The explanatory principle's probability and utility functions represent comparisons of attitudes and comparisons of differences in attitudes. Its utility function, for example, assigns numbers to propositions expressing options, and to propositions expressing possible outcomes of options, to represent intensities of preferences as well as preferences. From among the multiple ways of representing an agent's preferences among possible outcomes, I adopt a representation that explains the rationality of the agent's preferences using the agent's introspectively accessible doxastic and conative attitudes.
A utility may represent a strength of desire revealed (imperfectly) in survey responses using a seven-point Likert scale. For example, a student may reveal her strength of desire for a college education by saying whether she is strongly for it, is for it, is weakly for it, is neither for it nor against it, is weakly against it, is against it, or is strongly against it.

The explanatory principle of expected-utility maximization takes utilities to represent a rational ideal agent's strengths of desire. For such an agent, the strength of desire that a proposition hold matches the intensity of the preference for the proposition's holding to its not holding, taking its not holding as a zero point for utility. Hence representing intensities of preference also represents strengths of desire. A preference for A to B is more intense than a preference for C to D if the agent desires A more than B, desires C more than D, and the difference between the agent's desire for A and desire for B is greater than the difference between the agent's desire for C and desire for D. If A, B, C, and D meet these conditions, a representation may assign utilities to A, B, C, and D so that U(A) – U(B) > U(C) – U(D). The result of representing intensities of preference this way is an interval scale for utility.

Consider a utility representation for three objects of desire: A, B, and C. Suppose that their strict preference-ranking, starting at the top, is A, B, C, and that the intensity of the preference for A over B equals the intensity of the preference for B over C.

Existence of a representation: A representation of their comparisons is U(A) = 3, U(B) = 2, U(C) = 1. For all x and y, U(x) ≥ U(y) if and only if x is weakly preferred to y; and for all x, y, z, and w, U(x) – U(y) ≥ U(z) – U(w) if and only if the intensity of the preference for x over y is greater than or equal to the intensity of the preference for z over w.

Uniqueness of the representation: The representation U is unique up to positive linear transformations. Suppose U′ is a positive linear transformation of U. Then U′ is also a representation of the preferences and their intensities, because it preserves the order of A, B, and C and also preserves interval comparisons. Moreover, suppose that U′ represents preferences and their intensities. Let a = [U′(B) – U′(C)]/[U(B) – U(C)], and let b = U′(C) – aU(C). Then a > 0 and U′(x) = aU(x) + b, so U′ is a positive linear transformation of U.
A general representation theorem extends such results about the existence and uniqueness of a type of representation of preferences and intensities of preferences to cases with more than three elements and to cases in which not all intensities of preferences for adjacent elements in the preference-ranking are equal, as Weirich (2018b) explains. One generalization takes the objects of desire to be possible worlds, assumes a worst world and a best world, and assumes, for any n, that there are n + 1 possible worlds going from the worst to the best in n equal steps, that is, such that the difference in strength of desire going from one possible world to the next is the same as for another possible world and the next. The representation proceeds by construction. For each possible world w, assume that the agent is indifferent between w and, for an n-step scale starting with the worst world, the possible world that the mth step of the scale reaches. Then the utility of w is m/n. This utility representation of preferences and intensities of preferences among worlds is unique up to positive linear transformations.

For more generality, drop the assumption of n-step scales. Let a p-gamble yield the best world with probability p and the worst world with probability 1 – p. For each world w, find the p-gamble such that the agent is indifferent between the gamble and the world w. The probability p equals the utility of the world on a scale with the worst world getting utility 0 and the best world getting utility 1. For a rational ideal agent, the utility function constructed this way represents preferences and intensities of preferences among worlds and is unique up to positive linear transformations. Other general representation theorems are possible using the techniques in Krantz, Luce, Suppes, and Tversky (1971). A utility representation obtained, assuming a rational ideal agent, by inference rather than by definition, reveals the agent's utility assignments, taken as strengths of desire.
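The p-gamble construction can be rendered as a short sketch. The indifference points below are hypothetical; the sketch constructs a utility function from them and checks that a positive linear transformation preserves both the order and the interval comparisons.

```python
# Constructing a utility function from p-gambles: for each world, find the
# probability p of getting the best world (against the worst) that leaves
# the agent indifferent between the gamble and the world; that p is the
# world's utility on a 0-to-1 scale. The indifference points are hypothetical.

indifference_p = {"worst": 0.0, "modest": 0.25, "good": 0.75, "best": 1.0}
U = dict(indifference_p)  # the utility of each world equals its p

# Any positive linear transformation represents the same preferences and
# intensities of preferences.
a, b = 4.0, -1.0
U_prime = {w: a * u + b for w, u in U.items()}

# The order of worlds is preserved...
assert sorted(U, key=U.get) == sorted(U_prime, key=U_prime.get)
# ...and so are interval comparisons: the step from "modest" to "good"
# remains twice the step from "good" to "best".
assert U_prime["good"] - U_prime["modest"] == 2 * (U_prime["best"] - U_prime["good"])
```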
2.9.2. Composites of Consequences

Composites have components with utilities that may offer a way to calculate the utility of the composite. For example, absent the diminishing marginal utility of money, the utility of an endowment divided into accounts may equal the sum of the utilities of the accounts.

A representation theorem of conjoint measurement in Krantz et al. (1971: 257, theorem 2) supposes that nonempty sets A1 and A2, and a binary relation ≿ on A1 × A2, meet various conditions, including an independence (or separability) condition, to form an additive conjoint structure. Then it states that there exist functions φi from Ai, i = 1, 2, into the real numbers such that, for all a, b ∈ A1 and p, q ∈ A2,
ap ≿ bq if and only if φ1(a) + φ2(p) ≥ φ1(b) + φ2(q)
Moreover, if φi′ are two other functions with the same property, then there exist constants α > 0, β1, and β2 such that
φ1′ = αφ1 + β1 and φ2′ = αφ2 + β2
The first formula expresses a representation of the relation among composites using a measure of their components, and the second formula expresses the representation's uniqueness up to positive linear transformations.

In an agent's decision problem, the representation theorem applies to preferences among composites of an option's possible consequences and to preferences among the composites after fixing some of their features, that is, to conditional preferences among the composites. It works for preferences among composites of consequences divided into risk and independent consequences under the assumption that for any risk and any independent consequences, some composite of consequences combines them. Not all the composites need be possible consequences of the options in the decision problem; some may be possible consequences of hypothetical options.10

The theorem shows how, given the mutual separability of an option's risk and its independent consequences, to discover an agent's intrinsic attitude to an option's risk using the agent's preferences among composites of consequences, including conditional preferences. It yields an additive intrinsic-utility assignment to consequences and shows the assignment's existence and uniqueness up to positive linear transformations. The representation takes an option's utility as a sum of the intrinsic utility of the option's risk and the expected intrinsic-utility of its consequences besides its risk. The preference-comparisons of all pairs of options elicit the intrinsic utility of an act's risk in the sense of its exposure to chance, assuming that this intrinsic utility influences preferences among options according to mean-risk evaluations of options, as in Chapter 4.
10 Hansson (1988) treats conjoint measurement and aversion to risk.
If a bet has the consequence of undertaking a risk and also the consequence of gaining $10, then it has the consequence of the risk and gain together. The pair of consequences, however, does not form a consequence independent of the risk. In a decision problem for an agent, the representation assumes, for a risky option, a twofold division of a possible composite of consequences into the option's risk and the option's independent consequences. It also assumes that the agent's attitude to the option's risk affects the intrinsic utility of the composite of consequences, given the representation's characterization of these consequences, independently of the agent's attitude to the consequences in the composite that are independent of the option's risk. The independence of the two attitudes establishes the mutual separability of the option's risk and its independent consequences.

The literature on separability, or decomposability, treats preferences among composites formed by filling a set of categories. As Section 4.3 explains, a category is separable from the other categories if and only if the preference-ranking of composites that vary in occupant of the category is the same for any compatible way of filling the other categories. Given the mutual separability of two categories filled to form a composite, and some other conditions the representation theorem assumes, preferences among the composites have a utility representation according to which the utility of a composite is a sum of the utilities of its components.

An agent's desire-expressing intrinsic-utility function is also additive and represents preferences among composites of consequences, as Chapter 4 shows. Hence, according to the agent's preference-ranking of composites of an option's risk and its independent consequences, an option's risk is separable from the option's independent consequences, and vice versa. Therefore, assuming the other conditions of conjoint measurement, some intrinsic utility function that is unique up to positive linear transformations represents preferences among composites of consequences and makes the intrinsic utility of an option's composite of consequences equal the sum of the intrinsic utility of the option's risk and the intrinsic utility of the option's independent consequences. This intrinsic-utility function, given scale transformations, matches the agent's desire-expressing intrinsic-utility function, assuming that the agent is rational and ideal. Hence, conjoint measurement reveals the agent's intrinsic-utility function, and, in particular, its value for an option's risk.
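A toy additive conjoint structure illustrates the separability at work. The intrinsic utilities below are hypothetical, and the twofold division into a risk and independent consequences follows the representation described above.

```python
# A toy additive conjoint structure: composites pair an option's risk with
# its independent consequences, and intrinsic utility adds across the two
# categories. All numbers are hypothetical.

iu_risk = {"no risk": 0.0, "mild risk": -1.0, "steep risk": -3.0}
iu_rest = {"$0": 0.0, "$10": 2.0, "$20": 4.0}

def iu_composite(risk, rest):
    """Additive intrinsic utility of a composite of consequences."""
    return iu_risk[risk] + iu_rest[rest]

# Separability: the ranking of risks is the same however the other
# category is filled, which is what licenses the additive representation.
for rest in iu_rest:
    ranking = sorted(iu_risk, key=lambda r: iu_composite(r, rest))
    assert ranking == ["steep risk", "mild risk", "no risk"]

# Comparisons of composites that differ only in risk reveal the intrinsic
# utility of an option's risk, up to choice of scale.
print(iu_composite("mild risk", "$10") - iu_composite("no risk", "$10"))  # -1.0
```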
3 Rational Attitudes toward Risks

Rationality is permissive about attitudes to many events. It tolerates attraction to loud music and also aversion to loud music. However, rationality imposes requirements on attitudes to risks. Rationality's requirements for an attitude to a risk depend on the type of risk, the type of attitude, the type of agent, and the agent's circumstances. They treat differently physical and evidential risks, and also treat differently a chance of a bad event and overall exposure to chance. This chapter treats rationality's requirements for agents who are cognitively ideal and in ideal circumstances for forming attitudes toward risks, and who have rational attitudes, except possibly for an attitude that a requirement up for justification targets. I present principles of consistency and proportionality for attitudes to risks, consider whether risks of losses may rationally loom larger than prospects of gains, and compare rational attitudes to past and to future risks.
3.1. Reasons for Attitudes

Some reasons an agent has to perform acts are internal, in the sense that they are accessible to the agent with reflection. The reasons I treat are internal to the agent. Basic intrinsic attitudes, with their strengths, furnish such reasons. They have motivational power for the agent. Principles involving internal reasons govern attitudes toward an option, its risk, and its other consequences.

Rationality regulates the reasons for having and for not having conative attitudes. Reasons for desires are the nature of the object of desire or the object of desire's being a means to fulfillment of another desire. An account of the reasons for desires, both intrinsic and extrinsic desires, disqualifies as a reason the object's attainability, because a rational agent need not limit desires to objects he can attain. He may desire wisdom even if it is not attainable. Because it is possible to want something whether or not it is attainable, unattainability does not undermine a desire. Similarly, it is possible to be averse to an event that has no chance of occurring.
An adult may be averse to being a teenager, remembering that awkward age, without any chance of repeating it. Because unattainability does not count against a conative attitude, exhibiting sour grapes is irrational. It is irrational to cease wanting something because one discovers it is beyond one's grasp. Desires should not change because of empirical evidence about the attainability of the object of desire. In particular, desires should not change to make them easier to satisfy. A cognitively ideal agent in ideal circumstances for holding conative attitudes does not suffer frustration from failing to satisfy a desire for an unattainable object, and so need not eradicate unattainable desires to prevent frustration.1

The reasons for wanting or not wanting something may differ from higher-order reasons for wanting or not wanting to want or to not want it. A person has a reason for not wanting to want objects he cannot obtain. It may be pointless or even frustrating to desire what one cannot obtain. Although attainability of the object of desire may rationally influence the holding of the desire, it does not figure among the reasons for or against the desire itself, that is, the attitude to the desire's object. The reasons for or against the desire itself evaluate the object of desire. The reasons for or against holding the desire evaluate also having the desire as a means to other events. Preventing desires that bring frustration makes sense for a nonideal agent in nonideal circumstances for holding conative attitudes. However, for an ideal agent in ideal conditions for holding conative attitudes, the unattainability of an object is not a reason against a desire for the object. In ideal conditions for holding conative attitudes, a rational ideal agent forms a desire according to reasons involving the object's features and not also according to higher-order reasons for having or not having the desire as a means to other events.2

In a similar vein, Binmore (2009: Sec. 1.4) holds that attitudes to acts, states, and outcomes should all be independent, and Gilboa (2010: Chap. 1) holds that attitudes to acts and outcomes should be independent. The independence they have in mind rules out sour grapes. The fox who ceases to want the grapes he learns he cannot reach is irrational because he lets a judgment about attainability influence his evaluation of the grapes, contrary to the independence condition.

1 For comparison, a cognitively ideal agent in ideal circumstances for holding beliefs apportions belief to the evidence without adjusting for the comfort or discomfort beliefs bring.

2 Conditions are not ideal for forming a desire if forming the desire considering its object's features incurs a penalty, that is, if a rational desire is punished.
What about not wanting something you already have? That you have attained something is a reason not to want it, right? This point uses a sense of wanting different from the sense that the independence condition uses. Wanting sometimes means lacking and trying to get. One does not lack what one has, and it is irrational to try to get what one already has. If wanting involves trying to get, then wanting the unattainable is trying to get what you cannot have. This is wasted effort and is irrational. But an agent may want in a sense what he already has. A home owner may want to have a house although he already has a house. He gains satisfaction from having a house. His wanting a house is a pro-attitude. I take desire as a pro-attitude that need not involve trying to get and take aversion as a con-attitude that need not involve trying to prevent. Rationality makes a pro-attitude independent of beliefs about the attainability of the attitude's object, and makes a con-attitude independent of beliefs about the preventability of the attitude's object. Utilities represent such rational pro- and con-attitudes.
3.2. A Risk as a Chance of a Bad Event

Nothing recommends a risk, understood as a chance of a bad event and considered by itself. Undertaking a risk may be attractive because it brings about some good event, but a chance of a bad event considered by itself should not be attractive to an agent. In fact, rationality requires an intrinsic aversion to a chance of a bad event if the agent is aware of it. Rationality does not require an intrinsic aversion to a physical chance of a bad event, because a rational agent may be justifiably ignorant of the physical chance. An agent unaware of the risk of illness from exposure to radiation may not be averse to the risk. Rationality's requirement that an agent have an intrinsic aversion to a risk he faces assumes that the agent is aware of the risk, as is the case for an evidential risk if the agent is ideal. Evidential risks, involving evidential probabilities, are products of uncertainty, and because ideal agents are aware of their mental states, they are aware of evidential risks, and therefore rationality requires them to have intrinsic aversions to these risks.3
3 Kaplan and Garrick (1981: 24), speaking of evidential risk, agree: “Considered in isolation, no risk is acceptable! A rational person would not accept any risk at all except possibly in return for the benefits that come along with it.”
Negative utility indicates aversion on a scale that uses indifference as its zero point. The intrinsic utility of a risk of a bad event, so characterized, is the product of the probability and utility of the event, as the sketch below illustrates. Because the event has a negative utility, the intrinsic utility of the risk is also negative, which indicates that the risk is an object of an intrinsic aversion.

A risk in the sense of a chance of a bad event, such as a chance of injury, inherits the badness of the event. An intrinsic aversion to a chance of a bad event derives from the aversion to the bad event. The aversion to the event need not be intrinsic to generate an intrinsic aversion to a chance of the event. A rational ideal agent has an intrinsic aversion to realization of an extrinsic aversion and also to a chance of the extrinsic aversion's realization so characterized. Because the intrinsic aversion to the chance is independent of the type of aversion to the event, it does not rest on another intrinsic attitude and is therefore basic. Rationality requires an agent to have a basic intrinsic aversion to a risk taken as a chance of a bad event.

The object of an aversion has a propositional representation. An aversion to risk leaves open ways of making risks propositionally definite. An agent may have aversions of different intensities toward a risk of death from cancer and a risk of death in an auto accident. The risks differ, despite being alike risks of death, because they have different propositional objects.

An intrinsic aversion arises from the a priori implications of its object's realization. Even if an agent has an intrinsic aversion to realization of an extrinsic aversion, and has an extrinsic aversion to losing a dollar, the agent does not thereby have an intrinsic aversion to losing a dollar because losing a dollar does not by itself imply realization of an extrinsic aversion. It implies realization of an extrinsic aversion only given that the agent has an extrinsic aversion to losing a dollar. An intrinsic aversion arises to the proposition that he realizes an extrinsic aversion by losing a dollar. Similarly, an intrinsic aversion arises to the proposition that he faces a chance of realizing an extrinsic aversion by losing a dollar. Because an ideal agent knows his conative attitudes, he knows that he has an extrinsic aversion to losing a dollar, and so knows that if he faces a risk of losing a dollar, he thereby faces a risk of realizing an extrinsic aversion by losing a dollar. He has an intrinsic aversion, not to the risk of losing a dollar, but to the risk of realizing an extrinsic aversion by losing a dollar. The risks have the same physical realization but different propositional characterizations.
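The product rule for the intrinsic utility of a risk admits a one-line rendering; the event's utility of -10 is hypothetical.

```python
def intrinsic_utility_of_risk(probability, utility_of_event):
    """The product rule: the intrinsic utility of a chance of a bad event
    is the probability of the event times the event's utility."""
    return probability * utility_of_event

# A hypothetical bad event with utility -10 on a scale with indifference at 0.
for p in (0.1, 0.5, 1.0):
    print(p, intrinsic_utility_of_risk(p, -10.0))
# Output: -1.0, -5.0, -10.0 — always negative, so the chance is an object
# of intrinsic aversion, and worse in proportion to the probability.
```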
Rationality requires an ideal agent to have an intrinsic aversion to realization of an aversion because (1) he is aware of his aversion and (2), considering the aversion by itself, he does not want its realization. This requirement leads to the agent's intrinsic aversion to a chance of a bad event, characterized as a chance of realization of an aversion. Analogous reasoning supports rationality's requiring an intrinsic attraction to a chance of realizing a good event, or a prospect. Because a chance of a good event inherits the goodness of the event, rationality demands an attraction to a chance for a good event, considered by itself. Because of an agent's intrinsic attitudes to chances of good and bad events, the expected-utility principle may evaluate an option using comprehensive utilities of possible outcomes, as Chapter 4 explains.

An intrinsic aversion to risk in general does not require an extrinsic aversion to every particular risk. A person intrinsically averse to risk may want some particular risks on balance. For instance, a generally cautious person may enjoy high stakes poker. Attraction to a bet's risk on balance may arise because of some accompaniment of the risk, say, the prospect of winning a big pot, and not because of the risk itself, namely, a chance of losing.

In one sense, a means to an event need not be a cause of the event but may be part of a process that produces the event. Given that a risk counts as a means to its accompaniments, the risk, although bad in itself, may be good as a means to prospects of good events. The prospects that accompany the risk may make it good on balance. At an automobile plant, working produces wages and autos. Although the wages do not produce the autos, they are part of a process that produces the autos. The wages, in a sense, are means of auto production. Suppose that an act produces a risk and also a prospect of a good event. Then the risk, although bad in itself, may be good as a means to the prospect, in the sense that the act produces the prospect along with the risk. In some cases, rationality permits an extrinsic attraction to a chance of a bad event because the chance is a means to a goal. A patient may want an operation, along with the risk it brings, because the operation, and the risk it brings, are means to health.4

A commencement speaker may urge graduates to take risks. What acts does the speaker encourage? Not reckless driving. Maybe, for graduates in computer science, starting a new software business. The speaker, understanding the graduates' intrinsic aversions to risks, encourages them to take risks as a means of gaining benefits that compensate for the risks.

4 Besides broadening what counts as a means, an account of extrinsic desire may broaden what counts as an extrinsic desire by grounding extrinsicality in factors besides being a means. A desire may count as extrinsic not only because attaining it is a means of attaining another desire but also because attaining it provides a contribution to attainment of another desire or in some other way promotes attainment of another desire. A politician may desire a particular voter's support because it contributes to election even if it does not count as a means to election.
Rational Attitudes toward Risks 69 risks as a means of gaining benefits that compensate for the risks. Starting a new business runs the risk of failure but also offers the prospect of success. Although rationality requires an intrinsic aversion to the risk of failure, characterized as a chance of a bad event, rationality allows an extrinsic attraction to undertaking the risk if it brings a compensating prospect of success. Rationality requires an intrinsic aversion to a risk taken as a chance of a bad event but permits an extrinsic attraction to a risk that is a means to a good event.5
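To make the probability-utility characterization concrete, the following minimal sketch computes the intrinsic utility of a risk and of a prospect. The numbers, and the division of the act's consequences into one risk and one prospect, are invented for illustration.

```python
def intrinsic_utility_of_risk(probability, event_utility):
    # Probability-utility product, on a scale with indifference at zero.
    return probability * event_utility

# A 10% chance of an injury whose utility is -40 (illustrative numbers).
risk = intrinsic_utility_of_risk(0.10, -40.0)
print(risk)  # -4.0: negative, so the chance is an object of intrinsic aversion

# The same act may also create a prospect, a chance of a good event.
prospect = intrinsic_utility_of_risk(0.90, 10.0)
print(prospect)  # 9.0: positive, an object of intrinsic attraction

# The risk may still be good as a means: the act producing it is
# attractive on balance because the prospect compensates for the risk.
print(prospect + risk > 0)  # True
```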
3.3. An Option’s Risk An option’s risk in a technical sense is the option’s exposure to chance. An option’s exposure to physical chance depends on the option’s physical environment and is insensitive to information. An option’s exposure to evidential chance arises from ignorance of the option’s outcome and so is sensitive to information. If an agent does not have information about an option’s exposure to physical chance, the exposure does not play a role in the agent’s deliberations. However, if the agent is aware of the option’s exposure to physical chance, and lacks only information about its resolution, then the option’s exposure to evidential chance equals the option’s exposure to physical chance. Because an ideal agent knows the option’s exposure to evidential chance, and because the exposure to evidential chance may substitute for the option’s exposure to physical chance, if known, rational deliberations may concentrate on the option’s exposure to evidential chance. An option’s exposure to physical chance, if unknown, does not enter deliberations and, if known, is superseded by the option’s exposure to evidential chance. This section describes rationality’s requirements on an agent’s attitude to an option’s exposure to evidential chance. Suppose that a traveler wonders whether her morning flight will arrive in time for her to give an afternoon presentation. She cares about arriving in time. Not knowing whether her flight will arrive in time is therefore an exposure to evidential chance. Rationality requires an intrinsic aversion 5 Reasons for an extrinsic attitude to a risk may involve living with anxiety until the risk’s resolution, and reasons for an extrinsic attitude to a prospect may involve savoring the possibility of gain until the prospect’s resolution. The length of time one lives not knowing the risk’s or prospect’s resolution may affect one’s attitude to the risk or prospect and to receiving information about the risk’s or prospect’s resolution. Epstein (2008) presents a representation theorem that incorporates such features in a utility representation of preferences among options.
70 Rational Responses to Risks to this exposure because it is ignorance of a practically significant matter. Rationality requires an intrinsic aversion to an option’s risk, taken as the option’s exposure to evidential chance; it requires aversion to ignorance of relevant features of an option’s outcome. Knowledge has intrinsic value, and a rational ideal agent has an intrinsic aversion to its absence, ignorance. Uncertainty is the opposite of certainty and conflicts with knowledge. Whereas knowledge is good in itself, uncertainty is bad in itself and follows from an act’s exposure to an evidential chance, that is, the act’s evidential risk. Intrinsic aversion to an act’s evidential risk is an obligation of rationality. A rational ideal agent is intrinsically averse to the exposure to evidential chance that, for example, a volatile investment brings. Considering just a priori implications, she prefers knowing to not knowing whether the investment succeeds. Financial advisors suggest, for an international wire transfer of funds, that currency conversion take place at the sending bank rather than at the receiving bank because the rate of conversion is then known at the start of the transfer instead of being unknown until the transfer concludes. They assume that a client is intrinsically averse to the risk of the exchange rate becoming worse during the transfer even if this risk is balanced by the prospect of the exchange rate becoming better during the transfer so that the expected change in the exchange rate equals zero. The advisors assume that the client, being reasonable, has an intrinsic preference for the exposure to chance that comes with dependence on the known current exchange-rate to the greater exposure to chance that comes with dependence on the unknown future exchange-rate. Rationality requires intrinsic aversion to variability in the utilities of an option’s possible outcomes, although it permits an extrinsic attraction to an option’s exposure to chance as a means to good events. Extrinsic attraction to an option’s risk, when rational, arises from a positive evaluation of the option’s combination of chances of bad events and chances of good events. A rational person is averse to an option’s risk, namely, the option’s exposure to chance, taking the risk by itself. However, benefits sometimes offset risks in a person’s overall evaluation of an option. The overall evaluation depends on the option’s expected utility, all things considered, including the option’s risk. A patient may have an intrinsic aversion to the exposure to chance that an operation creates but also have an extrinsic desire to undergo the operation, and the exposure to chance, for the sake of health.
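The financial advisors' point can be put in miniature. In the sketch below the amounts and the even-odds rate movement are stipulated; the two conversion methods have equal expected outcomes, and only deferred conversion carries exposure to evidential chance, summarized here by the variance of the possible outcomes.

```python
def expectation(lottery):
    return sum(p * x for p, x in lottery)

def variance(lottery):
    m = expectation(lottery)
    return sum(p * (x - m) ** 2 for p, x in lottery)

amount = 1000.0
rate_now = 1.25  # known rate if converted at the sending bank

convert_now = [(1.0, amount * rate_now)]
convert_later = [(0.5, amount * (rate_now + 0.25)),   # rate improves
                 (0.5, amount * (rate_now - 0.25))]   # rate worsens

print(expectation(convert_now) == expectation(convert_later))  # True
print(variance(convert_now), variance(convert_later))          # 0.0 62500.0
# Equal expected outcomes; an intrinsic aversion to exposure to evidential
# chance favors converting at the sending bank.
```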
Rational Attitudes toward Risks 71 Although rationality requires an intrinsic aversion to an option’s risk, rationality is permissive about the strength of the aversion. The strength of the aversion may vary from one rational person to another, within some limits. According to one limit, intrinsic aversion to variability in the utilities of an option’s possible outcomes should never be so great that removing it by making the worst possible outcome certain increases the option’s utility. According to another limit that Section 3.5 explains, an intrinsic aversion to an option’s risk should be proportional to the size of the risk. An intrinsic aversion to ignorance may ground an extrinsic attraction to gathering evidence concerning events that matter, for example, evidence concerning the consequences of an investment. Information may reduce the evidential risk an option generates, that is, the risk from not knowing the option’s outcome at the time of adopting the option. When in a decision problem accessible information may reduce an option’s evidential risk, rationality may require gathering information before reaching a decision. Because rationality requires an aversion to the risk, it requires steps to reduce the risk, other things being equal. When the cost of gathering information is less than the gain from reducing the risk, rationality requires gathering the information. The expected utility of an option that maximizes expected utility improves as information increases, as Good (1967) shows.6 The improvement comes partly from knowing more about the states of the world that settle the outcomes of options and partly from a reduction of the evidential risks that the options involve. Additional information may make probabilities of an option’s possible outcomes more reliable and robust, which reduces the option’s evidential risk and so increases the utilities of an option’s possible outcomes, thereby increasing the option’s expected utility. Acquiring more experience to ground utility assignments also makes utilities more robust, thereby reducing exposure to chance and increasing the expected utility of the option that maximizes expected utility. Rationality requires reasonable steps, prior to a decision, to reduce the exposure to chance of the option adopted because this increases its expected utility. Some couples expecting a baby do not want to know their baby’s sex before its birth. For them, some ignorance is good. They will not trade the evidential chance that the baby is a boy and the evidential chance that it is a girl for knowledge of the baby’s sex. Wanting to maintain epistemic possibilities, if justified, is an extrinsic desire—perhaps, a desire to enhance the significance 6 Buchak (2012) advances some caveats concerning applications of Good’s result.
72 Rational Responses to Risks of the baby’s birth with the good of learning the baby’s sex at birth. Ignorance is not good in itself, and the type of ignorance that creates an option’s risk in the sense of its exposure to evidential chance, is not good in itself, although in some cases ignorance may be good as a means to some end. A rational person may enjoy suspense and so enjoy not knowing the outcome of an option. Perhaps, some people enjoy for this reason buying lottery tickets. They would rather have the ticket than an amount of money equal to the ticket’s expected return. However, if they enjoy the suspense that an option such as buying a lottery ticket brings, then that good consequence raises the utility of the option by raising the utility of each possible outcome of the option. The variability of the utilities of the possible outcomes is by itself an object of aversion but is attractive as a means to a good. An agent may rationally pay a premium to undergo an option’s risk as a means to some goal, but this payment, justified all things considered, does not show that rationality permits an attraction to the option’s risk taken by itself. The permitted attraction to the option’s risk is extrinsic and may coexist with an intrinsic aversion to the option’s risk. This analysis of attraction to lottery tickets assumes my account of an option’s outcome (formulated to ground justification of the expected-utility principle). According to the account, an option’s outcome includes every event that would occur if the option were realized, in particular, the option itself and all its consequences. Because buying a lottery ticket generates suspense, the suspense is part of the purchase’s outcome. The outcome of the lottery may be a prize for the holder of the winning ticket, and not suspense, but the outcome of purchasing a lottery ticket includes suspense for the buyer. As noted, an option’s exposure to chance comprehends chances of good events that the option creates. Consider an option that offers various possible gains and no loss. Although rationality requires attraction to a chance of a good event, considered by itself, the option brings variation in possible outcomes, and rationality requires aversion to the variation it brings. A rational agent’s intrinsic aversion to chance includes intrinsic aversion to a good event’s depending on chance. Aversion to chance lowers the expected utility of an option that offers just a chance of a monetary gain. It makes the price an agent is willing to pay for the chance of a monetary gain less than the chance’s expected monetary value. Because a rational agent’s intrinsic aversion to an option’s exposure to chance lowers utilities of the option’s possible outcomes, the option’s expected utility is less than the option’s expected utility ignoring the option’s exposure to chance. Hence a rational agent may
Rational Attitudes toward Risks 73 prefer to the option a sure thing with a utility equal to the option’s expected utility. The intrinsic utility of an option’s risk equals, but is not defined as, the difference between the option’s expected utility and the option’s expected utility ignoring the option’s risk. An agent’s risk premium for an option indicates the strength of an agent’s intrinsic aversion to the option’s risk. The agent is willing to pay the premium to trade the option for a sure thing with a utility equal to the option’s expected utility ignoring its risk. The utility of the premium is the difference between the option’s expected utility and the option’s expected utility ignoring risk. This utility equals the intrinsic utility of the option’s risk, given a coordination of scales for utility and for intrinsic utility. In a typical agent, an intrinsic aversion to an option’s risk is an intrinsic aversion to ignorance of the option’s outcome but need not derive from an intrinsic desire to know the option’s outcome. The intrinsic aversion may arise because of the features of not knowing the option’s outcome rather than because of an intrinsic desire to know the option’s outcome. I generally treat rational ideal agents who have a basic, underived intrinsic aversion to an option’s risk because, as Chapter 4 explains, a useful principle of additivity governs the intrinsic utilities of an agent’s basic intrinsic attitudes.
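A small numerical sketch, with invented utilities, exhibits the relations just stated among an option's utility, its expected utility ignoring its risk, the intrinsic utility of its risk, and the risk premium.

```python
expected_utility_ignoring_risk = 5.0   # mean of the possible outcomes' utilities
intrinsic_utility_of_risk = -0.75      # aversion to the exposure to chance

option_utility = expected_utility_ignoring_risk + intrinsic_utility_of_risk
premium_utility = expected_utility_ignoring_risk - option_utility

print(option_utility)   # 4.25
print(premium_utility)  # 0.75: equals -intrinsic_utility_of_risk
# The agent would trade the option for a sure thing with utility 4.25, or
# pay up to 0.75 units of utility to replace the gamble with a sure thing
# whose utility equals the option's expected utility ignoring its risk.
```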
3.4. Consistency Rationality imposes consistency constraints on the degrees of belief and degrees of desire that constitute, respectively, probability and utility assignments. It also imposes consistency constraints on attitudes to risks. Consistency is a simple structural relation among attitudes to risks. The next chapter considers structural relations more complex than consistency. Theorists differ about what consistency requires of attitudes, because consistency, or coherence as it is sometimes called, may be made precise in various ways. It does not matter whether the consistency requirements I advance are requirements of consistency in some precise sense, but only whether they are requirements of rationality. Risks have sizes that depend on probabilities and utilities. The size of a risk may settle the intensity of a person’s intrinsic aversion to the risk. New information may lower an evidential risk by changing an evidential probability it involves, and so may decrease an intrinsic aversion to the risk. A physician’s assurance that a treatment is safe lowers the patient’s evidential risk of side
74 Rational Responses to Risks effects from the treatment and so decreases the patient’s intrinsic aversion to the treatment’s risk. New information does not give an agent reasons to change an intrinsic aversion to a risk of a certain size, but may affect an intrinsic aversion to a particular evidential risk if it bears on the risk’s size. Although the size of an option’s risk depends on the agent’s intrinsic attitudes to the option’s possible outcomes, an agent’s intrinsic aversion to an option’s risk need not derive from other intrinsic aversions. I generally assume that an agent has a basic intrinsic aversion to a risk and is indifferent to features of a risk besides its size. I adopt these assumptions for both a risk in the sense of a chance of a bad event and for a risk in the sense of an exposure to chance. The first consistency requirement is to have the same attitude to equivalent risks. Its application depends on a specification of the equivalence relation between risks. For an agent who is indifferent to features of risks beside their sizes, and is aware of their sizes, I take risks to be equivalent if and only if they have the same size. An intrinsic aversion to a risk is inconsistent with an intrinsic aversion of a different intensity to another risk of the same size. For an agent who is averse to death but indifferent to the manner of death, it is irrational to have an aversion to the risk of being struck dead by lightning greater than an aversion to the risk of dying in a car accident, if the agent knows that being struck dead by lightning is the lesser risk because it is the rarer form of death. The literature proposes various measures of the size of a risk. For a risk in the sense of a chance of a bad event, I adopt the event’s probability-utility product as a measure of size, but for a risk in the sense of exposure to chance, I do not adopt any general measure of a risk’s size. Even without appeal to a general measure of risks in the sense of exposure to chance, we can still apply the first consistency requirement to cases in which two such risks clearly have the same size. Whether an attitude to a risk is intrinsic or extrinsic (or both) depends on the proposition expressing the risk. If an event’s characterization expresses its badness, then a rational ideal agent has an intrinsic aversion to a chance of the event. The principle requiring an intrinsic aversion to a chance of a bad event assumes an expression of the event under which the agent recognizes its badness. Taking events to be propositional individuates them finely, but the same proposition may be expressed two ways, and the agent may recognize the badness of the proposition’s realization given only one of its expressions.
Rational Attitudes toward Risks 75 The equivalence relation to which consistency in attitudes attends takes account of an agent’s way of understanding the proposition that represents an event. A traveler may fear bad weather in St. Petersburg more than he fears bad weather in Leningrad although these weather events are the same. The traveler may not see that these events are the same, and so have the same size, because he does not know that St. Petersburg is the same city as Leningrad. He understands the same event in two ways. Rationality demands that an agent have the same attitude toward risks involving the same probability of the same event understood the same way. Except in Chapter 10 on relaxing idealizations, I put aside differences in ways of understanding the same risk by assuming that an agent’s deliberations, including calculations of options’ expected utilities, use fully understood sentential names of propositions. A second requirement of consistency governs the relationship between first-order and second-order attitudes. An agent should have an intrinsic desire to realize a desire. This requirement is sensitive to a desire’s propositional expression. Although a traveler knows he wants to visit St. Petersburg, he need not have an intrinsic desire to visit Leningrad. He is required to have an intrinsic desire to visit only a city characterized as a city he has a desire to visit. Similarly, an agent should have an intrinsic aversion to realizing an aversion characterized as such. When reviewing an option’s possible outcomes, an agent should have an intrinsic desire to realize those characterized as having positive utilities and an intrinsic aversion to realizing those characterized as having negative utilities, assuming that indifference is the zero point for the utility scale.
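Returning to the first consistency requirement, a sketch along these lines (the probabilities are invented) computes the sizes of the two risks of death; consistency then forbids the more intense aversion to the smaller risk.

```python
death_utility = -1000.0  # the same bad event however it comes about

def risk_size(probability, utility):
    return probability * utility  # the probability-utility product

lightning = risk_size(1e-7, death_utility)     # the rarer form of death
car_accident = risk_size(1e-4, death_utility)

print(abs(lightning) < abs(car_accident))  # True: lightning is the lesser risk
# For an agent indifferent to the manner of death, consistency forbids an
# aversion to the lightning risk more intense than the aversion to the
# car-accident risk.
```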
3.5. Proportionality Attitudes toward risks that are appropriately aversions may nonetheless be excessive. For example, a person may have, and recognize that he has, an excessive, irrational aversion to the risk of crashes during air travel and, consequently, may enroll in a program to reduce fear of flying. A principle of proportionality requires that aversions to risks be neither excessive nor insufficient. For an agent who is indifferent to features of risks besides their sizes, proportionality entails the same attitude to risks of the same size and so consistency in a sense that Section 3.4 explicates.
76 Rational Responses to Risks An intrinsic aversion to a risk of a bad event should grow with an increase in the chance of the bad event and should also grow with an increase in the event’s badness. A principle of proportionality states that an intrinsic aversion to a chance of a bad event, given a constant badness of the event, should be proportional to the chance of the bad event, and should be proportional to the badness of the event, given a constant chance of the event. Putting the two requirements together, a principle of proportionality for a risk in the sense of an evidential chance of a bad event requires an intrinsic aversion to the risk that is proportional to the product of the event’s evidential probability and its utility. Taking a risk’s severity, or size, as its probability-utility product, as in Hansson (2007) and Cranor (2007), the principle maintains that intrinsic aversion to the risk should be proportional to the risk’s size. Hence, an intrinsic aversion to the risk of a medication’s side effect should increase in proportion to the product of the side effect’s probability and gravity. The principle, for continuity and simplicity, extends the ordinary sense of a risk to count as a risk certainty of a bad event’s occurrence and also certainty of its nonoccurrence. If the scale for intrinsic utility uses as a unit the agent’s intrinsic aversion to some bad possible world, so that the world has an intrinsic utility of negative one, then the intrinsic utility of a risk of the world equals the risk’s size. The principle of proportionality then makes the intrinsic utility of the chance that an option will have some particular bad possible outcome equal to the probability-utility product for the possible outcome. The principle of proportionality assumes a constant of proportionality that together with the size of a risk yields the intensity of an aversion to the risk. It assumes the constant exists, not trivially for a single risk, but for a set of risks. Support for the principle arises from a requirement that the intensity of an intrinsic aversion to a risk be fitting given the size of the risk. Fitting intrinsic attitudes to risks, taken one by one, yields intrinsic utilities that are proportional to the probability-utility products for the risks. The requirement in each case that the intrinsic attitude be fitting yields the principle of proportionality for the set of risks. Stefánsson and Bradley (2015, 2019) propose evaluating a risk that is a chance of a bad event using the context of the risk as well as the bad event’s probability and utility. In a decision problem, they reject proportionality, in particular, linear increases in the intrinsic utility of the chance of a possible outcome of an option as the chance increases.7 They hold that chances for a 7 They speak of the value of a chance of a possible outcome, but I take them to mean what I call the chance’s intrinsic utility.
Rational Attitudes toward Risks 77 possible outcome may have diminishing marginal intrinsic utility. Rejecting proportionality facilitates representation of preferences among options, because it allows a representation to use, besides probabilities and utilities of possible outcomes, an independent assignment of intrinsic utilities to chances. However, explaining the rationality of evaluations of options, in a way that respects the additivity of the intrinsic utilities of chances given any partition of possible outcomes, requires intrinsic utilities that respect the principle of proportionality. For example, someone with linear utility for money but diminishing marginal intrinsic utility for chances of possible outcomes may, irrationally, prefer not to pay $0.50 for a chance of $1 if heads and prefer not to pay $0.50 for a chance of $1 if tails, although the chances concern the outcome of the same coin toss and together are sure to yield $1. An objection in the same vein observes that if an agent has diminishing marginal intrinsic utility for chances of possible outcomes, then applying the principle of like attitudes to like risks may yield incoherent attitudes. If an agent's intrinsic utility for a 50% chance of an event of unit utility is less than 0.5, then the agent's intrinsic utility for another equal chance is also less than 0.5, if the agent is consistent. However, on a particular coin toss, a chance for a unit of utility if heads combined with a chance for a unit of utility if tails yields a unit of utility for sure. Given that the agent's intrinsic utility for each chance is less than 0.5, the agent's utility for the combination is less than a unit, which is irrational given certainty that the combination has a unit of utility. Different principles of proportionality govern different types of risk. A principle of proportionality also governs an agent's attitude to an option's risk in the sense of the option's exposure to chance, assuming that the agent is indifferent to features of risks besides their sizes. In cases in which only variance affects an option's risk, the larger the variance, the larger the risk, and the greater should be an agent's intrinsic aversion to the risk. Strengthening this principle of monotonicity yields a principle of proportionality requiring that an agent's intrinsic aversion to an option's risk be proportional to its variance. In some cases, the size of a risk is imprecise because the risk involves an imprecise probability or an imprecise utility. Suppose that a set of sizes represents the risk. Then the principle of proportionality yields a set of intrinsic utilities to represent a rational intrinsic attitude to the risk. In this way, proportionality governs attitudes to imprecise risks as well as attitudes to precise risks.
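The coin-toss objection can be checked numerically. The sketch below contrasts the proportional assignment with one non-proportional assignment that values a 50% chance of a unit at less than 0.5; the particular curve is my choice for illustration, and any non-proportional assignment yields an analogous incoherence.

```python
def iu_proportional(p, u):
    return p * u            # the principle of proportionality

def iu_nonlinear(p, u):
    return (p ** 2) * u     # an illustrative non-proportional assignment

u_unit = 1.0  # linear utility for money; $1 has unit utility

for iu in (iu_proportional, iu_nonlinear):
    heads = iu(0.5, u_unit)  # $1 if heads
    tails = iu(0.5, u_unit)  # $1 if tails, on the same toss
    print(iu.__name__, heads, heads + tails)

# iu_proportional 0.5 1.0 -- the chances sum to the sure dollar's utility
# iu_nonlinear 0.25 0.5   -- the agent declines to pay $0.50 for either
# chance, yet together the chances are certain to yield $1; valuing the
# pair at 0.5 is incoherent, which proportionality avoids.
```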
78 Rational Responses to Risks
3.6. Risk and Prospects The bad events that define risks are losses, and the good events that define prospects are gains. The status quo separates gains and losses and so separates risks and prospects. Suppose that an agent is indifferent to the status quo, desires gains, and is averse to losses. May intrinsic aversion to exposure to chance treat differently risks and prospects? Rationality requires intrinsic aversion to an option's exposure to (evidential) chance but permits various levels of intrinsic aversion to the option's exposure to chance. If one option offers chances of bad events, another option offers chances of good events, and the options' exposures to chance are equal in magnitude, may the levels of intrinsic aversion to the options' exposures to chance differ? Taking utility as strength of desire makes sense of an agent's utility level. A utility level may exist below which life is not worth living. An agent's attitude to an option's exposure to chance may involve fear of falling below that level. If an agent is just a little above the level, the agent may be very averse to falling further. Losses are bitter—more bitter than comparable gains are sweet. Losses of resources produce greater utility changes than do gains of the same magnitude because of the diminishing marginal utility of resources. Diminishing marginal utility produces a concave utility curve for resources, and because of the concavity, a loss produces a greater utility change than a comparable gain. Because rationality permits diminishing marginal utility, it permits aversion to loss being greater than attraction to gain. Also, the intensity of an agent's attitude toward a change may depend on whether the change is a gain or a loss with respect to the agent's situation. If a person loses track of his salary, his attitude toward an amount of salary may depend on whether he learns the amount arises from a raise or a reduction. A rational agent may care about the direction of change in utility level so that his utility assignment to an option's comprehensive outcome depends not only on the level of resources the outcome yields but also on whether the level is a gain or loss. If an option's comprehensive outcome includes movement from the utility of the status quo to the utility of the outcome, then an equilibrium involving the outcome's utility and the outcome's specification of the movement generates the outcome's utility. Despite these concessions, a rational agent's utility assignment to a comprehensive outcome attends exclusively to the outcome's realization of basic intrinsic attitudes, including attitudes to gaining and to losing, so that the assignment may put aside the agent's current situation after using it to settle
Rational Attitudes toward Risks 79 Value
Resource
Figure 3.1 An S-shaped Value Curve
whether the outcome yields a gain or a loss. When outcomes are comprehensive, and not just monetary, the intensity of a rational ideal agent’s attitude toward a change in utility level depends on only the magnitude of the change in utility level. The intensity is the same for a unit gain as for a unit loss. Rationality treats utility gains and losses symmetrically in this sense when utilities are comprehensive. Utility losses do not loom larger than utility gains. Kahneman and Tversky (1979) and Wakker (2010) present prospect theory as a descriptive account of human behavior.8 According to prospect theory, people are more averse to losses than they are attracted to gains of equal magnitude. The literature calls this loss aversion. Losses loom larger than gains so that the value function for losses and gains has an S-shape with inflection about a reference point separating gains from losses, as in Figure 3.1. The reference point depends on the framing of a decision problem and not just the status quo. Taking concavity of a value function as aversion to risk and convexity as attraction to risk, agents are averse to risk when choosing among prospects for gains and are attracted to risk when choosing among risks of losses. May this loss aversion be rational? I interpret the value function as a utility function and consider the rationality of loss aversion as expressed by an S-shaped utility curve for a resource. I suppose that the S-shaped utility curve for a resource represents preferences between options and that its shape arises from attitudes to risk rather than 8 Barberis (2013) reviews applications of prospect theory, especially in the fields of finance and insurance.
80 Rational Responses to Risks diminishing marginal utility. Under these assumptions, loss aversion, if rational, derives its justification from attitudes to risk. An agent's risk (chance) premium for a gamble yielding chances of various amounts of a resource equals the amount of the resource with a utility equal to the difference between the utility of the gamble all things considered and the expected utility of the gamble considering only amounts of the resource. The premium depends on the shape of the agent's utility curve for the possible amounts of the resource. The premium is positive when the curve is concave and negative when the curve is convex. Its being positive indicates a willingness to pay to avoid risk, and its being negative indicates a willingness to pay to take on risk. Suppose that one option yields a probability distribution of possible gains and that another option yields an identical probability distribution of possible losses, with corresponding gains and losses being of the same magnitude. If the resource is money, the first option may be a 50% chance of gaining a dollar, and the second option may be a 50% chance of losing a dollar. According to an illustrative S-shaped curve, the risk premium for the first option may be $0.10, and the risk premium for the second option may be –$0.20. The curve represents an agent who would exchange the chance of gain for $0.40, that is, the expected gain of $0.50 minus $0.10; and who would pay $0.70 to avoid the chance of loss, that is, the expected loss of $0.50 augmented by a loss of $0.20. Does rationality permit an agent's risk premium to differ for the two options? In particular, may the risk premium for the option with only possible gains be positive, while the risk premium for the mirror-image option with only possible losses is negative? Such a pattern of risk premiums is irrational for an ideal agent, given that it arises from a difference in intrinsic attitudes to the options' risks in the sense of their exposures to chance. First, as Section 3.3 explains, for every option, including an option involving only losses, intrinsic attraction to the option's exposure to chance is irrational. Second, the reference point separating gains and losses is an arbitrary matter. The 50% chance of losing a dollar becomes a 50% chance of gaining a dollar if the reference point shifts down a dollar. An ideal agent, who sees the arbitrariness of the separation of gains and losses, should not be risk-averse concerning gains and risk-loving concerning losses. For consistency, an agent's attitude to the same outcome should not differ depending on whether it is presented as a gain or as a loss. Consider Tversky and Kahneman's (1981: 453) case of the rare Asian disease, which threatens to kill 600 people. Compare two options. Option A
Rational Attitudes toward Risks 81 saves 200, whereas option B saves 600 with a probability of 1/3 and none with a probability of 2/3. People favor option A. Again, compare two options. Option C results in 400 dying, whereas option D results in none dying with a probability of 1/3 and in 600 dying with a probability of 2/3. People favor option D. However, option A is equivalent to option C, and option B is equivalent to option D. It is inconsistent to favor A over B and also to favor D over C. A change in the framing of options going from the presentation of options A and B to the presentation of options C and D changes the reference point from 600 dying to 600 being saved. The change in reference point affects a person’s S-shaped utility curve and so the person’s preferences, according to mechanisms that Kahneman and Tversky (1979: 286–88) explain. However, because the outcomes of options are the same whichever reference point is adopted, preferences among options should not change for an ideal agent as the reference point changes.
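A short computation reproduces the reversal. The S-shaped value function below uses parameter values commonly cited in the empirical literature on prospect theory (an exponent of 0.88 and a loss-aversion coefficient of 2.25); the parameters are illustrative, not part of the argument.

```python
LAMBDA, ALPHA = 2.25, 0.88  # commonly cited prospect-theory estimates

def value(x):
    # S-shaped value function: concave for gains, convex for losses.
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def prospect_value(lottery):
    return sum(p * value(x) for p, x in lottery)

# Frame 1, reference point: all 600 die; outcomes coded as lives saved.
A = [(1.0, 200)]
B = [(1/3, 600), (2/3, 0)]
# Frame 2, reference point: none die; outcomes coded as deaths.
C = [(1.0, -400)]
D = [(1/3, 0), (2/3, -600)]

print(prospect_value(A) > prospect_value(B))  # True: A favored over B
print(prospect_value(D) > prospect_value(C))  # True: D favored over C
# Yet A and C describe the same outcome, as do B and D; only the
# reference point differs between the frames.
```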
3.7. Past Risks The previous sections' principles of consistency and proportionality govern attitudes held at the same time. Other principles govern attitudes as time passes. Section 2.2 observes that new information does not provide reasons to change intrinsic attitudes. However, an agent's aversion to an evidential risk generally disappears after new evidence resolves the risk. Is this a sign that the aversion is extrinsic rather than intrinsic? Does rationality really require an intrinsic aversion to a risk? This section presents an objection, along these lines, to rationality's requiring an intrinsic aversion to a risk. The objection and my response, for definiteness, take risk in the sense of an option's exposure to chance. Similar points apply to risk in the sense of a chance of a bad event. Suppose that an agent compares an option that yields $10 for sure to an option that yields $10 if a coin toss yields heads, and −$10 if the coin toss yields tails. The first option is obviously better than the second, but compare the outcome of the first option with the outcome of the second option given heads. The monetary outcomes are the same—a gain of $10 in each case. An agent, intrinsically averse to an option's risk, prefers the outcome of the first option to the outcome of the second option given heads. She prefers an outcome in which she runs no risk and gains $10 to an outcome in which she runs a risk and thereby gains $10. However, an objection claims, a rational
82 Rational Responses to Risks agent likes gaining $10 just as much whether or not risk precedes the gain. It is irrational to care whether a gain issues from an option’s risk. An option’s risk does not matter if it has a successful resolution, the objection claims. Framing the objection more generally, suppose that for an ideal agent, only wealth, the risk of losing it, and the prospect of gaining it drive her utility assignment. She compares two worlds where her wealth is the same, but in one her wealth arrives with certainty, whereas in the other her wealth arrives after successfully negotiating some risks. She prefers the first world because of an intrinsic aversion to risk in the sense of exposure to chance. Is the preference rational? If she has wealth, should she not be indifferent to the risks that produced it? Should she not treat the risks just as means to wealth? And are not means irrelevant after attaining the end? A reply to this objection explicates the utility of a world that might be an option’s outcome. It notes that an agent’s evaluation of a possible outcome depends on the agent’s perspective. Parfit (1984: Sec. 64) observes that a person typically cares more about a pain to come than an equally intense pain already past. Failing to care about a pain after it has passed does not show that the conative attitude to it was only extrinsic. The attitude may cease because of a change in perspective. While the aversion to the pain existed, it was intrinsic. An agent’s utility assignment to a world is relative to a time so that if the agent’s desires and aversions change, the utility assignment may change. The agent’s utility assignment is also relative to the agent’s perspective in the world, including the agent’s spatiotemporal position and knowledge, so that the agent may evaluate differently a pain according to whether it is past or future and may similarly evaluate differently a risk according to whether it is past or future. For a typical agent, the agent’s temporal relation to a risk rationally affects attitudes toward it. A risk past matters less than an equal risk to come. That an agent does not care about a risk past is not a sign that his attitude to the risk is extrinsic rather than intrinsic, because a change in the agent’s perspective explains the change in the agent’s attitude. An agent’s utility assignment to a world depends on the agent’s information at his position in the world. An option’s consequences include the evidential risk the option carries, and the option’s evaluation includes an evaluation of the evidential risk. Suppose that an agent has an intrinsic aversion to an option’s risk in the sense of its exposure to chance. Gaining $10 by chance and gaining $10 with certainty have the same monetary consequences, but if gaining $10 by chance is in the future, a consequence of taking the chance
Rational Attitudes toward Risks 83 includes the unattractive epistemic possibility of not gaining $10. This epistemic possibility is a reason to prefer gaining $10 with certainty. In a decision problem, a rational agent who cares about risk evaluates an option’s possible outcomes from the agent’s perspective at the time of the decision problem. The comprehensive utility of a world that is an option’s possible outcome appraises chances that arise at times in the world from the agent’s perspective in the world. In expected-utility calculations for a risky option, a possible outcome’s utility is relative to a time when the option’s risk is to come, and not relative to a later time when the option’s risk is past. In a decision problem, an agent’s evaluation of an option’s risk does not assume knowledge of the risk’s result. A world that is a possible outcome specifies the resolution of a risk the agent runs in the world, but the utility of the world does not assume that in the world the agent knows the risk’s resolution at the time of undertaking the risk. The proposition that represents an option’s possible outcome depends on the agent’s perspective at the time of the option because the perspective settles which desires and aversions the option’s possible outcome realizes. If the agent is averse to running risks, then whether an option’s possible outcome realizes the aversion depends on the agent’s perspective in a world in which he adopts the option. The outcome realizes the agent’s aversion just in case from the agent’s perspective the option’s risk is unresolved. A world’s propositional representation specifies whether the world realizes an aversion to a risk, but whether the world realizes the aversion depends on the agent’s perspective in the world. Because an agent’s change in perspective in a world justifies a change in evaluation of the world, noticing that a rational agent may not care about risks past does not show that a rational aversion to a risk is extrinsic rather than intrinsic. As Section 3.3 claims, rationality requires an intrinsic aversion to an option’s risk.
3.8. Summary A risk, taken as a chance of a bad event, may be a means to a good event in the sense that the risk is a consequence of an act that produces the good event. An act’s exposure to chance may similarly be a means to a good event. Rationality permits an extrinsic attraction to a risk as a means to a good event but requires an intrinsic aversion to a risk. It imposes this requirement both for a risk of a bad event and for an option’s risk in the sense of the option’s
84 Rational Responses to Risks exposure to chance, but only for evidential types of these risks. Physical risks are exempt from the requirement because even a rational ideal agent may be ignorant of physical risks and so not have attitudes to them. For an agent who is indifferent to features of risks besides their sizes, rationality requires equal intrinsic aversions to risks of the same size, and having an intrinsic aversion to a risk in proportion to its size, given a suitable account of the sizes of risks.
PART II
ACTS AFFECTING RISKS Many acts are gambles in the sense of having uncertain outcomes. For example, starting a new job may bring surprises. Evaluations of acts prior to their performance set standards for rational decisions. This part justifies evaluations of acts that use expected utilities and that divide an act's consequences into its risk and its other consequences. Evaluations of acts ground evaluations of combinations of acts differently according to whether the combinations are simultaneous or sequential.
4 Evaluation of an Act This chapter formulates constraints on the strengths of desire that utilities represent. The constraints govern not just an agent’s attitude to a single risk, as in the previous chapter, but also an agent’s attitude to a combination of risks that an act produces. Later, Chapter 6 formulates constraints on an agent’s attitude to a combination of risks that multiple acts produce. A version of the expected-utility principle adds independent intrinsic utilities of risks and prospects to obtain an option’s utility, and a version of the mean-risk principle obtains an option’s utility by adding (1) the option’s utility ignoring the option’s risk in the sense of its exposure to chance and (2) the intrinsic utility of the option’s risk.1 Sections 4.4 and 4.7, respectively, justify these two principles, and other sections lay foundations for the justifications. The foundations introduce principles of independence, such as compositionality and separability, for evaluation of composites using evaluation of their components.
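As a preview, with invented numbers, the two evaluations can be written side by side; the chapter's later sections justify each and show why they agree.

```python
def expected_utility(outcomes):
    # outcomes: (probability, comprehensive utility) pairs forming a partition.
    return sum(p * u for p, u in outcomes)

def mean_risk_utility(utility_ignoring_risk, intrinsic_utility_of_risk):
    return utility_ignoring_risk + intrinsic_utility_of_risk

# An option whose comprehensive outcome utilities already include its risk:
# outcomes of 10 and 0 ignoring risk, each lowered by 0.75 for the exposure.
option = [(0.5, 9.25), (0.5, -0.75)]
print(expected_utility(option))       # 4.25

# The same evaluation split mean-risk style:
print(mean_risk_utility(5.0, -0.75))  # 4.25: the two routes agree
```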
4.1. An Act’s Combination of Risks and Prospects An expected-utility principle that Section 4.4 justifies evaluates an agent’s act by considering its possible comprehensive outcomes. Propositions specifying, for the agent, realization or non-realization of each basic intrinsic attitude represent the act’s possible outcomes. For the agent and a partition of the act’s possible outcomes, the act’s expected utility is the sum of the probability-utility products for the act’s possible outcomes. Risks interact when they occur in combination, and their interaction affects rational attitudes to their combination. For example, two risks may neutralize each other so that their combination produces no risk. A risk of losing a dollar given heads and a risk of losing a dollar given tails combine to form a sure loss of a dollar when the same coin toss resolves the two risks. The 1 Weirich (1987) provides background on the mean-risk principle.
88 Rational Responses to Risks combination’s components are risks in the sense of chances of bad events, but the combination is not a risk in the ordinary sense of a chance of a bad event because the bad event comes for sure. An act often produces risks in the sense of chances of bad events together with prospects in the sense of chances of good events. The risks and prospects interact, and their interaction creates the act’s exposure to chance. An act’s expected utility reviews the act’s possible comprehensive outcomes, which include the act’s exposure to chance. An act’s expected utility evaluates an act by separating its possible outcomes. Another evaluation of an act separates the act’s risk, taken as its exposure to chance, and the act’s consequences that do not involve its risk. An act’s utility equals, as Section 4.7 argues, the expected utility of the act’s consequences independent of its risk plus the intrinsic utility of the act’s risk.
4.2. Compositionality Evaluation of an act in a decision problem may use relations involving parts of the act’s realization according to a representation of the act’s realization. Given a division of an act’s realization into parts and given a type of utility assignment to each part, the act’s utility is compositional if and only if it is a function of the utilities assigned to its parts. That is, if an act a’s parts are a1, a2, . . . , an and U′ is a type of utility assignment for them, then U(a) is compositional if and only if for some function f, U(a) = f(U′(a1), U′(a2), . . . , U′(an)). Suppose that an act’s realization divides into chances for possible outcomes forming a partition and that intrinsic utilities attach to these chances. Then its utility is compositional if its utility is a function of the intrinsic utilities of the chances. Section 4.4 argues that an act’s utility is compositional in this way.2 Another principle of compositionality holds that an act’s utility is a function of (1) the act’s utility ignoring the act’s risk, in the sense of its exposure to chance, and (2) the intrinsic utility of the act’s risk. That is, if a is an act, r is its risk, and Ur–(a) is the act’s utility ignoring its risk, then for some function f, U(a) = f(Ur–(a), IU(r)). Accordingly, changes in the act that do not affect
2 Weirich (2015c) argues for the compositionality of intrinsic utility in general.
Evaluation of an Act 89 the function's arguments—namely, the act's utility ignoring its risk and the intrinsic utility of the act's risk—do not affect the act's utility. This principle of compositionality assumes quantitative appraisals of an act, the act's risk, and the act ignoring its risk. A generalization for cases without quantitative appraisals asserts that an agent is indifferent between two acts if she is indifferent between them ignoring their risks and is indifferent between their risks.3 That is, for acts a and b, if a ~r– b and R(a) ~ R(b), then a ~ b. For components of composites, interchangeability of equivalents holds if and only if replacing a component of a composite with an equivalent component, in a way that leaves other components equivalent, produces an equivalent composite. That is, for a composite c = (c1, c2, . . . , ci, . . . , cn) if ci is replaced by cj to form c′ = (c1, c2, . . . , cj, . . . , cn), then if cj ~ ci and for all k ≠ i, j, (ck given c) ~ (ck given c′), then c ~ c′. In cases with utility assignments to components and composites, compositionality amounts to interchangeability of equivalents: exchanging a composite's component for one of the same utility, without changing the other components' utilities, does not change the utility of the composite.4 Compositionality entails interchangeability, but interchangeability does not entail compositionality, except in quantitative cases. In quantitative cases, to show compositionality, one may show interchangeability, which does not require identifying the function of composition. Assuming a division of an act's realization into its risk and its independent consequences, and given a basic intrinsic aversion to the act's risk and the resultant independence of the attitude to the act's risk and the attitude to the act ignoring its risk, interchangeability and so compositionality hold in quantitative cases. Section 4.7 elaborates this argument for compositionality in quantitative cases. The expected-utility principle implies that an act's utility is compositional given the act's division into chances for exclusive and exhaustive possible outcomes. The function of composition is addition. Accordingly, the utility of a bet is the utility of the chance of winning plus the utility of the chance of losing.
3 That an agent is not indifferent between two options does not entail that the agent prefers one to the other. 4 Interchange of equivalents, if it holds, applies iteratively, and so after a series of interchanges of equivalents, the initial composite and the final composite still have the same utility.
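A toy calculation, with invented utilities, illustrates interchangeability of equivalents for a bet when the function of composition is addition.

```python
def bet_utility(iu_chance_of_winning, iu_chance_of_losing):
    # The function of composition is addition.
    return iu_chance_of_winning + iu_chance_of_losing

iu_win = 0.5 * 4.0               # 50% chance of a prize with utility 4
iu_lose_book = 0.5 * -2.0        # 50% chance of losing a book with utility -2
iu_lose_equivalent = 0.5 * -2.0  # losing an equivalent book, the same utility

print(bet_utility(iu_win, iu_lose_book))        # 1.0
print(bet_utility(iu_win, iu_lose_equivalent))  # 1.0: exchanging a component
# for an equivalent one leaves the composite's utility unchanged.
```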
90 Rational Responses to Risks
4.3. Separability An evaluation of an act may review the act's consequences rather than the act's whole outcome, taken as a possible world. This is a common type of evaluation of limited scope. This section assumes comparison and evaluation of acts according to their consequences. It treats an act's consequences as a composite and considers evaluating the composite using evaluations of the components. When an act's consequences are uncertain, the act's evaluation may use its possible consequences and may divide its possible consequences into components, such as its risk and its independent consequences. An evaluation of its independent consequences may review the possibilities for them. Suppose that filling categories forms a composite of items. For example, a shopping basket of items in the categories milk, bread, and fish results from selecting skim milk, rye bread, and cod. Suppose that an agent has preferences among several composites. A category is separable from the other categories if and only if a preference-ranking of composites alike in the other categories is the same for any way of making the other categories alike. Milk is separable from bread and fish if, given any way of filling the categories of bread and fish, the preference-ranking of baskets different in milk is the same.5 Let C be a set of composites of n components ordered according to category, and for any c ∈ C let ci stand for c's ith component and let c–i stand for c without its ith component. Then the category of the ith component is separable from the other categories just in case for any a, b, c, d ∈ C if ai = ci, bi = di, a–i = b–i, and c–i = d–i, then a ≽ b if and only if c ≽ d. Separability holds for a range of composites and is a type of independence among components of the composites. In a decision problem, the relevant composites are acts, taken as composites of their consequences. A preference-ranking of acts compares the composites. The definition of separability uses a preference-ranking of acts given that they have certain consequences in common; that is, it uses a type of conditional preference among acts, namely, preference given those common consequences. Suppose that for the acts in a set, an act's chances for possible outcomes are each separable from the others. Then the preference-ranking of acts that are alike in all chances but one is the same however the acts are alike in all chances but this one. Also, suppose that for a set of acts, an act's risk is separable from
5 Broome (1991: Chap. 4) explains separability and theorems concerning it.
Evaluation of an Act 91 its independent consequences. Then the preference-ranking of two acts alike in their consequences independent of risk is the same however they are alike in their consequences independent of risk. This feature assumes that an act's risk may be conjoined with various sets of independent consequences. For example, a bet concerning bushels of wheat may present the same risk as another bet involving barrels of oil. To extend the range of composites alike in some component, the definition of separability may take risks to be the same if they are the same size, and it may take independent consequences to be the same if they have the same utility ignoring risk. Weak separability of the categories filled to make a composite holds if each category is separable from the other categories. Strong separability of the categories holds if every subset of categories is separable from the other categories, with a subset being separable from the other categories if the ranking of composites alike in the other categories is the same for every way of making them alike in the other categories. Given composites formed from only two categories, weak separability is equivalent to strong separability. An agent's comparison of acts may attend to selected consequences of the acts rather than all consequences of the acts. An agent may, for example, compare two acts considering only their risks. Such comparisons of limited scope ground an extension of the definition of separability. If two categories are exhaustive, the first is preference-separable from the second, according to a ranking of ways of filling the first category, if and only if the ranking of composites is the same as the ranking of their ways of filling the first category given any way of filling the second category. Expressing the definition formally, let C be a set of composites of two components ordered according to categories. Then the category of the ith component is preference-separable from the category of the other component just in case for any a, b ∈ C if a–i = b–i, then a ≽ b if and only if ai ≽ bi. Suppose that an agent compares acts' risks, that is, compares the acts ignoring all but their risks, to form intrinsic preferences among the acts' risks. Furthermore, suppose that the acts' preference-ranking, given that they share consequences independent of their risks, agrees with the intrinsic preference-ranking of their risks. In this case, according to intrinsic preferences, an act's risk is preference-separable from its independent consequences. Imagine that each category in a set filled to form a composite may have a null value. For categories of goods in a shopping basket—such as bread, milk, and fish—a category has its null value if it is left empty. Separability
92 Rational Responses to Risks and preference-separability are equivalent if preferences among ways of filling a target category are defined in terms of preferences among composites in which all categories have their null value, except for the target category. I adopt this definition and so assume the equivalence of separability and preference-separability. If two categories are exhaustive, the first is equivalence-separable from the second, according to a ranking of ways of filling the first category, if and only if the ranking of composites agrees with the ranking of their ways of filling the first category, given equivalent ways of filling the second category. Expressing the definition formally, let C be a set of composites of two ordered components. Then the category of the ith component is equivalence-separable from the category of the other component just in case for any a, b ∈ C such that a–i ~ b–i, a ≽ b if and only if ai ≽ bi. Suppose that an agent compares two acts ignoring their risks, that is, considering their consequences independent of their risks. The acts' consequences independent of their risks are equivalent if the agent is indifferent between one act's consequences independent of its risk and the other act's consequences independent of its risk. Then, assuming equivalence-separability of risk from consequences independent of risk, a comparison of acts agrees with an intrinsic comparison of their risks whenever the acts are equivalent in consequences independent of their risks. Equivalence-separability generalizes preference-separability by using equivalence rather than identity of acts' consequences independent of their risks. It follows from preference-separability if acts' consequences are taken to be the same if they are the same in the relevant type of utility. Defining utility as rational degree of desire, rather than as a function constructed to represent preferences, grounds types of separability that involve utilities of components of composites. A type of separability of one component from the other components of a composite, monotonic-separability, may hold given utility assignments to the components. According to it, increasing the utility of the component, while holding constant the utilities of the other components, increases the composite's utility. Defining it formally, let C be a set of composites of two ordered components. Then the category of the ith component is monotonically separable from the category of the other component just in case for any a, b ∈ C such that U(a–i) = U(b–i), U(a) ≥ U(b) if and only if U(ai) ≥ U(bi). This type of separability entails equivalence-separability.
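A toy model, of my own construction, shows the separability test at work for two categories; additive utilities with no interaction term guarantee that the ranking within one category is unaffected by how the other category is filled.

```python
u_risk = {"small risk": -1.0, "large risk": -3.0}   # ways of filling category 1
u_rest = {"wheat payoff": 5.0, "oil payoff": 2.0}   # ways of filling category 2

def u_composite(risk, rest):
    # Additive utilities: no interaction term between the categories.
    return u_risk[risk] + u_rest[rest]

def ranking_of_risks(rest):
    # Rank the ways of filling category 1, holding category 2 fixed.
    return sorted(u_risk, key=lambda r: -u_composite(r, rest))

rankings = {rest: ranking_of_risks(rest) for rest in u_rest}
print(rankings)  # the same ranking of risks for every way of filling category 2
print(len({tuple(r) for r in rankings.values()}) == 1)  # True: separable
```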
Evaluation of an Act 93 Next, suppose that an agent assigns utilities to acts ignoring their risks, and assigns intrinsic utilities to acts' risks. A generalization of separability asserts a relation involving these utilities. For a type of utility assignment, a category is utility-separable from the other categories if and only if the utility assignment to an item in the category is the same given every way of filling the other categories. Expressing the definition formally, let C be a set of composites of n components ordered according to category. Then the category of the ith component is utility-separable from the category of the other components just in case for any a, b ∈ C, if ai = bi, then U(ai given a–i) = U(bi given b–i). For example, if milk is utility-separable from bread and fish, then the utility of skim milk is the same given any kind of bread and any kind of fish. For an intrinsic-utility assignment, an act's risk is utility-separable from the act's other consequences if and only if the intrinsic utility of the act's risk is the same whatever are the independent consequences, provided that the act's risk and the independent consequences form a combination that some act produces. Table 4.1 summarizes how the resources for comparing composites and comparing their components generate selected separability relations among categories of the composites' components. The context-independence of a rational ideal agent's attitudes to an option's chances for possible outcomes grounds their separability in all this section's senses, as Section 4.4 shows. Also, the context-independence of a rational ideal agent's attitude toward an option's risk supports the separability, in all this section's senses, of the option's risk from the option's other consequences. For example, independence of attitudes and the resultant independence of preferences ground preference-separability. If a rational ideal agent's intrinsic preferences concerning options' risks are independent of context, the agent intrinsically prefers one option's risk to another option's risk, and the two options are the same in consequences independent of their risks;

Table 4.1 Resources for Relations among Categories

Resources                                        Relation
Comparisons of Composites                        Separability
Comparisons of Composites and of Components      Preference-Separability
Utilities for Composites and for Components      Utility-Separability
then the agent prefers the first option to the second option. Independence also grounds equivalence-separability. If a rational ideal agent's intrinsic preferences concerning risks are independent of context, the agent intrinsically prefers one option's risk to another option's risk, and the two options are equivalent in consequences independent of their risks; then the agent prefers the first option to the second option. Independence similarly supports utility-separability, as Section 4.7 explains. Accordingly, a risk an option produces has the same intrinsic utility even if another option with different consequences produces the risk. An option's risk belongs to a possible outcome of the option, so changing the chances of the other possible outcomes in a way that affects the option's risk changes the possible outcome. However, suppose that two bets offer the same chance of winning a dollar. Also, the first bet risks losing a book, and the second bet risks losing an equivalent book, so that their risks are the same. Given that for these bets the chance of winning is utility-separable, using intrinsic utility, from the chance of losing, the intrinsic utility of winning is the same given any compatible chance of losing. The expected-utility principle implies the utility-separability of chances for outcomes, using intrinsic utility, taking an option as a composite of chances for possible outcomes. The intrinsic utility of a chance of a possible outcome is the same whatever are the other compatible chances for possible outcomes.
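As a small illustration of the utility-separability definition above, here is a hypothetical check in the style of the milk-bread example; the items and the conditional intrinsic utilities are invented.

```python
from itertools import product

# Hypothetical conditional intrinsic utilities, IU(item given context).
# Utility-separability requires the assignment to an item in the target
# category to be constant across ways of filling the other category.
IU_given = {
    ("skim", "rye"): 0.4, ("skim", "wheat"): 0.4,
    ("whole", "rye"): 0.7, ("whole", "wheat"): 0.7,
}

def utility_separable(milks, breads) -> bool:
    # IU(a_i given a_-i) = IU(b_i given b_-i) whenever a_i = b_i.
    return all(
        IU_given[(m, b1)] == IU_given[(m, b2)]
        for m in milks
        for b1, b2 in product(breads, repeat=2)
    )

print(utility_separable(("skim", "whole"), ("rye", "wheat")))  # True
```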
4.4. The Expected-Utility Principle

The expected-utility principle asserts a type of additivity for utilities. It states that an option's utility equals the sum of the probability-utility products for its epistemically possible outcomes when they form a finite partition and so are mutually exclusive and jointly exhaustive. This section justifies the principle. The justification uses the independence of an agent's intrinsic attitudes toward chances of an option's possible outcomes. It starts by showing how to construe the expected-utility principle as summing intrinsic utilities of chances of possible outcomes. Then it shows that the independence of these intrinsic utilities grounds their additivity and that their additivity grounds the expected-utility principle.6

6 Bradley and Stefánsson (2017) argue against including a certain formulation of the expected-utility principle within their extension of Jeffrey's decision theory. Their arguments do not target this chapter's formulation of the expected-utility principle.
In a typical decision problem, an option has multiple possible outcomes, each a possible world or a set of possible worlds. Good possible outcomes are prospects, and bad possible outcomes are risks. A bet, for example, offers the prospect of winning and the risk of losing. According to causal decision theory, which I adopt, an option's utility equals the expected utility of its outcome, and this expected utility derives from the utilities of worlds that might result if the option were realized (rather than worlds that might result if the option is realized).7 This calculation of an option's expected utility does not use states of the world as a means of specifying the option's possible outcomes (and thereby prevents unwelcome state-dependent utilities of possible outcomes). States create categories that chances for possible outcomes may fill, and comparison of options using separability may compare ways of filling such a category while holding constant the way other categories are filled. Without states, options, taken as composites of chances for possible outcomes, exhibit a type of separability among categories that the chances fill by either their presence or absence (a null value). An option's expected utility comes from the option's possible comprehensive outcomes, which include the option's risk in the sense of its exposure to chance, granting that a rational ideal agent has an aversion to this risk so that it is a relevant consequence of the option. Efficient general methods of evaluating an option may trim possible outcomes to include only the option's realizations of basic intrinsic attitudes, that is, basic consequences that entail all other relevant consequences.8 For an agent, the intrinsic utility of a chance of a possible outcome represents the agent's intrinsic attitude to the chance, which considers just the a priori implications of the chance's existence. This intrinsic attitude derives from the agent's intrinsic attitude to the possible outcome, and so is not a basic intrinsic attitude. As Section 3.5 maintains, the intrinsic utility of the chance of the possible outcome equals the product of the probability and the comprehensive utility of the possible outcome, assuming a coordination of scales for intrinsic and comprehensive utility that makes a possible world have the same intrinsic and comprehensive utility.

7 In particular, I adopt the version of causal decision theory that Gibbard and Harper ([1978] 1981) formulate and that Joyce (1999) generalizes using probability images instead of probabilities of conditionals.
8 An option's utility equals the sum of the intrinsic utilities of its chances of realizing basic intrinsic attitudes, as Weirich (2015a: Chap. 2) argues, because the intrinsic utilities of realizing basic intrinsic attitudes are independent and additive.
A rational ideal agent has an intrinsic desire to realize an extrinsic desire characterized as such and similarly has an intrinsic aversion to realizing an extrinsic aversion characterized as such. To a possible outcome characterized as having a certain comprehensive utility, she assigns an intrinsic utility equal to the comprehensive utility. For each chance of a possible outcome, characterized by a probability and comprehensive utility, she assigns an intrinsic utility equal to the probability-utility product. Hence, an option's expected utility, a sum of the probability-utility products for possible outcomes, equals the sum of the intrinsic utilities of the chances for the option's possible outcomes. Using intrinsic utilities of chances, the expected-utility principle applied to a bet considers, not the chance of losing money, but instead an equivalent chance of realizing an (extrinsic) aversion. The principle's summation involves, not the intrinsic utility of the chance of losing money, but the intrinsic utility of the equivalent chance of realizing an aversion. A rational ideal agent applying the principle, and knowing her conative attitudes, replaces the chance of losing money with the equivalent chance of realizing an aversion and makes similar replacements for all chances of possible outcomes. Because she knows her probability and utility assignments, in applications of the expected-utility principle, she characterizes each chance of a possible outcome as a probability-utility product. An evaluation of an option begins with probability-utility products for exclusive and exhaustive possible outcomes. Then it represents these chances using just the probabilities and utilities they involve. So represented, the chances have intrinsic utilities equal to the probability-utility products, and the sum of their intrinsic utilities equals the option's expected utility. Suppose that a bet gains a dollar if heads comes up on a coin toss (H) and loses a dollar if heads does not come up (~H). Using P to stand for probability and U to stand for utility, the bet's expected utility, computed using possible monetary gains and losses, is:

P(H)U($1) + P(~H)U(−$1)

Suppose that gaining a dollar increases utility by a unit, and losing a dollar decreases utility by a unit. Then, letting IU stand for intrinsic utility, the bet's expected utility, computed using intrinsic utilities of chances, is:

IU(a probability of 0.5 of gaining a unit of utility) + IU(a probability of 0.5 of losing a unit of utility)
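A few lines of Python, mine rather than the book's, check the example's arithmetic by computing the bet's expected utility both ways:

```python
# The bet's expected utility from probability-utility products for monetary
# outcomes, and again from intrinsic utilities of chances.
P = {"H": 0.5, "~H": 0.5}          # probabilities for the coin toss
U = {"$1": 1.0, "-$1": -1.0}       # a dollar gained or lost is one utility unit

eu_from_products = P["H"] * U["$1"] + P["~H"] * U["-$1"]

def iu_of_chance(probability: float, utility: float) -> float:
    # The intrinsic utility of a chance equals its probability-utility
    # product, given the coordination of scales described in Section 3.5.
    return probability * utility

eu_from_chances = iu_of_chance(0.5, 1.0) + iu_of_chance(0.5, -1.0)
assert eu_from_products == eu_from_chances == 0.0
```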
The two formulas yield the same expected utility for the bet, because the product for a chance equals the chance's intrinsic utility. The second formula reveals the justification for taking the bet's expected utility as its utility; the bet's utility equals the sum of the intrinsic utilities of the chances, as the next few paragraphs show. They argue that an option's utility equals the sum of the intrinsic utilities of the chances for the option's possible outcomes. The first step argues that the sum of the intrinsic utilities equals the intrinsic utility of their combination. The second step argues that the intrinsic utility of their combination equals the option's utility. Suppose that two events are exclusive. Consider the disjunctive event they form. It is a combination of a chance for the first event and a chance for the second event. In the sense of Section 4.3, the two chances are utility-separable, using intrinsic utility, because of the independence of the intrinsic utilities of the chances. The intrinsic utility of each is constant whatever may be the other chance. The combination of chances yields together the intrinsic utility of each chance. The intrinsic utility of the disjunctive event is therefore the sum of the intrinsic utilities of the chances. Similarly, the intrinsic utility of a chance for the disjunctive event equals the sum of the intrinsic utilities of the resultant chances of its constituents. An option offers chances for possible outcomes. The intrinsic utility of a pair of chances equals the probability-utility product for the disjunctive event that the two possible outcomes form. The product for the disjunctive event equals the sum of the probability-utility products for the chances. Therefore, if o1 and o2 are exclusive possible outcomes of an option, P(o1 ∨ o2)U(o1 ∨ o2) = P(o1)U(o1) + P(o2)U(o2). As the previous paragraph argues, the sum equals the intrinsic utility of the pair of chances. Combining pairs of possible outcomes iteratively eventually yields the probability-utility product for the disjunctive event that all the possible outcomes of an option form, given a finite number of possible outcomes. For an option, the intrinsic utilities of the chances for the option's possible outcomes add up to the intrinsic utility of the option's combination of chances. Given an option's realization, the probability-utility product for the disjunctive event that the option's possible outcomes form involves a probability equal to one and a utility equal to the option's utility because the disjunctive event formed by the option's possible outcomes is equivalent to the option. Hence the option's utility equals the intrinsic utility of its combination of chances, that is, the sum of the intrinsic utilities of the chances. This establishes the expected-utility principle because the sum of the intrinsic
utilities equals the sum of the probability-utility products for the possible outcomes. An objection may arise. Is the intrinsic utility of the combination of chances not equal to the option's intrinsic utility, and does not the option's intrinsic utility differ from the option's (comprehensive) utility? The option's utility is the expected utility of its outcome, and this equals the expected intrinsic-utility of its outcome; it does not equal the option's intrinsic utility, or the intrinsic utility of the option's outcome. The probability distribution of utilities of possible outcomes that represents an option contains different information than does the proposition that represents the option. So, the distribution and the proposition have different intrinsic utilities. The option's utility, but not its intrinsic utility, equals the intrinsic utility of the combination of chances that the distribution represents. Two additional objections target the expected-utility principle's method of evaluating an option. The expected-utility principle evaluates a probability distribution of utilities of possible outcomes using just the distribution's mean. The first objection doubts that the mean suffices for the evaluation. May not the variance of the distribution also influence the evaluation? If an agent cares about the variance, because it is a measure of the option's risk in the sense of its exposure to chance, then the risk measured by the variance belongs to each possible outcome. Hence, the mean of the probability distribution of the utilities of the possible outcomes, without adjustment, covers the agent's attitude to the variance. The principle's evaluation of the distribution does not ignore the distribution's variance. The second objection, playing off the response to the first objection, accuses the expected-utility principle of double counting an option's risk by putting its risk in the option's possible outcomes. Does not the calculation of expected utility using consequences independent of risk already account for the option's risk? No, it accounts for risks in the sense of chances for bad events, but does not account for the option's risk in the sense of the option's exposure to chance. Rational agents may differ in the intensities of their aversions to an option's risk, and the formula for an option's expected utility ignoring the option's risk, being the same for all agents, does not accommodate this diversity. To further rebut the charge of double counting, consider an agent who cares only about money, the means to it, and risk. The agent's utility assignment to amounts of money does not express the agent's aversion to risk. For example, consider these options.
A: $3000
B: $4000 with probability 4/5
C: $3000 with probability 1/4
D: $4000 with probability 1/5
A rational agent may prefer A to B, because of aversion to risk, and also prefer D to C, because of the attraction of D’s greater possible gain. If the agent’s preferences follow expected utilities, computed using monetary outcomes, the utility of money satisfies these constraints:
U($3000) > (4/5)U($4000)
(1/4)U($3000) < (1/5)U($4000), or U($3000) < (4/5)U($4000)
These constraints are inconsistent. No utility assignment to amounts of money accommodates the agent’s preferences and the attitudes behind them.9 I conclude that the expected-utility principle does not omit or double count aversion to an option’s risk. The additivity of intrinsic utilities of chances of possible outcomes successfully supports the principle.
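The inconsistency is easy to confirm mechanically. This sketch, with an arbitrary grid of candidate utility values, searches for an assignment satisfying both constraints and finds none, since the second constraint reduces to U($3000) < (4/5)U($4000), which contradicts the first:

```python
from itertools import product

# A brute-force search (grid and step size arbitrary) for a utility
# assignment to $3000 and $4000 satisfying both constraints.
def satisfies_both(u3000: float, u4000: float) -> bool:
    return u3000 > (4 / 5) * u4000 and (1 / 4) * u3000 < (1 / 5) * u4000

grid = [i / 10 for i in range(-100, 101)]
solutions = [(a, b) for a, b in product(grid, repeat=2) if satisfies_both(a, b)]
print(solutions)  # [] -- the second constraint reduces to u3000 < (4/5)u4000
```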
4.5. The Structure of Rational Attitudes

Under some mild assumptions about an agent's attitudes, an option's utility is a function of the intrinsic utility of an option's risk, taken as its exposure to chance, and the option's utility ignoring its risk; the option's utility is compositional using this division of the option's realization and these utilities. I call the relationship mean-risk compositionality because the option's utility, or expected utility, ignoring its risk is a mean utility. Moreover, an option's risk is preference-separable from its independent consequences; the preference-ranking of the options in a decision problem (or in a hypothetical set of options) agrees with their ranking according to their risks given any compatible way of fixing their risk-independent consequences. Furthermore, an option's risk and its independent consequences are equivalence-separable;

9 Because this case illustrates the effect of aversion to risk, I take it to be a version of Allais's paradox, even though it exhibits the common ratio effect rather than the common consequence effect. The case discredits the view that the shape of an agent's utility curve for money represents an agent's attitude to risk, granting that the agent has an aversion to risk but has no utility curve for money.
the utility ranking of options follows the intrinsic utility ranking of their risks, given the equality of their utilities ignoring their risks. Finally, because of mean-risk additivity, an option's utility equals the sum of (1) the option's expected utility ignoring its risk and (2) the intrinsic utility of the option's risk in the sense of the option's exposure to chance. Mean-risk additivity implies the mutual preference-separability of an option's risk and the option's independent consequences, and this separability implies mean-risk compositionality. Applications of compositionality identify desires of the same strength, applications of preference-separability identify comparisons of strengths of desires, and applications of mean-risk additivity identify quantitative strengths of desires. Moving from applications of compositionality to applications of separability and then of additivity requires an infusion of resources at each step. A rational ideal agent's attitude to an option's risk is independent of the agent's attitude to the option's consequences that do not involve its risk, if the agent has a basic intrinsic attitude to the option's risk. Mean-risk compositionality, preference-separability, equivalence-separability, and mean-risk additivity all follow from the independence of the agent's attitude to an option's risk, as Section 4.7 shows.
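To make the structure concrete, here is a minimal mean-risk evaluation in Python; the options, probabilities, and the intrinsic utility assigned to risk are invented for illustration, not drawn from the text.

```python
# A minimal mean-risk evaluation, U(o) = U_r-(o) + IU(r).
def mean_ignoring_risk(chances):
    # chances: (probability, utility-ignoring-risk) pairs for a partition
    # of the option's possible outcomes.
    return sum(p * u for p, u in chances)

def option_utility(chances, iu_risk):
    return mean_ignoring_risk(chances) + iu_risk

risky = [(0.5, 10.0), (0.5, -2.0)]   # mean 4.0
safe = [(1.0, 3.5)]                  # mean 3.5, no exposure to chance

# An agent averse to exposure to chance assigns the risky option's risk
# a negative intrinsic utility; the safe option's risk is null.
print(option_utility(risky, iu_risk=-1.0))  # 3.0
print(option_utility(safe, iu_risk=0.0))    # 3.5
```

With these numbers the risky option has the greater mean, yet the intrinsic disutility of its risk reverses the ranking, which is the pattern that mean-risk compositionality and additivity describe.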
4.6. Independence of Attitudes

A risk that an option creates, taken as a chance of a bad event, contributes to, but is not the same as, the option's risk in the sense of the option's exposure to chance. Rationality imposes different requirements for an agent's attitude to a chance of a bad event that an option creates and for an agent's attitude to the option's risk in the sense of its exposure to chance. Rationality requires an ideal agent's attitude to an option's risk to be independent, in a certain way, of the agent's attitude to the option ignoring its risk, assuming a basic intrinsic attitude to the option's risk. This section explains the type of independence. Section 1.3 presents an option's risk, in the sense of the option's exposure to chance, as a component of an equilibrium involving the utilities of the option's possible comprehensive outcomes. An option's risk is a component of a possible outcome of the option when this component needs an adjustment that the outcome already makes. Once the equilibrium settles the option's risk, the option's risk appears in a mean-risk analysis of an option's utility. An option's utility, according to the mean-risk analysis, comes from
(1) the option's expected utility ignoring the option's risk and (2) the intrinsic utility of the option's risk. An option's expected utility ignoring the option's risk has a constant value across compatible changes in the intrinsic utility of the option's risk, and, similarly, the intrinsic utility of the option's risk has a constant value across compatible changes in the option's expected utility ignoring its risk. Independence of intrinsic attitudes follows from independence of reasons for intrinsic attitudes, as Chapter 2 explains. An intrinsic attitude, such as an intrinsic aversion to a risk, evaluates the attitude's object considering only its intrinsic features and not its extrinsic features, such as being a means to realization of another attitude. An agent's intrinsic attitude to a proposition expressing an option's risk considers just the a priori implications of the proposition, and these implications are the same whatever are the option's consequences not involving the option's risk. An agent's reasons for an intrinsic attitude to an option's risk are independent of the option's consequences not involving the option's risk, and so are the same, whatever are these other consequences, as long as they are compatible with the option's risk. The unchanging reasons require a constant attitude to the option's risk. An option's risk depends on the option's independent consequences, in particular, its chances of bad events. However, after settling the option's risk, the independent consequences do not also affect the risk's intrinsic utility. The independent consequences are not reasons for the agent's intrinsic utility assignment to the option's risk. Appraisal of the option's risk is independent of the particular chances for bad events that generate the option's risk.
4.7. Mean-Risk Additivity

The canonical principle of mean-risk additivity asserts that an option's utility equals the sum of (1) the option's utility ignoring the option's risk and (2) the intrinsic utility of the option's risk. That is, U(o) = Ur–(o) + IU(r). As an option's utility equals its expected utility, an option's utility ignoring the option's risk Ur–(o) equals the option's expected utility ignoring its risk, or EUr–(o). The intrinsic utility of the option's risk IU(r) evaluates just the intrinsic features of the option's exposure to chance, that is, the a priori implications of the proposition expressing the exposure, and not all that accompanies the exposure. According to mean-risk additivity, if having an operation brings a risk, the
utility of having the operation equals the utility of the operation ignoring its risk plus the intrinsic utility of the operation's risk. This section argues that for an ideal agent with a basic intrinsic attitude to risk, rationality requires mean-risk additivity. The requirement follows from the independence of the agent's basic intrinsic attitudes. A representation theorem establishing a certain type of utility-representation of preferences among options assumes that the preferences meet some conditions. A representation theorem of the theory of conjoint measurement, which Section 2.9.2 mentions, assumes, as a constraint on preferences among options, the strong separability of an option's consequences ignoring risk and the option's risk. It makes this assumption to show that the preference-ranking of options has a utility-representation complying with mean-risk additivity. This section does not simply assume conditions ensuring that preferences among options have a mean-risk representation but instead argues that rationality requires an ideal agent's utility assignments to comply with mean-risk additivity and thus to have preferences among options that meet all conditions necessary for their mean-risk representation. Mean-risk additivity assumes that a mean-risk division of an option's consequences produces an exclusive and exhaustive division of the option's realizations of basic intrinsic attitudes. The mean-risk division accomplishes this, assuming that the agent has a basic intrinsic attitude to an option's risk. In this case, evaluation of the option ignoring its risk covers all relevant consequences independent of the option's risk, including any relevant interaction between an option's risk and the option's independent consequences, and it does not double count the option's risk by counting realizations of basic intrinsic attitudes that ground the agent's intrinsic attitude to the option's risk, for no such grounding attitudes exist for a basic intrinsic attitude. Although the canonical form of mean-risk additivity obtains an option's utility from utilities of two types, it is equivalent to a form of mean-risk additivity that obtains an option's utility from intrinsic utilities only. An option's utility ignoring the option's risk equals the expected intrinsic-utility of the option's outcome ignoring the option's risk, that is, the expected intrinsic-utility of features of the option's outcome that are independent of its risk. Using Or–(o) to stand for the outcome of an option ignoring its risk, Ur–(o) = EIU(Or–(o)). The relevant features of an option's outcome reduce to realizations of basic intrinsic attitudes. Assuming that an agent has a basic intrinsic aversion to the option's risk, the relevant features of an option's
outcome ignoring the option's risk reduce to realizations of basic intrinsic attitudes besides the agent's attitude to the option's risk. Canonical mean-risk additivity holds if and only if an option's utility equals the sum of (1) the expected intrinsic-utility of the option's outcome ignoring the option's risk and (2) the intrinsic utility of the option's risk. That is, it holds if and only if U(o) = EIU(Or–(o)) + IU(r). A simplification of mean-risk additivity attends exclusively to an option's consequences, which are just part of the option's outcome and exclude, for example, past events. An option's possible outcomes differ only in the option's consequences, and the outcomes of two options in a decision problem differ only in the options' consequences. Hence, in a decision problem an option's evaluation may review, for a world that might be the option's world, the option's consequences in the world rather than all the world's features. An option's causal utility, given the world, narrows the option's evaluation this way. The option's causal utility, given the world, equals the intrinsic utility of the option's consequences in the world, and the option's causal utility equals the expected intrinsic-utility of the option's consequences, as Weirich (2015a: Chap. 6) shows. Letting C be a function that yields an option's consequences, CU(o given w) = IU(C(o in w)), and CU(o) = EIU(C(o)). A basic consequence of an act is a realization of a basic intrinsic attitude. Given that an agent has a basic intrinsic attitude to an act's risk, the act's risk is a basic consequence of the act. An act's relevant consequences that are independent of the act's risk follow from the act's other basic consequences. The intrinsic utility of the act's independent consequences equals the intrinsic utility of its basic consequences besides risk. An agent with incomplete information about an act's basic consequences besides its risk may evaluate these consequences using their expected intrinsic-utility. This expected intrinsic-utility is a probability-weighted average of the intrinsic utilities of exclusive and exhaustive epistemic possibilities ignoring the act's risk. An act's causal utility ignoring the act's risk equals the expected intrinsic-utility of the act's basic consequences besides the act's risk. For an option o in a decision problem, letting BCr–(o) stand for the option's basic consequences besides risk, CUr–(o) = EIU(BCr–(o)). An option's causal utility evaluates the option using a probability-weighted average of the intrinsic utilities of its consequences in the worlds that might be the option's world, and so are epistemically possible for the agent. A mean-risk analysis of an option's causal utility, given a world, separates (1) the intrinsic utility of the option's consequences in the world that are independent
of the option's risk, IU(Cr–(o in w)), and (2) the intrinsic utility of the option's risk, IU(r). The option's risk has the same intrinsic utility in every world, and ignoring it yields a probability-weighted average of the intrinsic utilities of the option's independent consequences in the worlds that might be the option's world. This average intrinsic-utility equals the option's causal utility ignoring the option's risk, given a coordination of scales for intrinsic utility and causal utility established by making a world's causal utility equal its intrinsic utility. According to a mean-risk analysis of an option's causal utility, analogous to a canonical mean-risk analysis of an option's utility, an option's causal utility equals the sum of (1) the option's causal utility ignoring the option's risk and (2) the intrinsic utility of the option's risk. That is, CU(o) = CUr–(o) + IU(r). Mean-risk additivity applies to an option's comprehensive utility and also to its causal utility. The difference between the two types of utility of the option is a constant equal to the expected intrinsic-utility of the option's outcome apart from its consequences, the same constant for all the options in a decision problem. Hence, the two forms of mean-risk additivity yield the same ranking of the options. Moving from comprehensive utility to causal utility amounts to a scale change for utility, and does not affect the options' ranking. Hence, I use both comprehensive utility and causal utility to evaluate options, selecting the type of utility that best fits the context. An argument for mean-risk additivity follows analogous arguments for the additivity of quantities besides utility. Length, as we understand it, is additive in the sense that the length of a concatenation of two objects, end to end in a straight line, equals the sum of the lengths of the objects, putting aside special forces that may shrink or expand objects when concatenated. A demonstration that length is additive in a context shows that the special forces do not arise in the context. Utility, taken as strength of desire, is additive in the sense that, for a range of composites, the utility of a composite equals the sum of the utilities of its components, putting aside special interactions between the components. A demonstration that utility is additive given some conditions shows that special interactions do not arise in those conditions. The justification of additivity is partly conceptual, because it concerns the concept of utility, and is partly normative because it concerns the relation between the utility of a composite and the utilities of its components. A short argument for canonical mean-risk additivity supports it by supporting the equivalent form of additivity that obtains an option's utility using only intrinsic utilities. Each possible comprehensive outcome of an option, represented as O(o given w), for a possible world w with o, includes
the option's risk. Using the additivity of intrinsic utilities of realizations of basic intrinsic attitudes, which Weirich (2015a: Chap. 2) establishes, each possible comprehensive outcome's intrinsic utility equals the sum of the intrinsic utility of the outcome ignoring the option's risk and the intrinsic utility of the option's risk: IU(O(o given w)) = IU(Or–(o given w)) + IU(r). Hence, the option's utility equals the sum of (1) its expected intrinsic-utility ignoring the option's risk and (2) the intrinsic utility of the option's risk. That is, U(o) = EIUr–(o) + IU(r). The option's expected intrinsic-utility ignoring its risk equals the option's utility ignoring its risk, EIUr–(o) = Ur–(o). Thus, substituting equals for equals, an option's utility equals (1) the option's utility ignoring its risk plus (2) the intrinsic utility of its risk. That is, U(o) = Ur–(o) + IU(r). This establishes canonical mean-risk additivity. A supplementary argument shows that the option's expected intrinsic-utility ignoring its risk equals the option's utility ignoring its risk, or EIUr–(o) = Ur–(o). Given the additivity of intrinsic utilities of realizations of basic intrinsic attitudes, the intrinsic utility of a world is the sum of the intrinsic utilities of the basic intrinsic attitudes that the world realizes. Assuming a basic intrinsic attitude to an option's risk, for a world that might be the option's world, this additivity yields the intrinsic utility of the world as a sum of (1) the intrinsic utility of the world ignoring the option's risk and (2) the intrinsic utility of the option's risk. That is, IU(w) = IUr–(w) + IU(r). The expected intrinsic-utility of an option's world is therefore a sum of (1) the expected intrinsic-utility of the option's world ignoring the option's risk and (2) the intrinsic utility of the option's risk. Letting W(o) stand for an option o's world, EIU(W(o)) = EIUr–(W(o)) + IU(r). Given comprehensive utility's sensitivity to information, the option's comprehensive utility equals the expected intrinsic-utility of the option's world, U(o) = EIU(W(o)), and, similarly, the option's comprehensive utility ignoring the option's risk equals the expected intrinsic-utility of the option's world ignoring the option's risk, Ur–(o) = EIUr–(W(o)). Therefore, the option's expected intrinsic-utility ignoring its risk equals the option's (comprehensive) utility ignoring its risk: EIUr–(o) = Ur–(o). This argument for the equality bolsters the previous argument for canonical mean-risk additivity.10
10 Oliveira (2016) defends a form of additivity for intrinsic value but acknowledges that some features of a possible world may block a contribution to the intrinsic value of the possible world from a prima facie good feature of the possible world and thereby block additivity. The argument for mean-risk additivity shows that for this form of additivity no feature of a possible world blocks it.
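The supplementary argument lends itself to a numerical check. With invented worlds, probabilities, and intrinsic utilities, the sketch below verifies that world-by-world additivity survives taking expectations:

```python
# If world by world IU(w) = IU_r-(w) + IU(r), then probability-weighted
# averaging gives EIU(W(o)) = EIU_r-(W(o)) + IU(r), and so, given the
# equalities in the text, U(o) = U_r-(o) + IU(r). All numbers are invented.
IU_RISK = -0.5
worlds = [
    (0.2, 5.0),    # (probability of the world, IU of the world ignoring risk)
    (0.5, 1.0),
    (0.3, -2.0),
]
eiu_ignoring_risk = sum(p * iu for p, iu in worlds)
eiu_of_worlds = sum(p * (iu + IU_RISK) for p, iu in worlds)
assert abs(eiu_of_worlds - (eiu_ignoring_risk + IU_RISK)) < 1e-12
```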
A mean-risk analysis of an option's causal utility treats the option's consequences as a composite of the option's risk, in the sense of the option's exposure to chance, and the option's independent consequences. The option's risk and the option's independent consequences are components of a composite of consequences. Mean-risk additivity holds for an option's causal utility, granting that it holds for an option's comprehensive utility, because the two types of utility differ only by a constant. Also, an argument similar to the one for comprehensive utility's mean-risk additivity directly supports causal utility's mean-risk additivity. Because the attitude toward an option's risk is independent of the attitude toward the option's consequences not involving its risk, the intrinsic utility of an option's consequences in a world realizing the option equals the sum of (1) the intrinsic utility of the option's consequences in the world ignoring its risk and (2) the intrinsic utility of the option's risk. That is, IU(C(o in w)) = IUr–(C(o in w)) + IU(r). Given this sum world by world, the option's causal utility equals the sum of (1) its expected causal-utility ignoring its risk, that is, its causal utility ignoring its risk, and (2) the intrinsic utility of its risk. That is, CU(o) = CUr–(o) + IU(r). To deepen the explanation of mean-risk additivity for an option's causal utility, the argument for it can expand to include the grounds of additivity of intrinsic utilities, and, for economy, can target just two-part composites of an option's risk and its independent consequences. For these composites, it can establish utility-separability using points about basic intrinsic attitudes, and then use points about utility-separability and marginal utility to show mean-risk additivity. Because the argument uses utility-separability to support mean-risk additivity, and mean-risk additivity entails utility-separability, the argument uses an implication of the principle of mean-risk additivity to support the principle. The argument uses utility-separability and a principle of concatenation to obtain mean-risk additivity. The relevant concatenation operation is forming a composite of objects of utility. The principle of concatenation applies to a composite with utility-separable components. If an agent possesses one component, he has the utility of possessing it. If he possesses the other component, he has the utility of possessing it also. Because of utility-separability, both utilities are context-independent and so well-defined. The utility of the composite is also context-independent and so well-defined. It is the utility of an option's world, which is context-independent because it forms a complete context that leaves no room for additional features. Taking composition as concatenation of objects of utility, if concatenation produces a composite
with utility-separable components, an agent who has the composite, and so has both its components, has the sum of the utilities of the components. The choice of a concatenation operation amounts to taking a composite's utility to be a sum of the utilities of its components, if its components are utility-separable. The choice belongs to the conventions of representation of strengths of desire with utilities, but it has a theoretical motivation. The choice produces a representation of strengths of desire useful for formulating principles of rationality, such as mean-risk additivity. Consider composites with components obtained by filling categories. Assume that among the assignments to a category, one is null. For example, an option's absence of risk is the null assignment to the category of an option's risk. For a category, an assignment's marginal utility given an assignment of the other categories is the difference between (1) the utility of the composite with the assignment to the category and (2) the utility of the composite with the null assignment to the category. A category is utility-separable from the other categories if and only if the marginal utility of an assignment of the category is the same given all compatible assignments of the other categories. Consequently, the category's assignment contributes the same utility to a composite's utility given all compatible assignments of the other categories. For composites of consequences arising from the options in a decision problem, a rational attitude toward an option's risk is independent of context, and so an option's risk is utility-separable from its other consequences, using intrinsic utility. Because of its utility-separability, the option's risk has a constant marginal intrinsic-utility. Therefore, the option's risk makes a constant contribution to the utility of composites that have it as a component. Its contribution equals its intrinsic utility. The utility-separability of the components of the composites arises from features of basic intrinsic attitudes. Basic intrinsic attitudes, including their strengths, are independent of context because basic intrinsic attitudes depend exclusively on a priori implications of their propositional objects. A rational attitude toward a composite of realizations of basic intrinsic attitudes derives from basic intrinsic attitudes, and their independence ensures their constant strength as context changes. Because a basic intrinsic aversion to an act's risk is independent of other intrinsic attitudes, the act's risk is utility-separable from the act's other basic consequences, using intrinsic utility. The intrinsic utility of the act's risk is the same given any compatible way of fixing the act's other basic consequences. Similarly, the act's basic consequences besides its risk are utility-separable
from the act's risk, using intrinsic utility. The intrinsic utility of the act's basic consequences besides its risk is the same given any compatible way of fixing the act's risk. Suppose that an agent has a basic intrinsic desire for health and a basic intrinsic aversion to risk, and that possible consequences of acts are: health and risk, health without risk, illness and risk, and illness without risk. The following pairs represent these possible consequences: (H, R), (H, ~R), (~H, R), (~H, ~R). Assume the mutual utility-separability of health and risk using intrinsic utility. If given risk, IU(H) = 1 and IU(~H) = 0, then also given the absence of risk, IU(H) = 1 and IU(~H) = 0. Independence grounds additivity. Combining the monetary value of x dollars and the monetary value of y dollars produces the monetary value of x + y dollars because the monetary value of each amount is independent of context and, in particular, independent of the monetary value of the other amount. Utility-separability, a type of independence, grounds additivity of utilities. Given the mutual utility-separability of two amounts of money, the utility of the sum of the two amounts equals the sum of the utilities of the amounts. The utility of two dollars is the sum of the utility of the first dollar and the utility of the second dollar. Given the mutual utility-separability of an act's risk and its other basic consequences using intrinsic utility, an act's causal utility is the sum of (1) the mean or expected intrinsic-utility of the act's basic consequences besides its risk and (2) the intrinsic utility of the act's risk. For an option in a decision problem this implies that CU(o) = EIU(BCr–(o)) + IU(r). From this follows mean-risk additivity for an option's causal utility: CU(o) = CUr–(o) + IU(r). It holds because CUr–(o) = EIU(BCr–(o)). Similar support deepens the case for a canonical mean-risk analysis of an option's comprehensive utility. For a rational ideal agent, utility-separability of categories holds for a desire-expressing utility function and a range of composites of (1) an option's risk and (2) independent features of a world that might be the option's world. Therefore, the risk's marginal utility is the same given any compatible way of fixing the world's independent features, and vice versa. The marginal utility of an item in a mean-risk pair is the difference between the intrinsic utility of the pair and the intrinsic utility of the same pair except with the null assignment to the item. If both items have constant marginal utility, then the intrinsic utility of the pair is the sum of their marginal utilities. For a type of utility that evaluates an item in isolation, such as intrinsic utility, the utility of each item without the other equals its marginal utility, and the sum of their utilities equals the pair's utility. If the two items
are compatible and belong to utility-separable categories, using intrinsic utility, then the intrinsic utility of the pair equals the sum of their intrinsic utilities. The intrinsic utility of a world that might be the option's world equals the intrinsic utility of the option's risk plus the intrinsic utility of the world's independent features: IU(w) = IU(r) + IUr–(w). Because of this, the option's utility equals (1) the expected intrinsic-utility of the option ignoring its risk plus (2) the intrinsic utility of the option's risk: U(o) = EIUr–(o) + IU(r). Because the former equals the option's utility ignoring its risk, that is, EIUr–(o) = Ur–(o), the result is canonical mean-risk additivity: U(o) = Ur–(o) + IU(r). Attention to utility-separability thus adds depth to the explanation of mean-risk additivity. Another way of organizing points about basic intrinsic attitudes to support mean-risk additivity uses two constraints on a function going from intrinsic utilities of the composite's components to the composite's intrinsic utility. The two constraints hold because of the nature of the basic intrinsic attitudes behind intrinsic-utility assignments. The function must be such that, first, when both components have intrinsic utility equal to zero, the composite also has intrinsic utility equal to zero, and, second, when the intrinsic utility of one component increases by x and the other component has a constant intrinsic utility, then the intrinsic utility of the composite increases by x, that is, an increase in a component's intrinsic utility produces the same increase in the composite's intrinsic utility. Given these two constraints, the function for composites must be addition (a computational sketch at the end of this section renders the derivation). For example, let F be a two-place function yielding the intrinsic utility of a two-part composite given the intrinsic utilities of its two parts. According to the first constraint, F(0, 0) = 0. Then, according to the second constraint, F(1, 0) = 1 and F(0, 1) = 1. Applying the second constraint again, F(1, 1) = 2, F(0, 2) = 2, F(2, 0) = 2. Applying it one more time, F(1, 2) = 3, F(2, 1) = 3. And so on, as in Figure 4.1. In general, F(x + z, y) = F(x, y) + z, and F(x, y + z) = F(x, y) + z. Because in these formulas x may equal 0 so that F(x, y) equals y, the formulas entail, by the symmetry of addition, that F(z, y) = z + y. The constraints on the utilities of changing composites demand that the function for a composite be addition. Assume that the composites include a zero point of the form F(0, 0) = 0 and include composites with the component utilities given by the pairs (z, 0) and (0, y). Then, according to the constraints, F(z, 0) = z and F(0, y) = y, and, therefore, F(z, y) = z + y. The constraints yield a form of mean-risk additivity that becomes canonical mean-risk additivity assuming, as argued, that
the expected intrinsic-utility of an option's outcome ignoring its risk equals the option's utility ignoring the option's risk.

[Figure 4.1 appears here: a lattice of pairs of component utilities that begins with (0, 0), whose composite has utility 0, and descends through (1, 0), (0, 1), (1, 1), (2, 0), (0, 2), and so on, with each composite's utility the sum of its components' utilities.]
Figure 4.1 Utility of a Composite of Two Components, Each of Utility 0, and Utilities of Composites Descending from It under Enhancement of a Component by a Unit of Utility

Mean-risk additivity is a principle for quantitative cases and, in these cases, entails mean-risk compositionality and the mutual separability of risk and independent consequences. However, the independence of basic intrinsic attitudes holds even when the attitudes are imprecise and yield only comparisons. Independence of basic intrinsic attitudes, besides supporting mean-risk additivity, supports a nonquantitative form of mean-risk compositionality and supports mean-risk separability. These structural relations govern attitudes of indifference and preferences in nonquantitative cases with either an imprecise basic intrinsic attitude to an option's risk, or an imprecise attitude to the option's independent consequences. The argument for the analogues of compositionality and separability in nonquantitative cases explains, for a rational ideal agent, an attitude toward a composite of two components using attitudes toward the components. To support compositionality, it maintains that the attitude to a composite should depend exclusively on, and so be a function of, the attitudes to the components. For instance, indifference between two composites may arise from indifference between their ways of filling each category. To support preference-separability, the argument maintains that preferences among
composites should agree with preferences among composites alike in some of their components. Also, suppose that a set of pairs of a probability assignment and a utility assignment represent an agent's imprecise doxastic and conative attitudes. Each utility assignment in the set must satisfy mean-risk additivity, and so compositionality and separability follow for the imprecise attitudes that the set represents. Objections to compositionality, separability, and additivity generally try to formulate counterexamples. Any alleged counterexample to mean-risk compositionality either assumes that the strength of the attitude to an option's risk is not independent of context, or else assumes that the option's risk interacts with the option's independent consequences to produce additional relevant consequences. An agent's having a basic intrinsic attitude to the option's risk blocks the first assumption, and the comprehensiveness of the option's independent consequences blocks the second assumption. Similar defenses guard separability and additivity against counterexamples.
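The promised computational sketch of the two constraints on F follows. The restriction to nonnegative integer intrinsic utilities is mine, for enumerability; starting from F(0, 0) = 0 and applying unit increases reproduces addition.

```python
from functools import lru_cache

# The two constraints rendered computationally: F(0, 0) = 0, and raising
# one component's intrinsic utility by a unit raises the composite's by a
# unit. Building F outward from (0, 0) reproduces addition.
@lru_cache(maxsize=None)
def F(x: int, y: int) -> int:
    if x == 0 and y == 0:
        return 0                  # first constraint
    if x > 0:
        return F(x - 1, y) + 1    # second constraint applied to x
    return F(x, y - 1) + 1        # second constraint applied to y

assert all(F(x, y) == x + y for x in range(10) for y in range(10))
```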
4.8. Summary

The expected-utility principle follows from the additivity of the intrinsic utilities of chances that an option generates for exclusive and exhaustive possible outcomes. For a rational ideal agent with a basic intrinsic attitude to an option's risk, the intrinsic utilities of these chances are additive because they are each independent of the others. Because the intrinsic utility of a chance of a possible outcome equals the probability-utility product for the possible outcome, the intrinsic utilities of the chances add up to the option's expected utility, which equals the option's utility. Also, mean-risk additivity follows from the independence of a rational ideal agent's attitude to an option's risk, in the sense of its exposure to chance, and the agent's attitude to the option ignoring its risk. An option's utility equals the sum of its utility ignoring its risk and the intrinsic utility of its risk.
5 Rational Management of Risks

In a decision problem, rationality permits a rational ideal agent to adopt an option unless the agent has a reason against the option, and only a preference for another option constitutes a reason against the option. Thus, the agent may adopt the option in the absence of a preference for another option. An account of risk provides an explanation of the rationality of preferences among options and so works with a theory of rational choice to explain rationality's requirements concerning decisions involving risks. This chapter shows how for a rational ideal agent, in standard quantitative cases, the expected utilities of options explain preferences among options that in turn explain a decision that maximizes expected utility. The explanation uses a substantive version of the principle of expected-utility maximization that goes beyond consistency of choices. Expected utilities evaluate options, shape preferences among them, and settle whether a risky option merits adoption. The chapter precisely formulates the principle of expected-utility maximization for standard decision problems and justifies the principle by explaining the principle's grounds. Then it generalizes the principle for decision problems with imprecise probabilities and utilities.
5.1. Changing Risks

A driver may reduce the risk of sliding off a wet road by slowing down, and a farmer may reduce the risk of crop failure by hedging in the futures market for commodities. A home owner may decide that she does not want to face more than a 1% chance that her house will burn down. She can install smoke alarms and replace defective wiring to reduce the chance of a major fire. She can decide against remodeling so that if her house burns down she does not also lose the cost of remodeling, and, in this way, may reduce the loss from a fire. She can also buy insurance so that if her house burns down the insurance provides the cost of a replacement.
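A toy computation shows the trade the home owner faces when buying insurance; the house value, fire probability, and premium are invented numbers, and money stands in for utility.

```python
# Comparing the year's prospects with and without insurance.
P_FIRE = 0.01
HOUSE_VALUE = 300_000
PREMIUM = 3_500          # exceeds the expected loss of 3,000

def outcomes(insured: bool):
    # (probability, monetary outcome) pairs for the year.
    if insured:
        return [(1 - P_FIRE, -PREMIUM), (P_FIRE, -PREMIUM)]  # loss replaced
    return [(1 - P_FIRE, 0), (P_FIRE, -HOUSE_VALUE)]

for insured in (False, True):
    mean = sum(p * x for p, x in outcomes(insured))
    worst = min(x for _, x in outcomes(insured))
    print(insured, mean, worst)
# Uninsured: mean -3000, worst -300000. Insured: mean -3500, worst -3500.
```

Insurance worsens the mean outcome by the premium's loading but removes the exposure to chance; an agent sufficiently averse to that exposure rationally pays the loading.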
A rational response to a risk of an aversion's realization, if rationality does not require the aversion with its intensity, may be to reduce the aversion rather than to prevent its realization. A person can reduce a risk, in the sense of a chance of a bad event, by reducing his aversion to the bad event the risk threatens. A patient who fears a medical procedure can learn about it to reduce anxiety. Then the risk of undergoing the procedure is less severe. As these examples show, a person facing a chance of a bad event can reduce the risk by reducing the chance, by reducing the badness of the event, or by performing an act so that the risk in combination with the act is less than the risk without the act, as in cases of insuring against a loss and hedging bets. Aversion to risks targets both physical and information-sensitive, evidential risks. Good methods of reducing risks by reducing chances differ for physical risks and for evidential risks. Aversion to physical risks leads people to adopt safety measures such as replacing worn tires with new ones. A person may reduce a physical risk of flu by getting a flu shot. Aversion to evidential risks leads people to be cautious when confronting the unknown, such as the depth of water flooding across the road ahead. Not knowing its depth, a driver may be averse to trying to cross, even if in fact the water is too shallow to make his car stall. A person may reduce an evidential risk that eating a mushroom will cause illness by learning that the mushroom belongs to a harmless, nonpoisonous species. Changing the environment may reduce physical risks, and learning may reduce evidential risks. Rational risk management is just rational choice in decision problems with options that affect risks. A theory of rational choice prescribes when and how to reduce risks. This chapter explains how rationality evaluates choices in decision problems. It advances a standard that a rational option meets. An agent may use the standard's evaluations of options to form preferences among options.
5.2. Decision Problems

A parent may wonder what to cook for dinner. A look in the refrigerator shows the fixings for a cassoulet and for an omelet. Among the parent's possible decisions are forming the intention to make a cassoulet and forming the intention to make an omelet. The parent may resolve the decision problem by deciding to make an omelet. An agent facing a decision problem has a set of options, namely, the various decisions that he might make. The agent
resolves the problem by making a decision to perform an act. The decision, the formation of an intention, is itself an act, one that the agent performs at will. This chapter assumes that an agent's attitudes, including attitudes toward risk, are rational and formulates principles that then govern the step from attitudes to preferences among options, and, next, the step to a decision grounded in preferences among options. Complying with the principles explains the rationality of a decision. Whether a decision is rational depends on its evaluation according to the agent's doxastic and conative attitudes. If these attitudes are quantitative, then degrees of belief and degrees of desire represent them, as Chapter 2 explains. Suppose that the agent is cognitively ideal. If her degrees of belief are rational, they comply with the laws of probability, and a probability function represents them. If her degrees of desire are rational, a utility function represents them, and they comply with the expected-utility principle, according to which an option's utility equals its expected utility, a probability-weighted average of the utilities of its possible comprehensive outcomes, for a partition of them. Grounding probabilities and utilities in quantitative attitudes gives probabilities and utilities the explanatory power of these attitudes. I treat standard decision problems. These problems meet some simplifying conditions. In a standard decision problem, options are performable at will, a quantitative probability and utility attaches to each possible outcome of each option for some partition of its possible outcomes, and some option has maximum expected utility. A common decision principle for a standard decision problem holds that an option is rational for an ideal agent if and only if it maximizes utility, that is, its utility is at least as great as the utility of any other option. Such an option is at the top of the agent's preference-ranking of options. For rational ideal agents, an option's utility equals its (partition invariant) expected utility, and maximizing expected utility is equivalent to maximizing utility. Maximizing expected utility arises from following preferences. This is the argument for it. In a standard decision problem at a time, an option specifies exhaustively the content of a possible decision at the time. If an agent may at a time make a complex decision that includes simple decisions, then only the complex decision constitutes an option. A simple decision that is part of the complex decision is not an option because it does not specify exhaustively the content of a possible decision at the time. For example, if an agent has to decide at a time
about two bets on offer, a decision to accept both is an option and includes a decision to accept the first, but a decision to accept the first is not itself an option because it is incomplete. Section 6.2 returns to this point about options. For a rational ideal agent in a standard decision problem, each option has a utility equal to the utility of the way of performing the option that has the highest utility. So, buying a hat has a utility equal to the utility of buying a red hat, if buying a red hat maximizes utility among ways of buying a hat. Not every decision problem meets this condition. In Arntzenius, Elga, and Hawthorne's (2004) case of Satan's apple, Eve chooses, for each piece of an apple divided into an infinite number of pieces, whether to take the piece. She wants each piece but suffers a terrible penalty if she takes an infinite number of pieces. She should stop taking pieces after taking some finite number of pieces. This act of stopping has no utility because no best way of stopping exists; for any n, taking n + 1 pieces has greater utility than stopping after taking n pieces. An option may fail to have a utility because (1) probabilities and utilities defining its expected utility fail to exist, (2) adequate expected-utility calculations must review an infinite number of possible outcomes, or (3) ways of performing the option are better and better without end, as in Satan's apple. These obstacles do not arise in a standard decision problem.1
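Satan's apple can be rendered schematically: with any utility that increases with each finite number of pieces taken, no stopping act maximizes. The unit of utility per piece below is an arbitrary stand-in.

```python
# Stopping after n + 1 pieces always beats stopping after n, so the act of
# stopping after some finite number of pieces has no best realization.
def utility_of_stopping_after(n: int) -> float:
    return float(n)    # each finite number of pieces is penalty-free

assert all(
    utility_of_stopping_after(n + 1) > utility_of_stopping_after(n)
    for n in range(1000)
)
```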
5.3. The Principle to Maximize Expected Utility

The traditional principle to maximize expected utility, going back to the work of Daniel Bernoulli ([1738] 1954), requires an option that has maximum expected utility, computed using, for an option's possible outcomes, a probability function that represents strengths of belief and a utility function that represents strengths of desire. A twentieth-century principle, inspired by the traditional principle, requires that preferences among options be “as if” maximizing expected utility. That is, the preferences should be such that for the possible outcomes of options, expressed so that the agent knows his attitudes to them, some probability function and some utility function (unique up to scale transformations) are such that the preferences follow

1 In a version of Satan's apple, the salient options may be Eve's binding herself not to start taking pieces, binding herself to stopping after taking some finite number of pieces, and binding herself to taking all the pieces. This decision problem has three options with the same time of performance. It fails to be a standard decision problem because the second option has no utility assignment.
expected utilities computed using the possible outcomes and the functions. I call the traditional principle a substantive version of the principle to maximize expected utility and call the modern principle a representational version of the principle to maximize expected utility.
Take any probability function and any utility function for some characterization of the possible outcomes of options in an agent's decision problem, and among the options construct preferences for the agent that maximize expected utility according to the functions and characterization of possible outcomes. These constructed preferences are representable as maximizing expected utility but, except by coincidence, do not maximize expected utility, taking an option's possible outcomes to comprehend all the option's consequences and taking their probabilities as strengths of belief and their utilities as strengths of desire. The traditional, substantive version of the principle to maximize expected utility is stronger than the modern, representational version of the principle.2
A representation theorem presents conditions on preferences among options, called axioms of preference, that ensure that the preferences may be represented as following expected utilities given the theorem's characterization of possible outcomes of options. The preference axioms of a representation theorem divide into those necessary for an expected-utility representation of preferences and structural axioms that facilitate the representation. The representational version of the principle to maximize expected utility, given the structural axioms, demands no more than satisfaction of the necessary preference axioms. The substantive version demands more. It requires that preferences among options, besides being representable as following expected utilities according to some characterization of possible outcomes and some probability function and some utility function (unique up to scale transformations) for the possible outcomes, follow expected utilities obtained from the possible outcomes of options using for them degrees

2 Okasha (2016) notes that the traditional principle of expected-utility maximization that Daniel Bernoulli advanced is normatively stronger than the representational principle to choose as if maximizing expected utility. He describes the traditional principle's interpretation of probability and utility as mentalist and the representational principle's interpretation of probability and utility as behaviorist. He objects that the traditional principle has no support for its mentalist interpretation of probability and utility and, granting the interpretation, no support for its normative claims, in particular, because it prohibits rational aversion to risk taken as variance in the probability distribution of utilities of possible outcomes. I address these objections to the traditional principle. Chapter 2 supports the principle's interpretation of probability and utility. This chapter, together with the previous chapter, presents and argues for a version of the traditional principle of expected-utility maximization. The principle accommodates aversion to an option's risk taken as the variance of the utilities of possible outcomes by treating this risk as a consequence of the option.
of belief and degrees of desire defined independently of preferences among options. Choices that have an “as if” maximizing representation meet a consistency constraint for choices that is weaker than the substantive requirement of maximizing expected utility.
The substantive version of the principle to maximize expected utility is stronger than the representational version in two ways. First, it requires real or literal maximization instead of “as if” maximization. Second, it requires maximization in all standard decision problems instead of just in standard decision problems with preferences among options that satisfy necessary assumptions and structural assumptions ensuring that the set of preferences among options is rich enough to provide for the representation's existence and uniqueness (up to permissible transformations of the utility scale). If a decision problem has just two salient options, say, an act and its opposite, then typically a preference between them follows expected utilities according to multiple representations. To obtain uniqueness of a representation (up to scale transformations), the representation theorem needs a rich set of options, and generally not just the options in a decision problem, but also hypothetical options constructed using the consequences of the options in a decision problem.
The substantive principle of expected-utility maximization governs a single choice, not just a set of choices. It demands more than consistency with other choices, taken as the choices' representability as maximizing expected utility. The substantive principle explains the rationality of a single choice by noting that the choice is at the top of a preference-ranking of options that follows the expected utilities of options. The explanation, as in Section 5.2, observes that an option's expected utility equals the option's utility, and explains why an agent should maximize utility. It notes that utilities represent preferences, so that an agent's maximizing utility is just following preferences, that is, the agent's adopting an option such that she prefers no other option.
Given preferences that satisfy the axioms of a representation theorem, the preferences can be represented either as maximizing expected utility or as minimizing expected utility. The substantive principle motivates their representation as maximizing expected utility so that the representation reveals an agent's degrees of belief and degrees of desire. The representational principle is inspired by the substantive principle and does not compete with it. Representing preferences among options as maximizing expected utility, according to probability and utility assignments constructed for this purpose, offers a means of inferring probabilities and utilities that represent an agent's
attitudes to options and their possible outcomes and so explain rational choices.
The relations that hold between preferences and utilities depend on whether utilities represent just preferences among options or also degrees of desire. If they represent just preferences among options, and so arise from the preferences, then they do not explain the preferences. In contrast, the principle of expected-utility maximization in its substantive form may explain preferences among options that arise for a rational agent from degrees of belief and degrees of desire attaching to the possible outcomes of options. Although utilities representing preferences among options may reveal utilities that represent attitudes that explain preferences among options, only the latter type of utilities have the power to explain preferences among options. An agent's deliberations do not profit from an expected-utility representation of her preferences among acts. She cannot use the representation to form preferences among acts because the representation assumes that the preferences exist. In contrast, she can use probability and utility assignments to possible outcomes of acts, if these assignments are defined independently of preferences among acts, to form preferences among acts.
A representation theorem's axioms of preferences divide into conditions necessary for the desired type of representation and the structural conditions that ensure the uniqueness of the representation. Some conditions necessary for representing preferences among options as maximizing expected utility are advanced as requirements of rationality. However, some conditions necessary for a type of representation are not requirements of rationality. A complete preference-ranking of options is necessary for an expected-utility representation of the preferences, but rationality does not require completeness, and so it does not require an expected-utility representation. At most, it requires that a preference-ranking of options can be completed in a way that gives the preferences an expected-utility representation.3
The substantive version of the principle of expected-utility maximization uses utilities that represent attitudes to possible outcomes of options. The representational version uses only preferences among acts. The utilities it assigns to possible outcomes need not represent attitudes to the possible

3 An argument that rationality requires completeness contends that if an agent fails to compare two options, then in a sequence of decision problems with these two options, the agent may pay to adopt the first rather than the second and then pay to adopt the second rather than the first, for a net loss. However, a rational ideal agent prevents such a net loss by considering, as Section 6.5 explains, an option's consequences given the other options the agent adopts.
outcomes. Savage ([1954] 1972) assumes that for each possible outcome, some act has that outcome in every state. Then he constructs the outcome's utility using preferences among acts. However, no constant act's outcome equals any variable act's outcome, because a variable act's outcome includes products of the act's variability, such as the act's exposure to chance. The utility that Savage assigns to the variable act's outcome does not represent the agent's attitude to the outcome. At most it reveals the agent's attitude to the outcome ignoring exposure to chance.4
Section 4.4's justification of the expected-utility principle observes that attitudes to the risks of bad outcomes and the prospects of good outcomes that an option creates are independent in a way that supports summing the intrinsic utilities of the risks and prospects to obtain the option's utility. A version of the principle advanced as part of a representational approach to expected-utility maximization, such as Savage's ([1954] 1972), does not have this justification because it excludes from an option's outcome a factor that matters to an agent, namely, the option's risk, taken as the option's exposure to chance. It runs aground on Allais's paradox (see Section 5.4) and Ellsberg's paradox (see Section 6.1.2), which show that some rational choices cannot be represented as maximizing expected utility if the possible outcomes of the choices are just monetary and so exclude an option's risk.
5.4. Risk as a Consequence

An option's outcome includes all that happens given an option's realization, but an option's evaluation may use just an option's consequences and, in particular, only the consequences the agent cares about. An option's relevant consequences are the avoidable relevant events that occur with the option's realization. The substantive version of the principle to maximize expected utility takes an option's consequences comprehensively and so includes the option's risk in the sense of its exposure to chance. The representational version of the principle may consider only some consequences, such as monetary consequences, to facilitate its representation of preferences among options.5

4 Gilboa (2009) reviews Savage's framework and representation theorem. Gaifman and Liu (2018) generalize Savage's representation theorem so that it does not require acts that have a constant outcome in every state.
5 Weirich (2001b; 2018b) advocates taking an option's risk as a consequence of the option when formulating a substantive version of the principle of expected-utility maximization.
A representation of preferences among options attaches probabilities and utilities to each option's possible outcomes. The proof of the representation's existence and uniqueness (up to positive linear transformations of the utility function) assumes that various options may produce the same possible outcome so that preferences among options constrain the outcome's probability and utility assignments in a representation that uses expected utilities.
Savage ([1954] 1972: 82–91) admits that, for normative accuracy, his utility theory should take an option's outcome as a grand world that covers everything an agent cares about. However, for practicality, he uses only small worlds that omit some relevant considerations. For example, buying a lottery ticket with a car as a prize has possession of the car as the outcome given the ticket's selection. This small-world outcome does not include the price of gasoline, although the price affects the car's utility. The car's utility is a state-dependent utility, and Savage's framework using small worlds does not accommodate this dependency.
Savage's ([1954] 1972: Sec. 2.7) sure-thing principle, a preference axiom for his representation theorem, states that the order of two options agrees with the order of their consequences in a state s, given that they have common consequences in s's complement. Savage advances it as a normative principle asserting a type of independence. Because Savage uses small worlds, the sure-thing principle faces difficulties such as Allais's (1953) paradox. Suppose that outcomes specify only monetary consequences although an agent also dislikes risk. Table 5.1 depicts two decision problems, using rows to represent options. In the top problem, two options both yield $10 in a certain state s and differ only in the state's complement ~s. Given its complement, the top option yields $10 while the bottom option yields $0 or $20 with equal probability. The agent may prefer the top option because of aversion to risk. In the bottom problem, the two options yield $0 in the state s but have the original payoffs in the state's complement ~s. Given the change, if the state's complement is unlikely, the agent may prefer the bottom option to the top option because the options differ little in risk, and the bottom option offers a chance for a larger prize. This preference reversal, although justified by aversion to risk, is contrary to the sure-thing principle applied with respect to monetary consequences only. The problem disappears if the sure-thing principle acknowledges consequences besides money, such as risk.6

6 Machina (1982) drops Savage's sure-thing principle, a type of independence axiom, and changes the representation of preferences, to accommodate preferences, such as those in Allais's paradox, that violate the sure-thing principle but seem rational. Mongin (2018) reviews the history
Table 5.1 Preference Reversal

Top decision problem
                                   s     ~s
  less risky row                  $10   $10
  row with brighter opportunity   $10   $0 or $20

Bottom decision problem
                                   s     ~s
  less risky row                   $0   $10
  row with brighter opportunity    $0   $0 or $20
Although taking consequences noncomprehensively facilitates an expected-utility representation of preferences among options, the substantive principle of expected-utility maximization requires taking consequences comprehensively, so that they include an option's risk when an agent is averse to the option's risk.7 I adopt the substantive principle of expected-utility maximization because of its normative force and explanatory power. The case for it requires putting an option's risk in the option's possible outcomes. Does the principle of expected-utility maximization suffer from putting an option's risk among the option's consequences? Some theorists object to the principle's making outcomes comprehensive.8
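To see concretely how comprehensive outcomes dissolve the puzzle, here is a small numerical sketch of Table 5.1's preference reversal; the probability of s and the two bonus terms are invented for illustration and stand in for one agent's intrinsic attitudes toward certainty and toward the prospect of the larger prize.

```python
# A hypothetical model of Table 5.1's preference reversal. Exposure to
# chance and the prospect of the top prize are treated as consequences:
# certainty carries an invented bonus C_BONUS, and a shot at the $20
# prize an invented bonus B_BONUS.

C_BONUS = 2.0   # invented intrinsic utility of facing no chance at all
B_BONUS = 1.0   # invented intrinsic utility of a chance at the $20 prize

def eu(outcomes):
    """Expected utility over (probability, dollar payoff) pairs, with
    exposure to chance and bright prospects counted as consequences."""
    value = sum(p * x for p, x in outcomes)      # utility linear in money
    if len({x for _, x in outcomes}) == 1:
        value += C_BONUS                         # a sure thing
    if any(x >= 20 for _, x in outcomes):
        value += B_BONUS                         # chance of the big prize
    return value

p_s = 0.9  # s likely, its complement unlikely; $0 and $20 split ~s evenly

top_safe  = eu([(1.0, 10)])
top_risky = eu([(p_s, 10), ((1 - p_s) / 2, 0), ((1 - p_s) / 2, 20)])
bot_safe  = eu([(p_s, 0), (1 - p_s, 10)])
bot_risky = eu([(p_s, 0), ((1 - p_s) / 2, 0), ((1 - p_s) / 2, 20)])

# The reversal, contrary to the sure-thing principle over money alone:
print(top_safe > top_risky, bot_risky > bot_safe)   # True True
```

Judged by monetary consequences alone, the pair of preferences violates the sure-thing principle; judged with exposure to chance and the bright prospect counted as consequences, each preference maximizes expected utility.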
of Allais's paradox. He notes that Savage's version of Allais's paradox uses a partition of states and the probabilities of states, whereas the original version of Allais's paradox uses just probabilities of outcomes.
7 Gilboa, Minardi, Samuelson, and Schmeidler (2018) argue that taking states to resolve all relevant uncertainties, and so to be fine-grained, makes it hard to present an agent with decision problems involving the states so that the agent's choices in the decision problems reveal the agent's beliefs or degrees of belief. They seek ways of observing beliefs through choices, and so replace states with coarser-grained eventualities.
8 Bradley and Stefánsson (2017: 495) seem to hold that taking the risk an option generates as a consequence of the option requires explaining the risk's effect on an option's utility. However, an option's consequences do not depend on the option's utility, but rather the option's utility depends on the option's consequences. Hence, I interpret them as holding that a decision principle that takes the risk an option generates as a consequence of the option should explain the consequence's effect on the option's utility. This demand is too strong. A good decision principle, such as the principle to maximize utility, may take warmth as a consequence of basking in the sun without explaining warmth's effect on the utility of basking in the sun. The principle of utility maximization uses the utilities of options but need not explain the utilities of options. The principle may recommend an option
Hacking (2001: 99−100) has reservations about the substantive principle to maximize expected utility. He considers a case in which Garvin wants to buy lottery tickets offered at a fair price, whereas Elena prefers keeping her money to buying the tickets. The acts of buying and of not buying have the same expected value, he supposes, but rationality does not require indifference between the two acts. Hacking treats maximization of expected value as a rule of thumb. He rejects maximization of expected value as a standard of rationality and calls the rejected view expected-value-rule dogmatism. He notes that a defender of maximization may accommodate the case of Garvin and Elena by taking risk as a consequence of buying lottery tickets and classifying Garvin as attracted to risk and Elena as averse to risk. Hacking calls this defense artificial because it puts risk and money in different dimensions. However, the utility of an act's consequences routinely evaluates its consequences along multiple dimensions, that is, considering multiple attributes. The utility of traveling to a job interview depends on the attractiveness of the job prospect and the unattractiveness of traveling. An act's utility is an all-things-considered assessment of the act. By letting an option's consequences include the option's risk, this chapter's version of the principle of expected-utility maximization addresses Hacking's (2001) objection that the principle fails to accommodate individual differences in attitude to risk. An agent's assessment of an option's consequences uses the agent's personal attitude to risk.
Contrary to Hacking's reservations, also voiced by Bradley and Stefánsson (2017: 495), putting risk in an option's consequences does not trivialize the principle by adding an uncontrolled fudge factor. To affect an option's expected utility, consequences must matter to the agent, as Dreier (1996) notes. Thus, an option's risk affects an option's expected utility only if the agent cares about the risk. The agent's attitude to the risk controls the risk's effect on an option's expected utility. Suppose that an agent prefers a sure thing to a risky option. One may say that this happens because he dislikes risk. Counting risk as an option's consequence does not make the expected-utility principle trivial because the agent has to care about risk for it to affect an option's expected utility for him. Risk counts only if it is not a consequence drummed up to prevent violation of the expected-utility principle.

of maximum utility without explaining how an option's consequences generate an option's utility. Although not required for a justification of the decision principle, an explanation of an option's utility enhances a decision theory to which the principle belongs, and for this purpose Chapter 3 explains how an agent's attitude to risk affects the utility of an option for the agent.
Perhaps incomparability is the source of Hacking's misgivings. For some people, some dimensions of evaluation may be incomparable. Some may not have any way of comparing health and wisdom. Maximizing expected utility makes sense only when options' possible outcomes have utilities that represent all-things-considered degrees of desire. This chapter applies the substantive principle of expected-utility maximization only to standard decision problems in which options' possible outcomes have such utilities.9
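As a toy illustration of the preceding point, and assuming the comparability that the substantive principle requires, the following sketch, with invented numbers, computes an all-things-considered utility across several dimensions, risk among them; risk drops out exactly when the agent does not care about it.

```python
# A toy all-things-considered utility for traveling to a job interview.
# The attribute values and weights are invented; a weight records how
# much the agent cares about a dimension, so a zero weight for risk
# makes risk irrelevant rather than an uncontrolled fudge factor.

attributes = {"job prospect": 8.0, "travel": -3.0, "risk": -1.5}
care = {"job prospect": 1.0, "travel": 1.0, "risk": 1.0}

utility = sum(care[a] * v for a, v in attributes.items())
print(utility)  # 3.5, an all-things-considered assessment of the act
```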
5.5. Resources for Solving Decision Problems

According to common terminology, an agent makes a decision under risk when she knows all relevant physical probabilities and otherwise makes a decision under uncertainty. An agent deciding under risk knows, in particular, the physical probabilities of all the possible outcomes of all options (for some partition of possible outcomes for each option).10 For an agent making a decision under risk, the usual decision principle is to maximize expected utility, computed using subjective probabilities that are equal to any known physical probabilities, according to a principle of direct inference such as Lewis's ([1980] 1986) Principal Principle. An agent making a decision under uncertainty, even if ignorant of physical probabilities of options' possible outcomes, may nonetheless assign subjective probabilities to the options' possible outcomes. Assuming that these subjective probabilities meet all constraints that rationality imposes, they are evidential probabilities in the sense of Section 2.5. For an agent making a decision under uncertainty, given subjective probabilities and utilities for options' possible outcomes, the usual decision principle is to maximize (subjective) expected utility.
Epstein (1999) formulates differently the distinction between the two types of decision problem. He distinguishes decision under risk and decision under uncertainty according to the aptness of a probabilistic representation of an agent's information. If a probability assignment represents the agent's information, the decision is under risk; and if the agent's information is too imprecise, vague, or ambiguous for a probability assignment

9 The Saint Petersburg paradox presents a nonstandard decision problem in which an option has an infinite number of possible outcomes, each with a distinct utility. Weirich (1984a) argues that taking risk as a gamble's consequence makes progress toward a resolution of the paradox.
10 Wakker (2010) addresses both decision under risk and under uncertainty.
to represent it, the decision is under uncertainty. This chapter, following Epstein, distinguishes decisions with and without quantitative (evidential) probability and utility assignments to possible outcomes of options. For the former, it uses maximization of expected utility, and for the latter, it uses a generalization of maximization of expected utility, or, in special cases, other compatible decision principles.
Comparisons of options resolve a decision problem, and calculating each option's utility yields comparisons of options. Theorists advance several principles for comparing options in nonquantitative cases, where probability or utility assignments are missing. A well-known principle, the maximin principle, responds to the evidential risk arising from the absence of probability or utility assignments. It counsels adopting an option whose worst possible outcome is at least as good as the worst possible outcome of any other option. Compliance with this principle displays excessive aversion to risk, however.
The well-known principle of dominance also applies to cases without quantitative probability and utility assignments. One version requires an option that is not dominated by another option. It takes one option to dominate another, with respect to a partition of states of the world, if the first option is better than the second option in each state, or if the first option is just as good as the second option in all states and is better than the second option in some states. Also, the version of the principle adopts the restriction that an option's realization does not causally influence any state in the partition of states. The principle of dominance formulates a necessary but not a sufficient condition for an option's rationality because in some decision problems some options not dominated are not choice-worthy.
Comparison of options using (first-order) stochastic dominance generalizes comparisons using (state-wise) dominance. An option stochastically dominates another option if and only if for any amount of utility that the two options may yield, the probability of obtaining at least the amount is at least as great with the first option as with the second option, and for some amount the probability of obtaining at least the amount is greater with the first option than with the second option. Comparing options using stochastic dominance requires probability and utility comparisons, but not quantitative probability and utility assignments to possible outcomes of options. To illustrate stochastic dominance, consider two gambles with utility outcomes that depend on a coin toss, as in Table 5.2. Gamble A does not (state-wise) dominate gamble B, but it stochastically dominates gamble B. For a utility of at least 1, the probability is the same with the two gambles,
Table 5.2 Stochastic Dominance

            Heads   Tails
  Gamble A    1       3
  Gamble B    2       1
namely, 1. For a utility of at least 2, the probability is the same with the two gambles, namely, ½. However, for a utility of at least 3, the probability is ½ with gamble A and 0 with gamble B. The decision principle of stochastic dominance prohibits adopting a stochastically dominated option and so, in a decision problem offering only gamble A and gamble B, prohibits choosing gamble B, leaving gamble A as the rational choice.
In quantitative cases, the principle of expected utility justifies comparisons of options using stochastic dominance. It justifies permutations of factors in the sum that generates an option's expected utility and thus its utility. Any way of rearranging the terms of the sum also yields the option's expected utility. To compare two options, one may rearrange the probability-utility products in their expected-utility sums according to the utilities in the products, going from highest to lowest utility. One sum is bigger than another if, moving from highest to lowest utility, the cumulative probability function for the first option is always at least as great as, and sometimes exceeds, the cumulative probability function for the second option.
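A quick computational check of this comparison is sketched below, assuming options given as lists of (probability, utility) pairs; the gambles are Table 5.2's, and for finite options it suffices to test the utility levels the options actually yield.

```python
# A minimal check of first-order stochastic dominance. Only probability
# and utility comparisons matter for the principle, but numbers keep
# the sketch short.

def prob_at_least(option, level):
    """Probability that the option yields at least the given utility."""
    return sum(p for p, u in option if u >= level)

def stochastically_dominates(a, b):
    """True iff option a first-order stochastically dominates option b."""
    levels = {u for _, u in a} | {u for _, u in b}
    at_least_as_good = all(prob_at_least(a, l) >= prob_at_least(b, l)
                           for l in levels)
    strictly_better = any(prob_at_least(a, l) > prob_at_least(b, l)
                          for l in levels)
    return at_least_as_good and strictly_better

gamble_a = [(0.5, 1), (0.5, 3)]  # heads -> 1, tails -> 3
gamble_b = [(0.5, 2), (0.5, 1)]  # heads -> 2, tails -> 1

print(stochastically_dominates(gamble_a, gamble_b))  # True
print(stochastically_dominates(gamble_b, gamble_a))  # False
```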
Relations of compositionality and separability may also yield comparisons of options without calculations of options' utilities. Using these relations may resolve decision problems lacking complete quantitative probability and utility assignments for possible outcomes of options, and may resolve decision problems efficiently when calculations with such assignments have a cognitive cost. In some decision problems, separating an option's risk from the option's other consequences simplifies deliberations by focusing attention on pivotal considerations. This simplification adds conviction to deliberations even if it does not make them more efficient, because of the cognitive costs of identifying the pivotal considerations.
Chapter 4 establishes that the utility of an option is a function of the utility of the option ignoring the option's risk and the intrinsic utility of the option's risk. This is mean-risk compositionality, and it justifies interchange of equivalent parts in the mean-risk division of an option's utility. If one option comes from another by replacing part of the option with an equivalent part, then the options are equivalent. Suppose that two options offer equivalent exposures to chance and are equivalent ignoring exposures to chance. Then a rational ideal agent is indifferent between the options, assuming mean-risk compositionality. This observation resolves a decision problem in which only the two options are contenders. Both options are rational.
Various types of separability of components of options yield decisions in some cases without using probability and utility assignments. When two options are equivalent in one component, their ranking in the other component yields their ranking, given that the first component is equivalence-separable from the second. Suppose that one option has more exposure to chance, but the two options are equivalent ignoring exposure to chance. An agent's intrinsic aversion to exposure to chance then requires a preference for the second option, given the equivalence-separability of the option's risk from the option's independent consequences. This equivalence-separability, which Section 4.7 supports, yields preferences between options given preferences concerning their risks when their other consequences are equivalent. When two options are equivalent ignoring exposure to chance, their ranking in exposure to chance yields their ranking. Equivalence-separability yields a principle of dominance for decisions: if two options are alike in one of two equivalence-separable components, and the first is better than the second in the other component, then the first is better.
Compositionality and separability thus yield comparisons of options without computing utilities of options and may suffice for choices in some cases. Compositionality generalized for nonquantitative cases, in which an agent's attitude toward an option's risk is imprecise and lacks a quantitative intrinsic utility, suffices for solving decision problems with options related by substitution of equivalent risks. Separability solves additional decision problems. However, a general solution of a standard decision problem, including problems with options not equivalent in either component, needs additivity, the relation of utilities that yields compositionality and separability. The expected-utility principle evaluates an option assuming such additivity.
The principle of expected-utility maximization using evidential probabilities covers only cases in which evidence is rich enough to settle quantitative evidential probabilities. When evidence is sparse, evidential probabilities do not exist. A set of probability assignments then represents a rational ideal agent's doxastic attitudes. Analogously, a set of utility assignments meeting all constraints that rationality imposes (in light of the agent's experiences)
represents a rational ideal agent's conative attitudes, if the agent has only superficial experiences concerning the options' possible outcomes (and information does not substitute for lack of experiences) so that precise utilities do not exist. If an agent does not assign precise evidential probabilities and precise utilities to the possible outcomes of options, because of insufficient evidence and experience, a set of evidential probability assignments paired with utility assignments represents her state of mind. In such nonquantitative cases, a pair of a probability assignment and a utility assignment for options' possible outcomes yields an assignment of expected utilities to options. A set of such pairs represents the agent's evaluation of the options according to the agent's sparse evidence and superficial experience concerning the options' possible outcomes.11
A generalization of expected-utility maximization handles cases with imprecise probabilities and utilities for an ideal agent; in a standard decision problem, but without quantitative probability and utility assignments to all possible outcomes of options, it requires maximizing according to some evidentially and experientially admissible pair of a probability assignment and a utility assignment to possible outcomes of options. As this section argues, an option is rational if and only if it maximizes expected utility given a pair of probability and utility assignments in the set that represents the agent's doxastic and conative attitudes to the possible outcomes of options. This is a permissive extension of the principle of expected-utility maximization; in a standard decision problem, it allows any option to which the agent does not prefer another option. It states necessary and sufficient conditions for an option's rationality, given background idealizations about the agent. The permissive extension takes an option's outcome to include every consequence an agent cares about, including the option's risk. It assumes that all reasons exert their influence through constraints on probability and utility assignments in the set of admissible paired probability and utility assignments. The rule is permissive, but no more permissive than the constraints that rationality imposes on probability and utility assignments allow.
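Here is a minimal sketch of the permissive extension, with a hypothetical two-member set of admissible probability-utility pairs standing in for sparse evidence and experience; an option passes if it maximizes expected utility under some admissible pair.

```python
# A sketch of the permissive extension. The states, options, and the
# admissible (probability, utility) pairs are invented for illustration.

def eu(option, p, u):
    """Expected utility of an option over a partition of states."""
    return sum(p[s] * u[(option, s)] for s in p)

options = ["act", "refrain"]

# Two admissible pairings of a probability and a utility assignment.
admissible = [
    ({"s1": 0.6, "s2": 0.4},
     {("act", "s1"): 5, ("act", "s2"): -2,
      ("refrain", "s1"): 1, ("refrain", "s2"): 1}),
    ({"s1": 0.3, "s2": 0.7},
     {("act", "s1"): 4, ("act", "s2"): -1,
      ("refrain", "s1"): 1, ("refrain", "s2"): 1}),
]

def rational(option):
    """Rational iff the option maximizes EU under SOME admissible pair."""
    return any(all(eu(option, p, u) >= eu(o, p, u) for o in options)
               for p, u in admissible)

print({o: rational(o) for o in options})  # both options permitted here
```

With a singleton admissible set, the test collapses into ordinary expected-utility maximization, matching the agreement noted below.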
11 Walley (1991); Augustin, Coolen, Cooman, and Troffaes (2014); and Bradley (2017) treat imprecise probabilities and utilities. Cubitt, Navarro-Martinez, and Starmer (2015) study imprecise preferences, which may emerge from imprecise probabilities and utilities, and conclude that such preferences have a place in psychology’s account of an agent’s mental states.
The permissive extension of expected-utility maximization treats decisions under risk and decisions under uncertainty. The evidence settles a set of evidential probability functions. Evidence and experience together yield a set of utility functions, each matched with a probability function from the set that evidence yields. The permissive extension takes account of all rationality's constraints on probability and utility assignments to options' possible outcomes. It agrees with the principle to maximize expected utility when probability and utility assignments exist (and the set of admissible pairs of a probability assignment and a utility assignment has just a single pair).
Objectivity in probability assignments differs from objectivity in choice. Choice following the permissive extension is arbitrary and subjective because it allows maximizing expected utility according to any pair of probability and utility assignments in the set representing evidence and experience. However, the set of probability functions representing an agent's evidence is the same for all agents with the same total evidence. The permissive extension, a standard of rationality for a choice, assumes an ideal agent who is rational in attitudes and everything else except possibly her choice.
The permissive extension agrees with principles of choice using dominance, stochastic dominance, compositionality, and separability because the utility functions paired with probability functions are constrained by these relations. For example, suppose that paired with a probability function is also an intrinsic-utility function over options' risks and a comprehensive-utility function over options, once ignoring and once not ignoring their risks. Then deciding according to compositionality, given a division of an option's consequences into its risk and its independent consequences, agrees with deciding according to the permissive extension because all the admissible pairings of a probability function with an intrinsic-utility function and a comprehensive-utility function have utility functions according to which an option's utility is a function of the intrinsic utility of the option's risk and the comprehensive utility of the option ignoring the option's risk. Consequently, the comprehensive-utility function over options assigns the same comprehensive utility to two options if it assigns the same comprehensive utility to the options ignoring their risks, and the intrinsic-utility function over options' risks assigns the same intrinsic utilities to the options' risks.12

12 Weirich (2015b) defends the permissive extension, for an ideal agent who can predict her choices, against Elga's (2010) objections concerning sequences of choices. Chapter 6, which treats sequences of choices, reviews the defense. Joyce (2010) and Hart and Titelbaum (2015) defend imprecise probabilities and utilities against White's (2010) objections involving a phenomenon known as dilation that may occur upon discovering correlations between events. Paul (2014) considers
5.6. Risk Weights

This chapter argues that, in a standard decision problem, a risky option is rational if and only if it maximizes expected utility. Expected-utility calculations accommodate attitudes toward an option's exposure to chance, granting that it is a consequence of the option and part of each of the option's possible outcomes. Some theorists advance variants of expected-utility methods for decision problems with risky options. This section reviews cumulative prospect theory and risk-weighted expected-utility theory but does not follow their suggestion to handle an option's risk using weights for probabilities instead of treating the option's risk as a consequence of the option.
Cumulative prospect theory and risk-weighted expected-utility theory, to accommodate attitudes to risks, modify the formula for an option's expected utility by introducing weights for probabilities of possible outcomes and define the weights using an agent's preferences among options. They assume that an agent's set of preferences meets certain conditions that ground representation theorems showing how to extract probability and utility assignments, and weights for probabilities, from an agent's preferences among options. Their primary goal is to use probabilities, utilities, and weights for probabilities to represent preferences among options rather than to explain the rationality of these preferences; however, I consider whether they can be recast to occupy this explanatory role.
Psychology, as an empirical rather than a normative science, treats the decisions people make about risks rather than the decisions they ought to make. Prospect theory, a psychological account of human choice proposed by Kahneman and Tversky (1979), modifies a version of expected-utility theory to account for systematic deviations from its predictions. Tversky and Kahneman (1992) advance cumulative prospect theory as a refinement of prospect theory, and Wakker (2010: 3) endorses it as a predictive theory
decision problems in which an agent's choice may lead to a transformative experience, that is, an experience that is novel and may have profound implications for the agent's life, including the agent's values, so that for possible outcomes of options, the agent lacks precise evidential probability assignments because relevant evidence is sparse and also lacks precise utility assignments because relevant experience is sparse. Paul concludes that because of lack of information and experience, a rational decision is impossible in these cases, perhaps because it cannot issue from an exercise of well-established quantitative procedures. However, in the ordinary, normative sense, a decision may be rational without resting on precise probability and utility assignments for the possible outcomes of options. This happens if the agent is not able to gather information about, and acquire experience pertinent to, possible outcomes sufficient for probability and utility assignments and if the agent's decision complies with the permissive extension of expected-utility maximization.
of choice with a psychologically realistic foundation. The theory, to handle risks, attaches weights to probability assignments that reduce small probabilities and augment large probabilities. It also adopts for amounts of a commodity an S-shaped utility curve, with as center a reference point separating gains and losses, so that in standard technical senses, an agent has an aversion to risk in the domain of gains and has an attraction to risk in the domain of losses. It calculates modified expected utilities of options that, it claims, represent preferences and choices. Although cumulative prospect theory has empirical support and so fits the behavior of people, who often behave rationally, it is not a satisfactory normative theory of preference or choice. No normative justification supports its distinctive features, such as its weights for probability assignments or its S-shaped utility curve. Traditional arguments for the rationality of maximizing expected utility count against giving it a normative role.
Buchak (2013) also proposes adding risk weights to the expected-utility formula, following methods of Quiggin (1982). She formulates risk-weighted expected-utility (REU) theory, a type of cumulative prospect theory that derives risk weights for probabilities from the ranking of possible outcomes of options.13 The REU theory makes attitudes to risk independent of attitudes to amounts of a commodity, in contrast with the view that the shape of a utility curve expresses an attitude to risk. A representation theorem shows how to extract an agent's risk weights, along with probability and utility assignments, from preferences among options. The risk weights are simple if the partition of states for an option has just two states; they are more complex if the partition has multiple states, because the weights depend on the rank of an option's possible outcome in a ranking of its possible outcomes. Having a risk function, in addition to a probability function and a utility function, facilitates representation of preferences among options using only monetary consequences of options.
Buchak (2013: Chap. 2) defines the risk-weighted expected utility of a gamble this way. Suppose that E_i is a possible event that yields the outcome x_i of a gamble. For the gamble g = {x_1, E_1; x_2, E_2; . . . ; x_n, E_n}, where u(x_1) ≤ . . . ≤ u(x_n), REU(g) = ∑_(j = 1 to n) r(∑_(i = j to n) p(E_i)) (u(x_j) − u(x_(j−1))), taking u(x_0) = 0, where r is the agent's risk function, adhering to the constraints r(0) = 0, r(1) = 1, r is nondecreasing, and 0 ≤ r(p) ≤ 1 for all p. If the gamble g has just two possible outcomes, x_1 if E_1 and x_2 if E_2, then the risk-weighted expected utility of the gamble is its low
13 Buchak (2014, 2017) briefly presents the core of her theory.
value plus the interval between the low value and the high value, weighted by the risk function of the probability of getting the high value. That is, if g = {x_1 if E_1, x_2 if E_2}, where u(x_1) ≤ u(x_2), then REU(g) = u(x_1) + r(p(E_2)) (u(x_2) − u(x_1)).
The REU theory proposes, as a norm for preferences among options, that the preferences be “as if” following risk-weighted expected utilities. If the preferences are “as if” following expected utilities, then they are “as if” following risk-weighted expected utilities with risk weights equal to one. Hence, the REU theory's norm is weaker than the norm of representational expected-utility theory if both theories have the same noncomprehensive interpretation of possible outcomes, say, as amounts of money.14
The REU theory's representation theorem shows that if preferences among gambles, with consequences taken narrowly, meet certain conditions, they may be represented as maximizing risk-weighted expected utility according to some probability assignment, some utility assignment (unique up to positive affine transformations), and some risk-weighting function for probabilities. However, the preferences among gambles, with consequences taken comprehensively to include an option's risk, if rational, also may be represented as maximizing expected utility according to some probability assignment and some utility assignment (unique up to positive affine transformations). Because rational preferences among options follow expected utilities computed using comprehensive outcomes, a representation of the preferences as following expected utilities exists. Also, because probability assignments that represent strengths of belief are unique and because utility assignments that represent strengths of desire are unique up to the usual scale transformations, a representation of preferences as following expected utilities is unique up to the usual scale transformations. Accommodating choices in cases such as Allais's paradox does not require weakening expected-utility theory's constraints if possible outcomes are comprehensive, as Weirich (1986) shows.15

14 The REU theory must limit its recognition of an option's consequences. Suppose that an agent has a choice between $3,000 and a 2/3 chance of $4,000. Even taking account of aversion to risk, suppose it is rational to prefer the chance. If the consequences of the chance include regret experienced if the chance produces nothing, then the REU theory may incorrectly conclude that taking the chance is irrational because the regret, as well as a risk weight, reduces its evaluation of the chance even though the regret and the chance's risk are not independent considerations. Because my expected-utility theory is substantive, and not representational, and takes an option's possible outcomes comprehensively, the REU theory does not generate it by setting risk weights equal to one.
15 Weirich (1986) characterizes the preferences that constitute Allais's paradox, which Buchak takes to follow risk-weighted expected utilities, as following expected utilities computed using
Expected-utility theory, as I present it, explains the rationality of preferences among options, whereas the REU theory proposes only a weak representational norm for preferences. The REU theory's representation of preferences among options does not introduce probability, utility, and risk functions independently of preferences among options and thereby introduce, independently of its requirements for preferences among options, the norm that preferences follow risk-weighted expected utilities. That preferences follow risk-weighted expected utilities holds by definition in the REU theory's representation of preferences among options; hence, risk-weighted expected utilities do not explain the rationality of preferences among options. To explain the rationality of preferences, the REU theory must claim that, given idealizations about agents and decision problems, rationality requires an agent to maximize risk-weighted expected utility. This claim is plausible only if risk weights have a suitable psychological grounding, their derivation from preferences among options reveals them, and the REU theory's formula rationally evaluates an option.
The argument constructed for expected-utility theory counts against the REU theory, insofar as it conflicts with expected-utility theory. However, let us independently assess the REU theory. To evaluate the REU theory's formula, let us begin by making its context more precise. It uses risk weights to modify an expected-utility formula, which, for agreement with causal decision theory's formula, assumes that acts do not influence states. It restricts itself to cases that meet this assumption. Also, the REU theory's formula is not partition invariant; given grand worlds instead of small worlds, it reverses the preferences that Allais's paradox exhibits, as Thoma and Weisberg (2017) show. For consistency, the REU theory's formula must restrict the partitions of states that it uses. Furthermore, if risk weights are to express an agent's attitude to risk, the agent's attitudes must meet certain constraints. The agent cannot be averse to the risk of losing money by theft but not by playing poker, or be averse to risks arising from the weakness of evidence for probability assignments, as agents are in Ellsberg's paradox (which Section 6.1.2 reviews). Because

comprehensive outcomes that include risk. Pettigrew (2016) presents a general method of representing preferences among options that follow risk-weighted expected utilities as preferences that follow expected utilities. The representation takes options' outcomes to be more comprehensive than the REU formula takes them to be. I do not adopt Pettigrew's general method because if preferences following risk-weighted expected utilities are sometimes not rational, then their representation as following expected utilities misrepresents them, granting that preferences following expected utilities are rational.
rationality does not impose these constraints, they are restrictions on the theory's applications. The restrictions mentioned all reduce the REU theory's generality.16
The traditional expected-utility formula has good psychological credentials. The generation of strengths of belief by evidence, the generation of strengths of desire by basic intrinsic attitudes, and the introspective accessibility of strengths of belief and strengths of desire support their psychological reality and therefore the explanatory power of expected utilities. Suppose that the REU theory claims that an agent's attitude to risk expresses itself as the formula's risk weights for probabilities and that the risk weights are not mere fudge factors that facilitate representation of preferences but explain the preferences. Does the REU theory's formula, so interpreted, have the psychological grounding of the traditional expected-utility formula? In contrast with degrees of belief and degrees of desire, an agent does not have introspective access to her risk weights. Furthermore, the REU theory does not explain how an agent's attitude to risk generates risk weights. The REU theory does not introduce risk itself as an object toward which an agent may have an aversion that is accessible by introspection, that expresses itself through avoidance of the aversion's object, and that produces risk weights to guide formation of preferences among risky options. Consequently, the REU theory's formula does not produce an explanation of an agent's preferences among options superior to the explanation that uses the traditional expected-utility formula.
Moreover, the REU theory's formula for evaluating an option reaches mistaken conclusions about rational preferences (and choices) in some cases.17 Suppose that an agent, for whom the utility of money is linear and for whom the probability of heads on a toss of a fair coin is ½, is indifferent between $0 and a gamble that pays $1 if heads and loses $1 if tails when a fair coin is flipped, and also is indifferent between losing $1 and a gamble that pays $1,000 if heads and loses $1,000 if tails when a fair coin is flipped. The first comparison entails that the agent's risk weight for a probability of 50% is ½, and the second comparison entails that it differs from ½. So the REU theory judges that the agent is irrational, as no risk weights allow the comparisons

16 Choquet expected-utility theory, as introduced by Schmeidler (1989) and examined by Chateauneuf and Cohen (2013), is similar to rank-dependent utility theory and to the REU theory, and faces similar limitations.
17 Safra and Segal (2008) show that using risk weights to represent aversion to risk makes small aversions to small-stakes gambles imply enormous aversions to moderate-stake gambles.
to fit the REU theory's formula for evaluating options. However, rationality does not prohibit the agent's neglecting risks when stakes are small and being averse to risks when stakes are large, as Armendt (2014) observes.
Next, suppose that an agent cares only about money and not about risk in the sense of exposure to chance and yet is indifferent between losing $1 and a gamble that pays $1,000 if heads and loses $1,000 if tails when a fair coin is flipped. This agent's preferences should follow expected monetary values but do not in this case. The agent is irrational (given his attitudes) but rational according to the REU theory. The preference is embeddable in a set of preferences that has an REU representation with risk weights different from 1, and so is rational according to the theory. The theory's representation of the preferences attributes to the agent a constructed aversion to risk despite the agent's not having an aversion to risk in the sense of exposure to chance. Its representational norm fails to judge the agent irrational because the agent appears to have, without having, an aversion to risk in the sense of exposure to chance. The REU theory has no way of obtaining an agent's risk weights other than deriving them from preferences and so cannot claim that an agent uses risk weights irrationally in forming preferences.
Section 1.2.2 presents methodological advantages of counting an option's risk among its consequences. The REU theory adopts the general view that an option's risk, in the sense of its exposure to chance, is the product of a relation among the option's possible outcomes and is not itself included in the options' possible outcomes. The general view has two methodological disadvantages. It is not psychologically realistic for an agent averse to risk to exclude an option's risk from an appraisal of the option's possible outcomes because plainly its risk would obtain if the option were realized. Moreover, excluding the option's risk complicates calculation of the option's utility because the calculation then requires adjustments for risk in addition to the probabilities and utilities of possible outcomes. People may find it easier to evaluate the options in a decision problem if they simplify possible outcomes by excluding risk. Omitting some possible consequences may lower the cognitive cost of evaluating options by facilitating an assignment of utilities to possible outcomes. However, in this chapter's model, agents are cognitively ideal and do not benefit from ignoring some possible consequences. Moreover, the model's decision principles accommodate imprecise utility assignments to possible comprehensive outcomes. Simplifying possible outcomes generates no computational advantage in the model.
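For the record, here is a small sketch of the two-outcome REU formula and of the coin-toss argument above; utility is linear in money, and the values of the risk function are recovered from the stated indifferences rather than assumed.

```python
# Buchak-style risk-weighted expected utility for a two-outcome gamble,
# used to check the coin-toss argument: no single value of r(1/2) fits
# both stated indifferences.

def reu(low, high, p_high, r):
    """REU of a two-outcome gamble: the low value plus the interval to
    the high value, weighted by the risk function of p(high)."""
    return low + r(p_high) * (high - low)

# Indifference between $0 and the +/- $1 gamble forces r(1/2) = 1/2.
r_small = (0 - (-1)) / (1 - (-1))            # 0.5
# Indifference between -$1 and the +/- $1,000 gamble forces another value.
r_large = (-1 - (-1000)) / (1000 - (-1000))  # 0.4995

assert reu(-1, 1, 0.5, lambda p: r_small) == 0
assert abs(reu(-1000, 1000, 0.5, lambda p: r_large) - (-1)) < 1e-9
print(r_small, r_large)  # 0.5 and 0.4995: no single risk weight fits both
```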
5.7. Summary

This chapter presents a standard of evaluation for decisions about risks, namely, maximization of expected utility. It justifies the standard by pointing out that, in typical decision problems, complying with the standard amounts to following preferences among options. The standard supports for rational ideal agents the decision procedure of computing the utilities of possible outcomes of options and then calculating and comparing the expected utilities of options to identify options that maximize expected utility. This decision procedure moves from utilities of possible outcomes to preferences among options and does not extract utilities of possible outcomes from preferences among options. The standard explains the rationality of an agent's choice in a substantive way using the agent's attitudes to the possible outcomes of options and does not reduce the choice's rationality to consistency with other choices.
6
Combinations of Acts

Risky acts come in groups as well as singly. The risks from several acts form a combination of risks. A full account of risk management treats combinations of risky acts. A combination's risk depends on the interaction of the risks the acts create, and the interaction of the risks depends on their type. Risks in the sense of chances of bad events interact differently than do risks in the sense of exposures to chance. This chapter describes relations between evaluations of the risks in a combination and an evaluation of the combination's risk, and it formulates standards of rationality for a combination of acts that produce risks. Its standard for a combination of acts performed sequentially differs from its standard for a combination of simultaneous acts because, as it argues, multichronic standards derive from synchronic standards.
6.1. Combinations of Risks

I treat separately combinations of risks in the sense of chances of bad events and combinations of risks in the sense of exposures to chance. Two issues are whether risks are additive, assuming a measure of their sizes, and whether intrinsic utilities of risks are additive. I do not claim that the risks in a combination add up to the combination's risk, but rather that, under some conditions, the intrinsic utilities of the risks in a combination add up to the intrinsic utility of the combination's risk.
6.1.1. Chances of Bad Events

For a partition of possible outcomes of a single act, the expected-utility principle requires that risks, in the sense of chances of bad outcomes, have intrinsic utilities that sum to the intrinsic utility of the combination of risks, as
Section 4.4 explains. Rationality does not require similar additivity for the intrinsic utilities of risks of this type that a combination of acts generates.

Compare (1) betting $1 on heads on one coin toss and betting $1 on tails on another coin toss to (2) betting $1 on heads and betting $1 on tails on the same coin toss. The first pair of bets yields outcomes ranging from a $2 loss to a $2 gain. The second pair of bets with certainty yields no loss or gain. Although in isolation each single bet of the two pairs is as risky as any other, the first pair creates more than a single bet's risk, and the second pair eliminates risk by combining its two bets' risks. The combined risk of the second pair is less than the risk from each component. Hence, the risks are not additive. Moreover, although the elements of the two pairs generate risks with the same intrinsic utilities, the pairs' risks have different intrinsic utilities. Adding a risk an act generates to risks other acts generate sometimes increases and sometimes decreases total risk. One act's risk, by hedging another act's risk, may reduce aggregate risk. Consequently, for a combination of acts, the intrinsic utilities of the acts' risks need not sum to the intrinsic utility of the combination's risk.

Simultaneous acts form a combination of acts and so do sequences of acts. A combination has possible outcomes, and the expected-utility principle may use the possible outcomes to evaluate the combination. The principle adds the intrinsic utilities of chances for possible outcomes to obtain the combination's utility. The chances of bad possible outcomes are risks and have additive intrinsic utilities. However, these risks involve possible outcomes of the combination and not possible outcomes of an act in the combination. Take, for instance, a sequence of acts. Although the intrinsic utilities of the risks that the acts in a sequence generate are not additive, the sequence has exclusive and exhaustive possible outcomes whose probability-utility products generate the sequence's expected utility. The intrinsic utilities of the chances of the sequence's possible outcomes sum to the sequence's expected utility.

Figure 6.1 uses a tree to depict chances of good and bad outcomes of a sequence of choices, B1 and B2, about bets on the result of a single coin toss, either H, a dollar bet on heads, or T, a dollar bet on tails. A terminal node displays the expected utility of a sequence of bets. The sequence's expected utility equals the sum of the intrinsic utilities of the sequence's prospects and risks. If B1 yields H and B2 yields H, then the agent
[Figure 6.1 Sequences of Bets. A decision tree: choice B1 (H or T) followed by choice B2 (H or T), both dollar bets on one coin toss. Terminal expected utilities: H then H, ½ (2) + ½ (–2) = 0; H then T, 1 (0) = 0; T then H, 1 (0) = 0; T then T, ½ (–2) + ½ (2) = 0.]
makes in sequence two dollar-bets on heads on the same coin toss. The result is a prospect of gaining $2 and a risk of losing $2, each with a probability of ½. Assuming that an amount of money gained equals the utility of the gain, and similarly for losses, the utility of the sequence of bets is ½ (2) + ½ (–2) = 0, as at the top terminal node. If B1 yields H and B2 yields T, then the agent makes a dollar bet on heads and a dollar bet on tails for the same coin toss. The result is a certainty of no net gain or loss, indicated by the probability-utility product 1 (0) at the terminal node second from the top.

The tree for the possible sequences of bets adds, at a terminal node for a sequence, the intrinsic utilities of chances of the sequence's possible outcomes to obtain the sequence's expected utility. The expected-utility formula for a sequence of acts adds the intrinsic utilities of risks in the sense of chances of bad outcomes of the sequence. The risks with additive intrinsic utilities are chances of comprehensive outcomes of the sequence rather than risks that the acts taken in isolation create.
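A minimal sketch, adopting the text's assumption that utilities equal dollar amounts, reproduces the terminal-node calculations of Figure 6.1 by summing probability-utility products over each sequence's comprehensive outcomes.

```python
# Each two-bet sequence on one fair coin toss is evaluated from its
# comprehensive outcomes; payoffs are net dollars, utilities equal dollars.
from itertools import product

def payoff(bets, toss):
    """Net dollars from two $1 bets (each 'H' or 'T') on a single toss."""
    return sum(1 if bet == toss else -1 for bet in bets)

for bets in product("HT", repeat=2):                 # B1's bet, then B2's bet
    eu = sum(0.5 * payoff(bets, toss) for toss in "HT")
    print("".join(bets), "expected utility:", eu)
# HH: 0.5(2) + 0.5(-2) = 0; HT and TH: 1(0) = 0; TT: 0.5(-2) + 0.5(2) = 0
```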
6.1.2. Exposures to Chance

Evaluation of a sequence of choices includes an evaluation of the sequence's exposure to chance. The expected-utility tree for a sequence using comprehensive possible outcomes yields an evidential-probability distribution of utilities of possible outcomes of the sequence, which, along with the epistemic support for the distribution, yields the sequence's exposure to chance. I do not advance a general formula going from a distribution for a sequence to the size of the sequence's risk in the sense of its exposure to chance, but assume that, in special cases, variance is an adequate measure.

As Section 4.7 argues, for a rational ideal agent, an act's risk in the sense of its exposure to chance is separable from the act's independent consequences. Separability for a range of composites is relative to a division of the composites into parts, and a method of comparing components and composites. Consider whether an act's risk in the sense of its exposure to chance is separable from other acts' exposures to chance in a sequence of acts emerging from a sequence of decision problems. Assume a division of sequences into acts, a utility ranking of sequences, and an intrinsic-utility ranking of the acts' exposures to chance, for the range of sequences of acts consisting of exactly one act from each decision problem, taken in the order of the decision problems. Interaction of acts' exposures to chance, as in hedging, may prevent separability of an act's exposure to chance. If the acts' exposures to chance are not separable, their intrinsic utilities need not sum to the intrinsic utility of the sequence's exposure to chance.

For example, consider a pair of acts performed in sequence. Taking risks as exposures to chance, the intrinsic utility of the first act's risk added to the intrinsic utility of the second act's risk does not in general equal the intrinsic utility of the risk of the pair of acts. An act's exposure to chance depends on factors such as the variance of the act's probability distribution of the utilities of its possible outcomes, as Chapter 1 explains, and the variances of the distributions for each act need not add up to the variance for the pair's distribution. Hence, the acts' risks, taken as their exposures to chance, need not add and need not have intrinsic utilities that add. Take betting a dollar on heads on a coin toss and then betting a dollar on tails on the same coin toss. Although the distribution for each bet has a positive variance, the distribution for the combination of bets has zero variance. Neither the bets' risks nor their intrinsic utilities are additive.
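The coin-toss example can be checked with variance, the measure the text adopts for special cases; the following is a minimal Python sketch, with payoffs in dollars, showing that variances cancel under hedging and add only for independent bets.

```python
# Variance of the payoff distribution for single bets and pairs of $1 bets.

def variance(dist):
    """dist: list of (probability, payoff) pairs."""
    mean = sum(p * x for p, x in dist)
    return sum(p * (x - mean) ** 2 for p, x in dist)

single_bet = [(0.5, 1), (0.5, -1)]                    # $1 on heads, one toss
hedged_pair = [(1.0, 0)]                              # heads then tails, same toss
doubled_pair = [(0.5, 2), (0.5, -2)]                  # heads twice, same toss
independent_pair = [(0.25, 2), (0.5, 0), (0.25, -2)]  # bets on two different tosses

print(variance(single_bet))        # 1: positive exposure to chance
print(variance(hedged_pair))       # 0: the second bet cancels the first
print(variance(doubled_pair))      # 4: more than twice a single bet's variance
print(variance(independent_pair))  # 2: variances add only for independent risks
```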
Consider a rational ideal agent with a basic intrinsic aversion to a risk that depends only on the risk's size, and assume for technical convenience that zero exposure to chance counts as a risk. Because of proportionality requirements, intrinsic utilities of risks track the sizes of the risks. Hence, additivity holds for sizes of risks if and only if additivity holds for the intrinsic utilities of the risks. A version of Ellsberg's paradox shows that additivity fails for intrinsic utilities of risks in the sense of exposures to chance and hence fails for the risks themselves.

Suppose that an agent knows that urn A has exactly 50 black balls and 50 white balls, and that urn B has 100 balls, either black or white, although he is completely ignorant of the percentages. He considers a choice between these two options:

(1) Gaining $100 if black is drawn from urn A
(2) Gaining $100 if black is drawn from urn B

He also considers a choice between these two options:

(3) Gaining $100 if white is drawn from urn A
(4) Gaining $100 if white is drawn from urn B

The agent prefers exposures to chance arising from precise probabilities to similar exposures to chance arising from imprecise probabilities, and, in particular, has this preference for exposures to chance arising from prospects for gain. Hence, he prefers (1) to (2) and also prefers (3) to (4). Next, he considers a choice between these two options:

(5) Gaining $100 if black is drawn from urn A, and gaining $100 if white is drawn from urn A
(6) Gaining $100 if black is drawn from urn B, and gaining $100 if white is drawn from urn B

He is indifferent between (5) and (6) because both combinations yield with certainty a gain of $100. Assuming that the agent's preference-ranking of options follows the agent's intrinsic attitudes to their risks in the technical sense of their exposures to chance, the agent is less intrinsically averse to the risk of (1) than to the risk of (2) and is less intrinsically averse to the risk of (3) than to the risk of (4), but
has the same intrinsic aversion to the risk of (5) and the risk of (6). Because (5) is the combination of (1) and (3), and (6) is the combination of (2) and (4), the intrinsic utilities of the risks are not additive. Their nonadditivity shows that the sizes of the risks are not additive.

The same result about sizes of risks also follows from the assumption that the sizes of the two risks involving urn A are each less than their counterparts with urn B, without any appeal to points about intrinsic utilities. Under the assumption, the sum of the two risks involving urn A is less than the sum of the two risks involving urn B. Given additivity, the combination (5) must produce a smaller risk than does the combination (6). Because the combinations (5) and (6) produce risks of the same size, the sizes of their components' risks are not additive.1
6.2. Evaluation of a Combination of Acts An agent’s decision at a time may be simple or complex. Rationality evaluates all an agent decides at a time, as Section 5.2 explains. Its evaluation is with respect to other decisions the agent could have made at the time, both simple and complex decisions. If an agent makes multiple decisions at the same time, rationality evaluates the combination of decisions to evaluate its components. The evaluation of the combination treats it as an option in a decision problem at a time with options that combine resolutions of all decision problems at the time. If an agent decides at the same time whether to make two bets, then an evaluation of the two decisions begins with an evaluation of the combination of decisions, taken as an option in the decision problem with the options of making neither bet, making the first bet but not the second, making the second bet but not the first, or making both bets. The combination should maximize utility among these options, and components should contribute to a combination that maximizes utility. An evaluation of the decision to make both bets grounds an evaluation of the decisions to make each. 1 Section 3.6 reviews prospect theory. It attributes to agents an aversion to loss in a technical sense that generates an attraction to risk as defined by the convexity of a utility curve for a commodity. Although Section 3.6 argues that such an aversion to loss is irrational in an ideal agent, suppose that an agent has it, and place the agent in a version of Ellsberg’s paradox involving losses rather than gains. Then the agent’s aversion to loss may counteract the agent’s aversion to uncertainty, or ambiguity, taken as aversion to reliance on imprecise probabilities, as in Section 1.2.4. Consequently, the agent’s behavior may confound the two types of aversion and may not reveal either.
An evaluation of the decisions in a combination does not just give each the status of the combination. An irrational combination may include some rational decisions. Perhaps deciding to make both a small bet and a big bet is irrational although deciding to make the small bet is rational. Perhaps given an opportunity to make two middle-sized bets, it is rational to decide to make either but not both bets.

I do not advance a standard of evaluation for the decisions in a combination of simultaneous decisions but just for the combination of decisions. The combination constitutes a single option in a decision problem at a time with possible combinations of decisions at the time as options. Assuming that the decision problem meets the usual conditions, the agent's complete decision at the time is rational if and only if it maximizes utility.

The acts that form options in a decision problem are acts that an agent can perform at will. A combination of these acts from multiple decision problems occurring at the same time forms an option in a decision problem that the combination of decision problems forms. For an ideal agent, a combination of compatible options at the same time forms an option at the time because the agent may realize it at will by realizing its components at will. The standards that govern a single option apply; in a combination of standard decision problems all at the same time, the combination of options adopted should maximize utility. The combination's utility equals its expected utility, which is not the sum of the options' expected utilities, but rather is the probability-weighted average of the utilities of the possible comprehensive outcomes of the combination of options. The combination, taken as a single option, is rational if and only if it maximizes expected utility among rival combinations of options.

A different standard of rationality governs a sequence of acts. A sequence of acts is not an option in a decision problem, that is, an act performable at will, because the acts occur at different times. An agent cannot at will perform an act if he must wait for the time of its realization. Applications of the principle of utility maximization take as options only acts that an agent can perform at will, as Weirich (2010a: Chap. 2) argues. Unlike other future events, an agent's own future acts are in her control, but they are not in her direct control; only her current acts are in her direct control. A sequence of acts maximizes (expected) utility with respect to a set of possible sequences. A plausible specification of the set uses sequences of options drawn in order from the sequence of decision problems that produce the sequence of acts. However, rationality does not require maximizing
utility among the sequences of options in this set because the sequences are temporally extended. Because rationality's evaluation of options is relative to a time, rationality evaluates differently an option at a time and a sequence of options spread out through time. Rationality evaluates an option by comparing it to the other options in a decision problem, whereas it evaluates a sequence of options by evaluating each option in the sequence. A sequence is rational if and only if it is, or is pragmatically equivalent to, a sequence with rational components. Hence, the rationality of all the options suffices for the sequence's rationality.2

Evaluation of an option in a sequence of options considers the option's possible outcomes. They depend on the option's environment. The options in the sequence, except the option to be evaluated, constitute the choice environment for the option evaluated. Given its choice environment, the option settles a probability distribution of utilities of its possible outcomes and also a probability distribution of utilities of the sequence's possible outcomes. These distributions yield, respectively, the option's expected utility and the sequence's expected utility.

Rationality's requirements for a choice in a sequence may depend on the other choices in the sequence. A current choice may affect rationality's permissions concerning future choices. For example, suppose that a bomb's detonation requires pushing two buttons and furthermore that pushing a single button does not increase the danger of detonation because it is certain that no second pushing of a button will occur. Although the detonation is bad, pushing just one button does not detonate the bomb, or increase the probability of its detonation, and so is permissible. After pushing one button, pushing the other button detonates the bomb and so is no longer permissible.

The rationality of an agent's choice depends on its consequences, and its consequences depend on its context, including the agent's choices in a sequence to which the choice belongs. An earlier choice may affect a current choice's consequences, and a current choice may have a later choice as a consequence (say, by increasing its utility) or may affect a later choice's consequences. Evaluation of a choice in a sequence takes account of the other choices in the sequence, if they are known. Because each choice in a sequence
2 For pragmatic equivalence to a sequence with only rational steps, a sequence with some irrational steps must have the same utility and its irrational steps must be acceptable in the sense of Weirich (2004a: Chap. 6). This chapter’s arguments do not rely on a full account of the pragmatic equivalence of two sequences, and so I postpone constructing one.
of rational choices looks ahead to future choices, the sequence coordinates choices if this is advantageous.

A decision tree depicts the options in a sequence of decision problems. Adopting a complete strategy for a sequence of decisions (a choice at each decision node) entails adopting a plan (an intention to realize a sequence of choices). It is common in game theory to suppose that an agent's adoption of a strategy for a sequence of choices reduces to a single choice of a set of instructions to give a proxy for executing the strategy. However, given that choosing a set of instructions settles the act at each choice node, it is a form of binding and constitutes an option at the start of the tree. If a rational agent's execution of a strategy does not involve binding, then the agent chooses according to preferences at each choice node. The sequence of choices does not reduce to a single choice.3

Maximization of utility suffices for a rational choice in a standard decision problem for a rational ideal agent, and leaves no room for independent requirements that derive from the choice's belonging to a sequence of choices. Belonging to a sequence of choices affects a choice's rationality only by affecting its consequences and thereby its utility. No requirement on a sequence of choices affects the rationality of a choice independently of the requirements on the choice itself.
6.3. Hedging

In the case of a risk in the sense of a chance of a bad event, either a decrease in the probability of the bad event or a decrease in the badness of the event produces a reduction in the risk's severity, that is, the product of the probability and utility of the bad event. Taking on a risk to reduce the severity of another risk is hedging in one sense. Diversifying an investment portfolio introduces some risks to reduce total risk and so hedges.

A farmer hedges against crop failure by betting that the crops will fail. If the crops fail, she wins her bet that they will fail, and so receives some compensation for her crop failure. Taking on the bet's risk reduces the risk of crop failure by reducing the loss crop failure brings.
3 Al-Najjar and Weinstein (2009: 261) define a decision maker’s dynamic consistency as “the decision maker’s ability to commit, by fiat, to carry out his ex ante optimal plan.” This commitment is a form of binding.
Hedging and insuring are distinct types of act, because some acts of hedging are not acts of insuring. However, an act of insuring counts as hedging, that is, taking on a risk to reduce another risk. Insuring counts as taking on a risk, granting an extension of the definition of a risk in the ordinary sense to count certainty of a bad event as a risk so that paying an insurance premium counts as taking on a risk of losing the premium. Also, insuring reduces the severity of a risk, in the sense of a chance of a bad event, by reducing the badness of the event. Home insurance reduces the severity of a risk of a fire by reducing the cost of a fire. It does not reduce the chance of a fire but reduces the badness of a fire by arranging compensation for damage.4

Insuring, by reducing the severity of a risk of a bad event through a reduction in the badness of the event, also reduces the chance of another bad event, the occurrence of an uncompensated loss. For example, a bad event that occurs given an uninsured fire that destroys a house is the loss of the value of the house. Insuring reduces to zero the chance of this event, and so reduces the severity of the risk of its occurring. It reduces the risk by paying an insurance premium and so by taking on a risk, in the extended sense, of losing the premium. Insuring against fire therefore counts as hedging the risk of losing the house's value.

Section 6.1 observes that risks need not be additive. When risks from acts fail to add, hedging is possible. Adding a risk may reduce total risk. Hedging may reduce a risk in the sense of a chance of a bad event, as when a farmer bets on crop failure to reduce the risk of crop failure. It may also reduce a risk in the sense of an exposure to chance, as when an investor adds bonds to a portfolio of stocks to reduce the volatility of returns.

An agent may hedge risks with other risks at the same time or in a sequence. Evaluation of a sequence's risk, in the sense of the sequence's exposure to chance, may lead to hedging as a rational extension of the sequence. In a sequence of acts, acts may hedge other acts' risks to lower the sequence's exposure to chance. After a sequence of choices, an agent may evaluate the risk he has taken on, understood as an exposure to chance. The evaluation guides future acts to reduce risk, in the sense of exposure to chance, such as hedging risks already incurred. The rationality of a choice may depend on a prediction of future choices. A sequence of choices may be rational only given prediction of a particular extension of the sequence. For example, a sequence of risks may generate a total risk that makes future hedging rational.

4 Ghirardato and Siniscalchi (2018) discuss risk sharing.
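A minimal numerical sketch of the farmer's hedge may help; all figures are hypothetical, and severity is computed as probability times dollar loss, standing in for the probability-utility product.

```python
# The bet on crop failure reduces the severity of the crop-failure risk by
# reducing the badness of the bad event, not its probability.

p_failure = 0.3
crop_loss = 10_000           # dollars lost if the crops fail
stake = 1_500                # cost of the bet on crop failure
bet_payout = 5_000           # paid to the farmer if the crops fail

severity_unhedged = p_failure * crop_loss
loss_if_failure_hedged = crop_loss - bet_payout + stake
severity_hedged = p_failure * loss_if_failure_hedged

print(severity_unhedged)     # 3000.0: expected loss from crop failure alone
print(severity_hedged)       # 1950.0: the hedge lowers the risk's severity
# The hedge adds a new risk, losing the $1,500 stake when the crops succeed,
# but reduces the badness, and so the severity, of crop failure.
```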
Although each risky option is rational, and the whole sequence of risky options is rational, the sequence's rationality may assume, as a next step, hedging the risk created so far, and the sequence may be rational only because the agent predicts future hedging of the sequence's risk. For an investor, buying stocks may be rational only given the investor's prediction that she will subsequently purchase bonds to hedge against a decline in stock prices.

A theory of hedging should explain when an agent's creating a risk to reduce total risk is rational; it should explain when creating the risk maximizes utility. For an agent with an aversion to risk, this means explaining how the new risk lowers the total risk because of its interaction with other risks. To explain the effects of introducing a hedge, the theory needs an account of risks, their types, and principles governing combinations of risks according to type. A mentalistic account of aversion to risk lays the foundation for a theory of hedging. It has more resources than a representational account of aversion to risk has for explaining the operation of hedges.

Consider an account of aversion to risk, in the sense of exposure to chance, that defines the aversion using the shape of a utility function that represents preferences among options. It does not treat the aversion to a risk, in the sense of an exposure to chance, that a sequence of choices generates, as a sequence of options selected is not itself an option. A representational view takes aversion to risk as the concavity of a utility function for a commodity, given that the function represents preferences among options as following expected utilities. It cannot treat an aversion to a sequence's risk using preferences among sequences of options because a sequence of options is not an option in the sense that the principle of expected-utility maximization adopts; an agent does not perform it at will. Preferences among sequences of options do not have the backing in choice that the representational view demands. An agent does not choose a sequence of options, but only the options in the sequence, each at its time.

Gathering information, as a step in a sequence of acts, may reduce the risk later steps generate, and this reduction may figure in an explanation of the total risk the sequence generates. Similarly, a step that hedges another step's risk may figure in an explanation of the sequence's total risk. A sequence's risk, in the sense of its exposure to chance, depends on how in the sequence acts' risks combine. Acknowledging aversions to risks, taken as attitudes definitionally independent of preferences among options, adds depth to an explanation of a rational attitude to the risk a sequence of options generates.
Aversions to options' risks interact to produce an aversion to the sequence's risk. A mentalistic, as opposed to a representational, account of an agent's attitude to a risk may use the combination of the risks of options in a sequence to explain the sequence's risk and the agent's attitude to it.
6.4. Rational Sequences of Choices

The utility function for sequences, constructed from preferences among the sequences, does not identify rational sequences of options. A sequence's utility comparisons with other sequences do not settle its rationality. Rationality evaluates a combination of choices at a time as a single choice by comparing it to other combinations available at the time. However, it evaluates a sequence of choices at different times by evaluating the choices in the sequence and not by comparing the sequence to alternative sequences, as Section 6.2 explains.

Distinguishing rationality's standards for single choices and sequences of choices handles a point about aversion to risk that Samuelson (1963) makes. He notes that in the long run risk aversion is costlier than is risk neutrality. An agent with an aversion to risk turns down bets sequentially that together are almost certain to bring a gain. For instance, suppose that the agent refuses to pay $1 for a gamble that, after a coin toss, pays $4 if heads and $0 if tails. The agent refuses each offer in a sequence of 1,000 such offers involving independent coin tosses, applying backward induction from the last to the first offer. However, taking all the offers in the sequence is almost certain to yield close to $1,000 profit. The risk-averse agent makes a sequence of choices inferior to another sequence of choices he could have made.5

An agent's forgoing potential profits from the sequence of offers is utility maximizing if the agent is very averse to risk and makes avoiding it a higher priority than potential profits. However, an agent with a modest aversion to risk wants the sequence of 1,000 gambles because it is almost certain to yield a substantial gain.
5 Samuelson thinks that an agent who turns down a single bet in the sequence should turn down every bet in the sequence. Rabin and Thaler (2001) disagree because of the nearly certain large gain that the whole sequence brings. They hold that the case standard expected-utility theory presents for turning down the whole sequence arises from the extreme risk aversion concerning large sums that follows, as Rabin (2000) shows, from modest risk aversion concerning small sums, under the assumption that risk aversion’s expression is the concavity of the utility curve for money.
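A minimal sketch quantifies the near-certainty Samuelson's sequence involves; the normal approximation is an assumption of the sketch, not part of Samuelson's argument.

```python
# Each offer costs $1 and pays $4 on heads: net +$3 or -$1 with probability 1/2.
from math import erf, sqrt

n = 1000
mean_per_offer = 0.5 * 3 + 0.5 * (-1)                      # +1 dollar
var_per_offer = 0.5 * (3 - 1) ** 2 + 0.5 * (-1 - 1) ** 2   # 4 dollars squared

total_mean = n * mean_per_offer                            # 1000
total_sd = sqrt(n * var_per_offer)                         # about 63.2

# Normal approximation to the chance that the whole sequence loses money.
z = (0 - total_mean) / total_sd                            # about -15.8
p_loss = 0.5 * (1 + erf(z / sqrt(2)))
print(total_mean, round(total_sd, 1), p_loss)
# The loss probability is on the order of 1e-56 (it underflows to 0.0 at
# double precision): taking every offer is almost certain to yield a profit
# close to $1,000.
```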
Despite wanting the sequence of gambles, a rational agent faces an obstacle to its realization. To realize the sequence, the agent must realize its steps. The sequence is desirable but the last bet in the sequence is not desirable. An agent with a modest aversion to risk does not want a bet with small positive expected monetary value, because of the chance of loss. A rational ideal agent with a modest aversion to risk maximizes comprehensive utility in each decision problem of the sequence by declining the gamble. To realize the sequence, the agent has to act contrary to preferences at the last step and also at all earlier steps. A rational ideal agent does not realize the sequence, although realizing the sequence maximizes utility among sequences made up of options in the decision problems that generate the agent's sequence of choices.

Granting that each refusal of a gamble is rational, the sequence of refusals is rational. Rationality's standard for the sequence is not utility maximization with respect to rival sequences but the rationality of choices in the sequence, granting that all choices affect the relevant outcome of the sequence. A sequence of choices is rational if and only if the stages are rational, or together are pragmatically equivalent to rational stages. Rationality evaluates the sequence of 1,000 choices by evaluating its components. Declining each gamble in the sequence of offers is rational, and so the sequence of rejections is rational, despite the sequence of acceptances having greater utility than the sequence of rejections.

Although a rational agent, because of aversion to risk, may decline each gamble in the sequence, if the agent has a means of binding himself to accepting all the gambles in the sequence, he should exercise this option. Binding is a useful mechanism for realizing the benefits of profitable sequences of choices. The binding makes utility maximizing a choice that without the binding would not be utility maximizing. To bind himself to accepting all the gambles in the sequence, an agent may arrange to forfeit $10 if he refuses a gamble. Then, given that only money and risk motivate him, accepting each gamble maximizes utility.6

In cases without the possibility of binding, a rational sequence of acts need not maximize utility among rival sequences. However, when binding is possible, it may ensure that rationality yields a maximizing sequence.

6 As McClennen (1990) notes, an agent who cares about being resolute has a means of achieving a sequence of maximum utility. He can decide to follow the maximizing sequence and resolutely adhere to his decision. However, this resoluteness need not be contrary to maximizing expected utility at each stage in the sequence. The value of being resolute increases the value of sticking to the resolution and makes sticking to the resolution maximize expected utility at each step, granting that the agent's being resolute is rational and has sufficient intrinsic utility for the agent.
Another example that elucidates rationality's evaluation of sequences is Kavka's (1983) toxin puzzle. An agent may gain a large amount of money today by forming an effective intention to drink a nonlethal toxin tomorrow. When tomorrow comes, drinking the toxin does not maximize utility. The intention, not the drinking, earns the reward. Among feasible two-step sequences concerning, first, an effective intention to drink and, second, drinking, it is best, first, to form an effective intention to drink and, second, to drink. However, the rational sequence is not forming an effective intention to drink and then not drinking. Not drinking is rational. The maximizing sequence of acts, namely, the effective intention to drink and then drinking, is irrational. It is irrational because each step is irrational. In the toxin puzzle, circumstances reward irrationality. A sequence of rational moves forgoes the reward.

Binding again offers a way of securing the benefits of the maximizing sequence. Although an effective intention need not bind an agent to execute the intention, suppose that for an agent in the toxin puzzle, the agent's effective intention to drink in fact binds the agent to drink. Then forming an effective intention to drink is rational, but drinking is still irrational, and the sequence with these two steps is still irrational. An agent may rationally bind himself to performing an irrational act; the rational binding need not make the act rational. The binding need not work by changing incentives. It may take the form of a pill that affects choice without removing free will, changing the decision problem's options, or changing preferences among them.
6.5. Nondomination of Sequences

The previous section argues that in some cases, rationality permits a sequence of choices although an alternative sequence has higher utility. It may also permit a sequence even if an alternative sequence (strictly) dominates by having higher utility in every possible state of the world belonging to a partition of possible states of the world that are causally independent of the agent's choices. Such cases fail to meet standard conditions for appraisal of sequences of choices, however. This section argues that in standard conditions, rational choices yield a sequence not dominated by another sequence.

A combination of choices to accept or reject bets on offer at the same time is dominated if it ensures a loss. The combination is dominated because an alternative combination, namely, rejecting all bets, brings no loss. Rationality
prohibits a dominated combination of bets. Because rationality evaluates a combination of choices at the same time as a single choice, it requires a combination of simultaneous choices to maximize utility and hence to avoid domination.

A rational sequence of choices, in contrast with a rational combination of simultaneous choices, may be dominated. Rationality does not, in general, require that a sequence of choices maximize utility and so avoid domination; it just requires each choice in the sequence to maximize utility. This section argues that given standard idealizations that eliminate extenuating circumstances, a sequence of rational, and so maximizing, choices produces a maximizing sequence. It does not produce a sequence dominated by a rival sequence.

An ideal agent who foresees a sequence of decision problems foresees her choices in the problems, because of her knowledge of herself, if she knows she will not gain any unanticipated information during the sequence. Over a period without new, unanticipated information and without changes in basic goals, principles of rationality demand a nondominated sequence of choices if (1) the agent is cognitively unlimited and predicts at each stage her future choices and (2) faces a finite sequence of choices represented by a tree without chance nodes or decision nodes for other agents and without rewards for irrationality. A sequence's nondomination emerges from the rationality of choices in the sequence. Nondomination is not an independent requirement. The principle of nondomination for a sequence, although multichronic, derives from synchronic requirements, as Weirich (2018a) argues.

In many cases, the rationality of each choice in a sequence of choices prevents the sequence's domination by making each choice consider future choices. A current choice may affect rationality's permissions concerning future choices, and future choices may affect the consequences of current choices so that the result is a nondominated sequence of choices. However, consideration of future choices is not enough to ensure nondomination of sequences of choices in all cases.

In a case that Elga (2010) presents, H expresses a hypothesis for which Sally's probability assignment is imprecise; it may be the hypothesis that it will rain tomorrow. Sally will be offered, first, gamble A, which loses $10 if H and gains $15 if not-H, and then, immediately afterward, gamble B, which gains $15 if H and loses $10 if not-H. Information bearing on H is constant during the pair of offers. Sally's accepting both gambles ensures that she gains
$5, and her rejecting both gambles maintains the status quo; accepting both gambles dominates rejecting both gambles. Sally, if she is a rational ideal agent, will not reject both gambles although, given her imprecise probability assignment, rationality permits rejecting gamble A and also permits rejecting gamble B; rejecting A need not make rejecting B impermissible through changes in B's consequences.

To handle such cases, Sahlin and Weirich (2014) and Weirich (2015b) argue that a rational ideal agent, given knowledge of herself and future decision problems, predicts her future choices and makes current choices that together with future choices produce an undominated sequence of choices. For a rational ideal agent, who predicts her exercise of permissions, a standard sequence of rational decisions is not dominated. In Elga's case, Sally's accepting both gambles yields $5 with certainty, so rejecting both gambles is dominated. Sally, being a rational ideal agent, does not make this sequence of choices. Sally rationally rejects A only if she predicts her accepting B. If she then rejects B, her prediction is wrong, and that mistake does not happen given that Sally can predict her choices. May Sally reject A even if she accurately predicts that she rejects B? Her rejection of A is not rational if her rejection of B is predicted and rational given her rejection of A. Her only good reason for rejecting A is to put herself in position to accept B. So, Sally does not reject both gambles. For rational ideal agents, a sequence of rational decisions is not dominated.

For a nonideal agent, an inability to predict future choices may excuse domination of a sequence of choices. If in Sally's case, rational choices about gamble A and about gamble B forgo a sure gain, then rationality permits the sequence of choices despite its being dominated. Limitations excuse the sequence. However, if Sally is a rational ideal agent, who predicts her choices, she does not reject A and then reject B. She does not realize a dominated sequence.

In Sally's case probabilities are imprecise. Consider a variant of her case in which probabilities are sharp. Suppose that the probability of H is ½, and gamble A′ brings $10 if H is false and otherwise a cost of $10, and gamble B′ brings $10 if H is true and otherwise a cost of $10. Suppose that an agent cares only about money and risk, the utility of money is linear, and the risk of passing up A′ when it wins equals the risk of taking A′ when it loses, and similarly for B′. In each decision problem, accepting the gamble has the same utility as rejecting the gamble, even considering risk, so each choice is rational when evaluated in isolation. However, accepting both gambles eliminates
risk, as does rejecting both gambles. Thus, accepting one gamble but not the other is irrational, considering the agent's aversion to risk. It appears that such a sequence of choices is irrational despite having rational components, contrary to evaluation of sequences by evaluation of components. However, rationality comprehensively evaluates each choice in the sequence, considering the agent's knowledge of her other choice in the sequence. Given that the agent will accept the second gamble, it is irrational for her to reject the first gamble, as accepting it eliminates risk. Because of rationality's circumspection, the case does not discredit evaluating each choice in a sequence to evaluate the sequence.7

Suppose that the requirement of nondomination for sequences of choices were independent of requirements on choices in the sequence. Then it would prohibit a dominated sequence even when changes in information during the sequence change the requirements for choices in the sequence; the relation of dominance among sequences does not change with new information. However, changes in information may make a dominated sequence permissible. In Sally's case, if after rejecting gamble A, planning to accept gamble B, she learns that H is false, then she maximizes utility by rejecting gamble B. Her sequence of choices is dominated but still rational. A dominated sequence's status may change from irrational to rational, as new information changes the status of options in the decision problems that generate the sequence. A sequence's nondomination emerges from an ideal agent's meeting requirements for individual choices in the sequence, assuming standard conditions for the sequence, including no unanticipated relevant information.
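In Elga's case, the dominance is a matter of simple bookkeeping, as a minimal Python sketch of the four accept/reject strategies shows; no probability for H is needed.

```python
# Payoffs of each strategy for Elga's pair of gambles, in each state.
from itertools import product

def payoff(accept_a, accept_b, h):
    """Net dollars from the two gambles in state h (True means H obtains)."""
    total = 0
    if accept_a:
        total += -10 if h else 15   # gamble A: -$10 if H, +$15 if not-H
    if accept_b:
        total += 15 if h else -10   # gamble B: +$15 if H, -$10 if not-H
    return total

for accept_a, accept_b in product([True, False], repeat=2):
    if_h = payoff(accept_a, accept_b, True)
    if_not_h = payoff(accept_a, accept_b, False)
    print(f"accept A: {accept_a}, accept B: {accept_b}, "
          f"if H: {if_h}, if not-H: {if_not_h}")
# Accepting both yields +5 in each state; rejecting both yields 0 in each
# state, so the reject-reject sequence is dominated whatever H's probability.
```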
6.6. Changes in Information and Goals

The standards for a sequence of acts consider changes in doxastic and conative attitudes during the sequence. Changes in information during the sequence
7 Ahmed (2014: Chap. 7), in the case of Newcomb insurance, applies maximization of utility to sequences of choices and notes that problems arise for causal decision theory’s account of an option’s utility. However, as this chapter argues, rationality imposes no multichronic requirement on a sequence of choices that is independent of its synchronic requirements on the choices in the sequence. It does not require realizing a sequence that maximizes utility according to causal decision theory’s method of calculating utility applied, not to the options in a decision problem, but to sequences of choices. An agent lacks the direct control over realization of a sequence of choices that the agent has over realization of a single choice.
affect doxastic attitudes and, through them, conative attitudes. During a sequence of choices, goals as well as information change. For instance, some conditional goals become unconditional; the goal of performing an act given arrival at a stage in the sequence becomes, when the agent reaches the stage, the goal of performing the act. Rationality regulates changes in goals, especially derived goals, but does not regulate changes in information that come from experiences the agent does not control; often nature, not the agent, is the source of new information.

According to subjective Bayesianism, rationality permits a set of agents with the same information, but with divergent epistemic tastes, to have divergent precise probability functions. I adopt a version of objective Bayesianism that recognizes just one rational set of probability functions for an ideal agent given the agent's total body of evidence. A rational ideal agent's single set of probability functions replaces subjective Bayesianism's set of permissible functions. I treat mainly cases in which information is sufficient to make the set a singleton. In these cases, objective and subjective Bayesians may agree that the evidence warrants a single probability function. For instance, they may agree that the evidence agents have warrants the usual probability assignments to the possible outcomes of common games of chance, such as roulette.

The principle of conditionalization assists rationality's evaluation of a sequence of risky options when relevant information changes during the sequence. For a rational ideal agent, conditionalization updates probabilities. When a set of probability functions paired with utility functions represents an agent's imprecise probability and utility assignments, conditionalization updates each probability function in the set, and these changes then update the set of paired probability and utility functions.

The principle of conditionalization assumes that an agent acquires information by becoming certain of propositions. Jeffrey conditionalization generalizes by treating as new information experiences that change probabilities without making certain any evidential proposition, such as a proposition reporting an observation. However, if the experience that changes probability assignments is represented by an introspective proposition, say, a proposition that one had that experience, then ordinary conditionalization using the proposition may replace Jeffrey conditionalization; Diaconis and Zabell (1982) make this point. Also, conditionalization, unlike Jeffrey conditionalization, does not make probabilities sensitive to the order in which elements of a body of information
are incorporated into the body of information; this strengthens its support. Therefore, I assume that rational ideal agents update by conditionalization. Proponents of Jeffrey conditionalization may take this as a restriction on the cases I treat.

Rationality requires degrees of belief at a time to fit the evidence at the time. Conditionalization emerges from the rational degrees of belief that yield an agent's probability assignments before and after new evidence arrives. Conditionalization is not a requirement independent of the requirement to have at each time degrees of belief that fit the evidence. An argument for conditionalization's derivative status points out that there is no room for independent requirements on degrees of belief at a time. The requirement of fit with the evidence at the time crowds out any independent requirement of conditionalization because fitting the evidence is necessary and sufficient for rationality at a time. Updating by conditionalization is a diachronic feature of an agent's doxastic attitudes that emerges from the agent's having doxastic attitudes that fit the agent's evidence before and after acquiring new evidence. Subjectivists, who do not acknowledge that evidence settles doxastic attitudes, do not show that an agent who acquires new evidence should conditionalize rather than make a fresh start assigning degrees of belief. They do not make a case for conditionalization's being a requirement independent of rationality's synchronic requirements.8

The progression of time during a sequence of choices brings a change in information and updated probability assignments that respond to the new information. The new information may tell an agent her position in a decision tree that represents her sequence of choices, and the tree may have some chance nodes, along with decision nodes, that the agent passes. If information changes during a sequence of choices, then evaluation of options uses probability assignments updated by conditionalization.

A rational ideal agent may change basic, underived goals. For example, a person who once enjoyed loud music may now enjoy instead quiet music. Although new information does not provide reasons for changing basic goals, an agent's utility assignment to a possible world, representing an option's outcome, is relative to a time and the agent's position in the world. Section 2.7 notes that an agent's intrinsic evaluation of a risk may change as the agent moves past the risk.

8 In particular, the diachronic Dutch book argument does not govern an agent who makes a fresh start.
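For concreteness, a minimal Python sketch contrasts updating by conditionalization with Jeffrey conditionalization; the prior and likelihoods are hypothetical numbers, not drawn from the text.

```python
# Strict conditionalization and Jeffrey conditionalization on evidence E.

prior = {"H": 0.5, "not-H": 0.5}
likelihood_of_E = {"H": 0.8, "not-H": 0.2}   # P(E | hypothesis)

# Strict conditionalization: the agent becomes certain of E.
p_E = sum(prior[h] * likelihood_of_E[h] for h in prior)            # 0.5
posterior = {h: prior[h] * likelihood_of_E[h] / p_E for h in prior}
print(posterior)   # {'H': 0.8, 'not-H': 0.2}

# Jeffrey conditionalization: experience shifts P(E) to 0.9 without making
# E certain; P(h) becomes the sum over the partition {E, not-E} of
# P(h | E_i) times the new P(E_i).
new_p_E = 0.9
p_h_given_E = posterior
p_not_E = 1 - p_E
p_h_given_not_E = {h: prior[h] * (1 - likelihood_of_E[h]) / p_not_E
                   for h in prior}
jeffrey = {h: p_h_given_E[h] * new_p_E + p_h_given_not_E[h] * (1 - new_p_E)
           for h in prior}
print(jeffrey)     # {'H': 0.74, 'not-H': 0.26}
```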
Derived goals are sensitive to information. So, comprehensive utility, which evaluates a proposition with respect to all goals, basic and derived, changes with information. An option's comprehensive utility, its expected utility, changes as an agent updates her probability function in light of new information.

Rationality's evaluation of an agent's choice uses the agent's information about coming, foreseen changes in his basic, underived goals. A typical agent anticipates some but not all changes in goals, and currently endorses some but not all anticipated changes in goals. The agent should desire satisfaction of future goals he endorses, but may discount satisfaction of future goals he does not endorse. The agent's current utility assignment registers the agent's desires concerning satisfaction of future desires. An option that maximizes utility according to the agent's current utility assignment accommodates the agent's future desires, as Weirich (2018a) explains.

Figure 6.2 presents a sequential version of Allais's paradox, as Section 4.4 describes the paradox. The circles depict chance nodes, and the branches represent epistemic possibilities. Each possibility comes with a probability. The figure depicts the agent's choice at a decision node by thickening the line representing the option chosen. At the start of the tree, the agent selects
[Figure 6.2 A Sequential Version of Allais's Paradox. A decision tree: the agent first chooses between a chance for offer A and offer B. On the offer-A branch, a chance node yields E with probability 1/4, bringing offer A, and not-E with probability 3/4, bringing $0; given E, a second decision offers $3,000 for sure or a 4/5 chance of $4,000. On the offer-B branch, E with probability 1/4 brings $3,000, and not-E with probability 3/4 brings $0. Thickened lines mark the options chosen.]
a chance for offer A, which comes if event E occurs. However, if the offer arrives, he takes the sure $3,000. His choices considered in isolation from each other follow reasonable preferences at each node, but the preferences are inconsistent taken together, assuming that the agent cares only about money and ways of obtaining it.

At the beginning, selection of the chance for offer A provides a (1/4 × 4/5) or 1/5 chance of $4,000, which the agent prefers to a 1/4 chance for $3,000. If the offer A arrives, he prefers $3,000 to a 4/5 chance of $4,000. Using amounts of money as final outcomes, the first choice expresses the inequality (1/5)U($4,000) > (1/4)U($3,000), and so (4/5)U($4,000) > U($3,000). This is contrary to the inequality U($3,000) > (4/5)U($4,000) that the second choice expresses.

Suppose that the agent, instead of being neutral toward risk, has an aversion to risk, so that final outcomes include risks. Despite the enrichment of final outcomes, the agent's choices indicate a change of mind. At the first decision node, the agent passes over a 1/4 chance for $3,000 to have a chance to move to the second decision node, but at the second decision node his choice makes it the case that at the first decision node he has in effect selected a 1/4 chance for $3,000. With neutrality toward risk, the change of mind exercises a permission. With aversion to risk, it has a justification.

A justification of the change of mind comes from the agent's change in information. At the first decision node, the agent does not know that if he takes the chance for offer A, he will reach the second decision node. His movement from the first to the second decision node brings new information, even if it is information that the agent anticipates having should he reach the second decision node. Suppose that at the first decision node he prefers at the second decision node to take the 4/5 chance of $4,000. At the first decision node, he knows the information he will have at the second decision node if he reaches that node but does not know whether the event E occurs. At the second decision node, the agent has the new information that the event E occurred. This new information may justifiably change his preference between the option of $3,000 and the option of a 4/5 chance of $4,000. The preferences among options that an agent has at a decision node may differ from the preferences that earlier he had, if at the node the agent has new, relevant information. At the second decision node, the agent's new information about E and the movement into the past of the risk that E does not occur, may support new preferences concerning the options at the second decision node.

The movement of a risk into the past may justify a change in preferences as an agent progresses through a sequence of choices. In Figure 6.2, at the
Combinations of Acts 157 first decision node, the agent’s aiming for $4,000 involves a risk. The risk undertaken is part of the final outcome of each option of the second decision node. Passing through the chance node for the event E brings information about E that affects the agent’s attitude to that risk, the agent’s ranking of final outcomes, the agent’s ranking of sequences of choices, and the agent’s preferences among the options at the second decision node. Moving past the chance node and learning whether the event E occurs may provide a reason for reversing an earlier conditional preference between $3,000 and a 4/5 chance of $4,000. Successfully passing the risk of not receiving offer A may justify a change in preferences among final outcomes and the sequences of choices that produce final outcomes. As the agent moves along a branch from the first decision node through the second decision node to the branch’s final outcome, the final outcome does not change, and the agent does not acquire unanticipated information. However, the agent’s temporal position in the final outcome, a possible world, changes, and his evaluation of the final outcome changes because the change in his temporal position brings a change in his information. The resultant change in attitudes to options is rational.
6.7. Summary

Hedging takes advantage of the nonadditivity of some risks. It adds a risk to a combination of risks to reduce the overall risk of the combination. In combinations of risks, additivity of the intrinsic utilities of the risks depends on the combination's type and the risks' type. Sequences of choices may produce a combination of risks. An evaluation of a sequence's risk examines the sequence's probability distribution of utilities of possible outcomes, along with the evidential and experiential support for the distribution, to assess the sequence's exposure to chance. The sequence's rationality, however, depends, not on the sequence's utility, but on the sequence's having rational components.
PART III
ILLUSTRATIONS AND GENERALIZATIONS

Principles of rationality guide practical decisions about investments, professional advice to clients, and government regulation of risk. A philosophical theory of risk refines the direction such principles give, as several illustrations show. Generalizing the principles by removing idealizations extends their reach.
7 Return-Risk Evaluation of Investments

Chapter 4 formulates a way of obtaining an act's utility using a division of the act's consequences into its risk and its independent consequences. This chapter presents, as an illustration, a special case in which the independent consequences are exclusively monetary. In an illustration of a method of evaluating an act, simplifying assumptions may put aside points that the method's practical applications address. Practical applications treat complications such as an agent's cognitive limits and the interaction of an agent's multiple goals. Illustrations, using assumptions to put aside such complications, highlight the operation of other factors driving evaluations of acts.

In financial management, evaluation of an investment commonly uses the investment's expected return and its risk, as Brigham and Houston (2009: Chap. 8) explain. This chapter formulates a version of the return-risk method of evaluating an investment and then justifies it using Chapter 4's mean-risk division of an act's utility. Taking risk as exposure to chance, the method derives an investment's utility, under some assumptions about an agent's goals, from (1) the investment's utility ignoring its risk and (2) the intrinsic utility of the investment's risk. The assumptions include Chapter 4's assumption that an agent has a basic intrinsic aversion to risk and the additional assumption that an agent cares about only money, the means to money, and risk. The justification of return-risk evaluation, precisely formulated within the chapter's normative model, draws on distinctions between types of risk and between types of utility.
7.1. The Return-Risk Method

The return-risk method generally recommends, of two investments, the one with the higher expected return, but if this investment has only slightly higher expected return, and is much riskier than the other investment, the method may recommend the other investment. Assuming an aversion to risk, the return-risk method of evaluation adjudicates trade-offs between
expected return and risk by taking the value of an investment to be its expected return, reduced by a proportion of its risk.

This chapter, to put aside factors, such as inflation, that complicate an investment's appraisal, assumes that investments produce returns instantaneously. According to the return-risk method, the value of investing $100 for an immediate return of $103 is the value of $103. Because the investment brings no risk, its value is its expected return without any reduction for risk. If another investment yields instantaneously either $97 or $109, each with a probability of 50%, then its expected return is also $103, but because of an adjustment for its risk, its value descends below the value of $103. Return-risk evaluations of the two investments favor the first investment because it yields an expected return of $103 without risk and so without a discount for risk.

Rationality requires an intrinsic aversion to an investment's risk, in the sense of its exposure to chance. The required intrinsic aversion explains why a rational investor prefers, among investments with the same expected return, those with less volatility and so with less risk. Some features of an investment may compensate for its exposure to chance. Rationality permits an extrinsic attraction to an investment's risk if an accompanying prospect of gain compensates for the risk.

To assess an investment's expected return and its risk, an investor may use data about the investment type's past returns and their variation. The data may ground a calculation of the investment's expected return and a conative attitude toward the investment's risk in the sense of its exposure to chance. Some definitions of an investment's volatility assume repetitions of the investment so that the investment has a track record of returns. The version of the return-risk method I consider does not assume an investment's repetition and does not define an investment's risk using the investment's track record. An investment's volatility is variation in the investment's epistemically possible outcomes. Taking volatility as exposure to evidential chance, and so risk as exposure to evidential chance, the return-risk method applies to investments that are single cases.

The return-risk method of evaluation treats risk, in the sense of exposure to chance, as an attribute of an investment and treats aversion to the investment's risk as an aversion to this attribute, not as a feature of a utility curve for amounts of money that represents preferences among gambles. Investing in bonds has a lower expected return than investing in stocks, but brings less risk. Because of an aversion to risk and a willingness to reduce return to reduce risk, an investor may buy bonds rather than stocks.
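A minimal sketch of a return-risk evaluation, using the $103 examples above, takes standard deviation as the measure of risk and a hypothetical aversion coefficient k; the chapter's own version instead uses the intrinsic utility of the investment's risk.

```python
# Value of an investment: expected return minus k times a measure of its risk.
from math import sqrt

def return_risk_value(dist, k):
    """dist: list of (probability, dollar return) pairs; k: aversion coefficient."""
    mean = sum(p * x for p, x in dist)
    sd = sqrt(sum(p * (x - mean) ** 2 for p, x in dist))
    return mean - k * sd

sure = [(1.0, 103)]                # $100 invested, $103 returned for certain
risky = [(0.5, 97), (0.5, 109)]    # the same $103 expected return, but volatile

print(return_risk_value(sure, k=0.5))    # 103.0: no discount without risk
print(return_risk_value(risky, k=0.5))   # 100.0: the risk lowers the value
```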
The reduction in risk compensates for the lower expected return. Markowitz (1959) explains, using a return-risk evaluation of investments, how diversification of a portfolio of investments reduces risk, taken as volatility. Adding bonds to a portfolio of stocks may lower the portfolio’s exposure to chance because bond prices are inversely correlated with stock prices; purchasing bonds hedges against losses from the portfolio’s stocks.

The return-risk method of assessing an investment evaluates an investment’s expected return, and then adjusts the result using an evaluation of the investment’s risk in the sense of its exposure to chance. Although the investment’s expected return covers risks in the sense of chances of bad outcomes, it does not cover the investment’s risk in the sense of its exposure to chance. Expected return appraises an investment’s chances for possible monetary outcomes but not the investment’s overall exposure to chance. Compare an investment that offers a gain of $10 for sure with another investment that offers a gain of $20 with a probability of 50% but otherwise no gain or loss. These investments have the same expected return, but the second runs a risk of gaining nothing. The risk decreases the investment’s utility, given an aversion to exposure to chance.

Risks in the ordinary sense are chances of bad events, and probability-utility products appraise these chances. A risky investment is a collection of chances for possible returns, typically a mixture of good and bad possible returns. The intrinsic utilities of an investment’s chances for monetary outcomes, for exclusive and exhaustive possible outcomes, sum to the investment’s expected utility ignoring risk, but need not sum to the investment’s expected utility all things considered. An investment’s expected utility discounts its expected monetary return in light of its risk in the sense of its exposure to chance. The return-risk method of evaluating an investment treats risk as an interaction effect of the various chances for gain or loss that the investment generates but that the option’s expected return omits. As Jorion (2006: 75) and Baron (2008: Chap. 14) note, the interaction effect of chances that the return-risk method calls risk differs from a chance of a bad event, or a risk in the ordinary sense. Although risk in the ordinary sense includes the chances for loss an investment generates, risk as a component of a return-risk evaluation of an investment does not cover these chances because expected return covers them. An investment’s volatility differs from a chance of financial loss that the investment generates, although this chance and other chances of possible outcomes produce the investment’s
risk in the sense of its exposure to chance. A return-risk evaluation of an investment assumes that the evaluation of the investment’s risk in the sense of its exposure to chance covers every relevant feature of the investment not evaluated by the investment’s expected return, in particular, the interaction of chances for gains and chances for losses. Taking account of the investment’s risk in the technical sense of exposure to chance completes an investment’s evaluation when combined with the investment’s expected return.

Return-risk evaluation of an investment takes the value of an investment to be its expected monetary return minus some proportion of a measure of its risk in the sense of its exposure to chance. I treat a version of the method of evaluation that takes an investment’s value as its utility. Under the assumption that an investor cares only about money and risk, expected return may substitute for an investment’s utility ignoring risk. An investment’s return-risk evaluation uses a measure of the investment’s risk, and a proportion of the risk’s size according to this measure, to represent the intensity of the investor’s aversion to the investment’s risk, considered by itself. It sometimes uses the variance of the probability distribution of the investment’s possible consequences, a source of exposure to chance, as a measure of the investment’s risk.1 I put aside assessment of measures of an investment’s risk until Section 7.4. I justify a version of return-risk evaluation of an investment that uses the intrinsic utility of the investment’s risk, and this intrinsic utility need not come from a calculation using a measure of the investment’s risk.

1 Another name for return-risk evaluation of investments is mean-variance evaluation of investments, which yields mean-variance preferences among investments. Maccheroni, Marinacci, and Ruffino (2013: 1076) say about the mean-variance preference model, “This model is the workhorse of asset management in the finance industry.” They extend the model to handle cases in which an investor evaluates investments not knowing the probability distribution of possible monetary outcomes, and so with ambiguity about the distribution, and suppose that the investor is averse to this technical type of ambiguity.

Finance traditionally calls volatility risk, although, when the only volatility is among positive returns, no risk exists in the sense of a chance of a bad event. I follow tradition in using risk in the sense of volatility to describe a factor in evaluation of an investment, and I take volatility as exposure to chance, as Section 1.2 describes it. Although risk in the ordinary sense focuses on bad events, an investment’s risk in the technical sense of its exposure to chance considers both good and bad events that the investment may produce. The investment’s chances for good events and for bad events together create the investment’s exposure to chance. Stretching the meaning of risk to cover volatility, even without a chance of a bad event, suits principles of evaluation
that weigh variation in possible bad outcomes the same way as variation in possible good outcomes. Moreover, extending the sense of risk in this way does not depart radically from the common understanding of risk, as Section 1.2 notes. Volatility of any sort creates a risk of the unknown, or an exposure to chance, and so a type of risk. An investment creates a type of evidential risk arising from uncertainty, even if the investment has only good possible outcomes. Also, application of the principle of expected utility arbitrarily selects a zero point for its utility scale. Taking an investment’s greatest possible return as the zero point, any lesser return is bad in the sense of being below the zero point. Hence, any volatile investment creates a chance of a bad event, namely, a chance of an event worse than the greatest possible return. Volatility, given such a convention, may involve a chance of an event that is bad relative to selected events.
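To make concrete the point that volatility among exclusively positive returns still constitutes exposure to chance, a short continuation of the sketch above (reusing its expected_return and variance helpers): a gamble with only gains has nonzero variance, and so nonzero risk in the chapter’s technical sense, though it creates no risk in the ordinary sense.

```python
# Volatility without a chance of loss: both possible returns are gains,
# yet the distribution has nonzero variance and so nonzero exposure to
# chance in the technical sense (helpers from the earlier sketch).
all_gains = [(0.5, 5.0), (0.5, 10.0)]
print(expected_return(all_gains))  # 7.5
print(variance(all_gains))         # 6.25 -- positive, despite no possible loss
```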
7.2. Derivation from the Mean-Risk Method

In a decision problem, Chapter 4’s mean-risk evaluation of an option separates the option’s risk, in the sense of its exposure to chance, from the option’s other basic consequences. It combines evaluations of the two types of consequence to obtain the option’s utility. A calculation of the option’s causal utility uses the expected causal-utility of the option’s basic consequences besides its risk and the intrinsic utility of the option’s risk. In contrast, a calculation of the option’s comprehensive utility uses the expected comprehensive-utility of the option’s entire outcome ignoring its risk, and hence its consequences independent of its risk, and the intrinsic utility of the option’s risk. The first calculation, targeting causal utility, supports return-risk evaluation that uses expected gain as expected return, and the second calculation, targeting comprehensive utility, supports return-risk evaluation that uses expected level of wealth as expected return. The two types of return-risk evaluation yield the same ranking of investments. Because the second type, involving outcomes rather than just consequences, uses standard, comprehensive utilities, this section targets its justification.

The formula for an option’s expected utility uses the probabilities and utilities of the option’s possible outcomes taken comprehensively to include all of an option’s relevant consequences. If an agent is averse to risk, as return-risk evaluation assumes, an option’s risk counts as a relevant consequence and affects the option’s evaluation; the option’s risk belongs to every possible
outcome of the option. Expected-utility calculations thus accommodate attitudes toward risk. According to the return-risk method, as this chapter formulates it, an investment’s utility depends on the investment’s expected return and the intrinsic utility of the investment’s exposure to chance.

Chapter 4’s mean-risk method and this chapter’s return-risk method alike use an option’s risk in the sense of the option’s exposure to chance. However, Chapter 4’s general mean-risk method of evaluating options does not assume that an option’s relevant consequences besides risk are just monetary. Everything that matters to an agent besides risk goes into the outcomes that generate an option’s (expected) utility putting aside risk. An option’s utility equals the sum of the intrinsic utilities of the option’s chances for its possible comprehensive outcomes. Because the option’s risk is a consequence of the option in every possible outcome, subtracting its intrinsic utility from the utility of every possible outcome amounts to subtracting its intrinsic utility from the option’s utility. The mean-risk method therefore obtains an option’s utility by adding the option’s expected utility ignoring its risk and the intrinsic utility of the option’s risk. The latter evaluates the option’s risk taken by itself and ignoring accompaniments of the risk, such as possible monetary gains.

The mean-risk method of evaluation generates the return-risk method of evaluation by having expected return replace expected utility ignoring exposure to chance. The replacement’s justification rests on the assumption that an investment’s expected monetary return comprehends all relevant consequences besides risk. The return-risk method assumes that an investor, besides having an intrinsic aversion to risk in the sense of exposure to chance, cares only about monetary outcomes and chances of them, and cares about an amount of money in proportion to the amount. The investor’s basic intrinsic attitudes are limited to a basic intrinsic aversion to risk and a basic intrinsic desire for money proportional to the amount. The investor may be acting as the administrator of a pension fund and accordingly may adopt a linear utility function for money. Restricting in this way the grounds of an investor’s utility assignments to an investment’s possible outcomes, the utility of an investment’s possible outcome ignoring risk reduces to a possible return, assuming that the utility scale adopts a monetary unit as its unit. Given the restrictions and the convention for utility’s scale, the expected utility of an investment ignoring its risk equals the expected utility of the investment’s return, a probability-weighted average of the utilities of the possible monetary gains and losses from the investment, and the expected utility of the
investment’s return, in turn, equals the investment’s expected return. Hence, the investment’s expected utility ignoring risk equals the investment’s expected return.

Under the assumptions mentioned, mean-risk evaluation yields a return-risk evaluation of an investment that calculates an investment’s utility from its expected return and the intrinsic utility of its risk in the sense of its exposure to chance. An investment’s utility is the sum of its expected return and the intrinsic utility of its risk, assuming coordination of scales for utility and intrinsic utility. This derivation justifies the return-risk method.

The derivation of return-risk evaluation from mean-risk evaluation adopts the assumptions of mean-risk evaluation that make independent (1) evaluation of an option’s risk and (2) evaluation of the option’s other basic consequences. It relies on these assumptions because the return-risk method assumes the independence of an evaluation of an investment’s return and an evaluation of the investment’s risk. The next section adds depth to this section’s justification of the return-risk method by using the assumptions to establish the independence of evaluation of an investment’s return and the intrinsic utility of the investment’s risk.2

2 Multi-attribute utility theory, which Weirich (2012) reviews, replicates this section’s justification of return-risk evaluation. It yields as a special case return-risk evaluation, assuming independence of attitudes to risk and to return, and assuming that risk and return are the only relevant attributes of an investment. A return-risk evaluation of an investment is a type of multi-attribute utility analysis with a justified method of comparing utilities from diverse attributes to obtain an overall evaluation of the investment. Keeney and Raiffa (1993) present a form of multi-attribute utility theory for decisions with multiple objectives. Peterson (2007) critically appraises multi-attribute utility theory.
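The identity this section derives is simple enough to state as code. A sketch, continuing the earlier Python helpers: an investment’s utility is its expected return plus the intrinsic utility of its risk. How that intrinsic utility is obtained is left open here; the proportional rule below is an assumed illustration, with variance standing in as a measure of the risk’s size.

```python
# Section 7.2's identity, under the chapter's assumptions (utility linear in
# money; scales coordinated): utility = expected return + IU(risk), where
# IU(risk) is negative given an aversion to risk.

def investment_utility(outcomes, iu_of_risk):
    """Mean-risk utility of an investment: expected return plus the
    intrinsic utility of the investment's risk."""
    return expected_return(outcomes) + iu_of_risk(outcomes)

# One assumed way of settling IU(risk): proportional to the risk's size,
# with variance standing in as the measure of size (assessed in Section 7.4).
def iu_proportional(outcomes, alpha=0.1):
    return -alpha * variance(outcomes)

gamble = [(0.5, 97.0), (0.5, 109.0)]
print(investment_utility(gamble, iu_proportional))  # 103.0 - 3.6 = 99.4
```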
7.3. Independence

Chapter 3 adopts principles of rationality concerning risk, including principles about the rationality of attitudes toward risk. This chapter restricts the return-risk method to cases with rational ideal agents who comply with Chapter 3’s principles of rationality. According to Chapter 3, rationality requires an intrinsic aversion to an option’s risk taken as its exposure to chance. The justification of return-risk evaluation of investments assumes that an investor has an intrinsic, and moreover a basic intrinsic, aversion to an investment’s risk in the sense of its exposure to chance. Because of the nature of the aversion, the intrinsic utility of an investment’s risk is independent of the investment’s expected utility ignoring its risk. This independence
justifies return-risk evaluations of investments. Maximizing expected utility is equivalent to selecting an investment that maximizes the sum of the expected utility of the investment ignoring its risk and the intrinsic utility of the investment’s risk.

Chapter 4 shows how the assumption that an agent has a basic intrinsic aversion to risk supports mean-risk analysis of an option’s utility. This section reviews the main points, applied to a return-risk analysis of an investment’s utility, and their implications for rational investment choices. Using the independence of basic intrinsic attitudes, Chapter 4 establishes the compositionality, separability, and additivity of an option’s utility divided into its expected utility ignoring its risk, in the sense of its exposure to chance, and the intrinsic utility of its risk. These structural relations among utilities justify mean-risk evaluation of an option, and as a special case, return-risk evaluation of an investment, given an investor’s interest in only money and risk.

The reasons for basic intrinsic attitudes are independent. No basic intrinsic attitude is a reason for another basic intrinsic attitude. A basic intrinsic attitude’s realization therefore makes a constant contribution to a world’s utility, whatever other basic intrinsic attitudes the world realizes. Because the attitude’s realization makes a constant contribution, compositionality, separability, and additivity hold for composites of realizations of basic intrinsic attitudes. Given an interest in only money and risk, and a basic intrinsic aversion to risk, an investment’s utility is a function of the investment’s expected return and the intrinsic utility of the investment’s risk. Because an investment’s utility has this type of compositionality, two investments with the same expected return and the same risks are equivalent. A gamble concerning a toss of a fair coin that gains $1 if heads and loses $1 if tails has the same expected return as a gamble that gains $10 if heads and loses $10 if tails. If a gambler prefers the first gamble’s risk to the second gamble’s risk, then the preference-separability of risk and return gives him a reason to prefer the first gamble. In general, given that an investment’s risk is preference-separable from its other basic consequences, an investor who cares only about risk and expected return should prefer the first of two investments, if he prefers the first’s risk to the second’s risk, and the two investments offer the same expected return.

Principles of choice using compositionality and separability resolve some decision problems about investments even in the absence of probability and
utility assignments to the possible outcomes of the investments. However, they do not resolve decision problems in which investments are unequal with respect to both return and risk. In such decision problems, given standard idealizations, an investment is rational if and only if it maximizes expected utility, or if probabilities and utilities are imprecise, maximizes expected utility given some admissible pair of a probability and a utility assignment. Because the independence of the intrinsic utilities of risks grounds return-risk additivity, return-risk evaluations identify rational choices.
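Continuing the running Python sketch, this section’s coin-toss comparison: the two gambles have the same expected return, so, given the separability of risk from return, comparing their risks alone settles the preference of a risk-averse investor.

```python
# The fair-coin gambles from this section: equal expected returns, so the
# comparison of risks alone settles the ranking for a risk-averse investor
# (helpers from the earlier sketch).
small = [(0.5, 1.0), (0.5, -1.0)]     # gain $1 on heads, lose $1 on tails
large = [(0.5, 10.0), (0.5, -10.0)]   # gain $10 on heads, lose $10 on tails

assert expected_return(small) == expected_return(large) == 0.0
print(variance(small), variance(large))  # 1.0 versus 100.0
# Equal means, unequal risks: return-risk evaluation favors `small`.
```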
7.4. Measures of Risks

A return-risk evaluation of an investment assumes an intrinsic aversion to the investment’s risk proportional to the risk’s size when it uses a proportion of the investment’s risk as a substitute for an investor’s assignment of an intrinsic utility to the investment’s risk. As Section 3.5 notes, a principle of proportionality governs an investor’s intrinsic aversion to an investment’s risk, in the sense of its exposure to chance, if the investor is indifferent to features of the risk besides its size, an assumption about the investor that this chapter’s version of return-risk evaluation adopts. An investor applying the return-risk method to obtain an investment’s utility identifies the intrinsic utility of the investment’s risk, and to do this, the investor need not, but may, use the size of the investment’s risk, her intrinsic attitude to risks of that size, and the principle of proportionality. Applying the principle of proportionality requires a suitable measure of an investment’s risk, namely, a way of obtaining its size. This section reviews proposed measures of a risk’s size.

A measure of an investment’s risk, in the sense of its exposure to chance, attends to the nature of exposure to chance. Exposure to chance is a theoretical entity. Therefore, before assessing ways of measuring risks for applications of the return-risk method of evaluating an investment, I review characteristics of exposure to chance, as Section 1.2 presents it. An option’s risk, in the technical sense of its exposure to chance, is a consequence of the option. An option’s exposure to chance arises from the option’s leaving to chance significant features of its outcome. An option is risky in the sense of creating an exposure to chance if any of its possible relevant consequences is uncertain. This happens if the option generates any chance, distinct from a 0% or a 100% chance, of a good or a bad event. An option with
a sure outcome, specifying all that matters, generates no exposure to chance; its exposure to chance has a null value. The intrinsic utility of its exposure to chance is zero on a scale that uses indifference as a zero point. An option produces for sure its risk in the technical sense of its exposure to chance, so the risk, despite being a bad event, is not a risk in the ordinary sense of a nonextreme chance of a bad event. An option’s risk in the sense of its exposure to chance depends on the option’s probability distribution of utilities of the option’s possible outcomes. The risk is not the probability of some single bad event that the option may generate but rather is a risk in a technical sense that depends on the probabilities and utilities of all possible consequences, including good consequences, and also depends on the evidence and experience grounding the agent’s probability and utility assignments to possible outcomes.

Any unknown feature of an investment’s outcome is left to chance and, if significant, creates exposure to chance. The feature may be unknown because of insufficient evidence, or because of the indeterminacy of physical events. An option’s probability distribution of utilities of possible outcomes may involve either physical or evidential probabilities. Hence, an investment’s risk in the sense of its exposure to chance has both an evidential and a physical interpretation. If the investment’s risk is evidential, the risk arises from variation in epistemically possible outcomes of the investment. If the investment’s risk is physical, and its analysis adopts a frequentist interpretation of physical probability, the risk shows itself in variation in the actual outcomes of multiple trials of investments of its type.

In an evaluation of an option using physical probabilities of possible outcomes, the option’s physical risk in the sense of its exposure to physical chance depends not only on the shape of the probability distribution of the utilities of possible outcomes but also on the extensiveness of the agent’s experiences grounding the agent’s utility assignments to the option’s possible outcomes. The physical probabilities are independent of the agent’s experiences, but the utilities are not. The less experience an agent has to ground his utility assignments, the less stable are the agent’s utility assignments and the greater is the option’s risk. An option’s risk is partly the risk that its outcome will not be as good as the agent imagines. Realization of the option’s outcome may bring experiences that revise the agent’s assignments of utilities to the possible outcomes.

An option’s exposure to evidential chance is information-sensitive and grounded in evidential probabilities. In an agent’s evaluation of an option
using evidential probabilities of epistemically possible outcomes, the option’s evidential risk in the sense of its exposure to evidential chance depends not only on the shape of the probability distribution and the experiences grounding the agent’s utility assignments to possible outcomes but also on the extensiveness of the evidence grounding the probability distribution. The less extensive is the relevant evidence, the greater is the option’s evidential risk. The more extensive is the evidence—that is, the weightier it is—the more robust are the probabilities with respect to changes in information, and the less is the option’s risk. Extensive evidence reduces the evidential chance that the agent discovers additional evidence that significantly changes his probability assignments to an option’s possible outcomes and therefore changes the ranking of options according to their expected utilities.

Evidential probabilities equal physical probabilities when the latter are known, assuming that principles of direct inference apply. However, because agents typically do not know the physical chances their options create, general methods of evaluating their options use the evidential chances their options create. Ideal agents know these evidential chances and can use them to assess options. Consequently, this section examines measurement of an option’s risk in the sense of its exposure to evidential chance.

An investment’s risk is an equilibrium value for a risk factor in an investment’s possible outcomes. The risk depends on the utilities of the possible outcomes, which include the risk, as Section 1.2.3 explains. The risk-component r of possible outcomes has an equilibrium value when calculating r, using the probability distribution D of the utilities of the possible outcomes including r, yields r; that is, the equilibrium value of r is a fixed point of the function f yielding r when applied to D. Letting {o_i} be the set of exclusive and exhaustive possible outcomes indexed by i = 1, 2, 3, . . . , n, with r being the risk-component of a possible outcome o_i, and with U(o_i) yielding the utility of o_i including r and so discounted for r, assuming an aversion to risk, f(D{U(o_i)}) = r when r has its equilibrium value. Granting that the risk-component r is separable from the other independent components of a possible outcome and decreases the utility of each possible outcome by the magnitude of IU(r), that is, the intrinsic utility of r, a negative quantity assuming aversion to risk, if {o_i^r−} is the set of possible outcomes ignoring risk, then when r has an equilibrium value, f(D{U(o_i^r−) + IU(r)}) = r.

An investment’s realization of basic intrinsic attitudes besides aversion to risk yields the utility of the investment ignoring its risk. This utility, under the chapter’s assumptions, equals the investment’s expected monetary return.
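A sketch of the fixed-point idea in Python, anticipating the variance example developed next: if f is variance, then shifting every outcome’s utility by the constant IU(r) leaves the variance unchanged, so the variance of the utilities ignoring risk already satisfies the equilibrium condition. The proportion 0.1 is an assumed illustrative strength of aversion.

```python
# Fixed-point check: with f = variance, subtracting the constant IU(r) from
# every possible outcome's utility does not change the variance, so
# r = f(D{U(o_i^r-)}) also satisfies f(D{U(o_i^r-) + IU(r)}) = r.
probs = [0.5, 0.5]
u_ignoring_risk = [97.0, 109.0]   # U(o_i^r-) for the earlier 50/50 gamble

def var(ps, us):
    mu = sum(p * u for p, u in zip(ps, us))
    return sum(p * (u - mu) ** 2 for p, u in zip(ps, us))

r = var(probs, u_ignoring_risk)          # candidate equilibrium value: 36.0
iu_r = -0.1 * r                          # assumed intrinsic utility of the risk
shifted = [u + iu_r for u in u_ignoring_risk]
assert abs(var(probs, shifted) - r) < 1e-9   # the equilibrium condition holds
```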
Taking the investment’s risk as the variance of the probability distribution of possible monetary returns makes a proportion of the variance the equilibrium value of the investment’s risk in a possible outcome because subtracting a proportion of the variance from each possible outcome’s utility does not change the variance. Using variance v as the function f computing an option’s risk r (the equilibrium value of r) from utilities of possible outcomes, and applying it using possible outcomes ignoring risk, and so using possible monetary outcomes assuming that the agent cares only about money and risk, v(D{U(o_i^r−)}) = v(D{U(o_i^r−) + IU(r)}) = v(D{U(o_i)}) = r. If in a particular case variance is an adequate measure of an investment’s risk, a return-risk evaluation of the investment uses a proportion α of the variance to represent the magnitude of a rational aversion to the investment’s risk. Hence, v(D{U(o_i^r−) − αv(D{U(o_i^r−)})}) = r.

An agent is free to change his attitude to a risk with the passage of time, and without reason, because he is free to settle as he pleases his attitude at a time to the risk, within the constraints that Chapter 3 presents, including consistency and proportionality. The mean-risk method of evaluating an option, and the special case of the return-risk method of evaluating an investment, use an agent’s attitudes to risks at a time. However, they adopt a fine individuation of an option’s possible outcomes, using propositions as objects of probability and utility assignments. Hence, an agent’s attitudes toward risks in the sense of exposures to chance may rationally vary with details of the risks. An agent, at a time, may have reasons for a variation in attitudes to risks that differ only in fine-grained possible outcomes. For example, the strength of an intrinsic aversion to a risk of financial loss may depend on the source of the loss. An agent may evaluate differently the risk of a loss from an investment in a green enterprise and the risk of a loss from investment in automobile production, because of differences in the fine-grained consequences of each type of loss, even though the consequences are the same taken, more coarsely, as just financial loss. An investor need not have one attitude toward risk—one that questionnaires elicit and that guides all investment decisions. However, the chapter’s assumption that an investor in its normative model cares only about risk, money, and ways of obtaining money simplifies the representation of an investment’s consequences and moves into the background issues concerning the grain of an investment’s possible outcomes. In particular, this chapter’s assumption that an investor is indifferent to features of risks besides their sizes simplifies evaluations of risks by justifying a coarse-grained individuation
according to size. Because only the sizes of risks matter, other features of risks may be ignored.

An investment’s risk in the sense of its exposure to chance is the risk that an investment’s expected return omits. Financial experts, with the technical sense of an investment’s risk in mind, debate measures of risk, measures of aversion to risk, and the method of combining aversion to an investment’s risk with an evaluation of its expected return to obtain an overall evaluation of the investment.3 This section uses return-risk evaluations of investments only to illustrate risk’s role in evaluation of options, and so does not explore in detail issues concerning limitations and refinements of measures of risk.4

3 Holton (2004) describes the difficulties of defining risk adequately for its role in finance.

4 Weirich (1987) reviews some of the issues concerning a measure of risk. Textbooks on financial management, such as Brigham and Houston (2009: Chap. 8), also explore them. A classic article on the topic is Rothschild and Stiglitz (1970). Machina and Rothschild (2008) provide an overview. Chen and Wang (2008) propose a measure of risk that is sensitive to loss aversion and to fat tails in the probability distribution of utilities of possible outcomes. Aumann and Serrano (2008) propose a measure of objective risk, taken as what risk averters hate. A related measure is in Foster and Hart (2009) and extended in Riedel and Hellmann (2013) and Michaeli (2014).

An option’s risk, in the sense of its exposure to chance, depends on the option’s distribution of chances for good and bad outcomes. The greater the variance of the possible monetary outcomes, the greater the risk, other things being equal. Variance is a common measure of a risk’s size; however, only in special cases does variance accurately represent an option’s risk in the sense of exposure to chance. Variance is not in all cases a good measure of an option’s risk because it ignores risk-relevant features of a probability distribution of possible monetary outcomes. One option may be riskier than another although both options have the same expected return and also have probability distributions of monetary outcomes with the same variance.5 This may happen because an option’s risk responds differently to chances for good events and to chances for bad events so that variation in losses creates more risk than variation in gains. Aversion to a distribution with variation of monetary losses may be greater than aversion to a mirror-image distribution with variation of monetary gains because money has diminishing marginal utility.

5 Raiffa (1968: 55–56) objects along these lines to using variance as a measure of risk in return-risk evaluations that discount an investment’s expected return using a proportion of its variance.

Variance as a measure of risk works better if it uses utilities of monetary outcomes instead of just monetary outcomes. Variance concerning utility losses and utility gains accommodates money’s diminishing marginal utility. This chapter’s model for return-risk evaluations assumes that the utility of money is linear and so puts aside the diminishing marginal utility of money
as an objection to using the variance of the probability distribution of possible returns as a measure of risk. Other objections arise, however. An investor may care about features, besides the mean and variance, of the probability distribution of the utilities of possible returns. Two identical probability distributions of utilities of possible returns may generate different risks because of differences in their possible comprehensive outcomes. An investment’s risk may be sensitive to possible comprehensive outcomes and not just to returns. As noted, a rational investor may count as unequal in risk two investments that are equivalent except for differences in the way a possible monetary loss arises, even if the investments create the same probability-utility product for the monetary loss. The monetary loss may combine differently with other possible consequences of the investments to form different comprehensive outcomes of the investments; one investment may produce the loss because of the investor’s error, and the other investment may produce the loss because of a competitor’s astuteness. Then the distribution of possible returns does not accurately represent the utility distribution of possible comprehensive outcomes, save a constant for risk, as it must to yield a satisfactory return-risk evaluation of an investment, and the variance of the distribution of possible returns does not accurately represent the size of the investment’s risk. This chapter’s model assumes that an investor cares only about risk, money, and ways of obtaining money to put aside cases in which two ways of losing money have unequal utility assignments. The variance of the probability distribution of utilities of possible returns reduces to the variance of the probability distribution of the possible returns. As noted, the assumptions about an investor’s goals put aside the fine-graining of outcomes as an objection to taking variance as a measure of an option’s risk.

Despite the model’s assumptions, variance among possible returns does not handle risk that weakness of evidence generates. A rational ideal investor meeting the model’s assumptions may have a preference between two distributions of possible monetary returns that have the same mean and variance. The investor’s evidence may more strongly support, for the first distribution than for the second distribution, the agent’s probability assignments to possible outcomes; the weakness of the evidence concerning an investment’s outcome, sometimes called the evidence’s ambiguity, increases the investment’s exposure to chance. Thus, the return-risk method for evaluating investments, if it uses variance as a measure of risk, besides restricting itself to cases in which the method is justified, also restricts itself to cases in
which variance is a suitable, approximate measure of risk, yielding approximate return-risk appraisals of investments.

Besides variance, another measure of an investment’s risk is beta, which is standard deviation from a benchmark, such as return on US treasury bonds. The standard deviation equals the square root of the variance, and so beta has the limits of variance as a measure of risk. It does not discriminate between distributions of the same mean, variance, and, thus, standard deviation. Also, standard deviation increases less quickly than does variance when a distribution spreads, and so in return-risk evaluation imposes a smaller discount for risk than does variance, assuming a discount of a fixed proportion of a risk’s size. Increasing the proportion of a risk’s size that discounts expected return does not compensate because variance does not increase linearly as standard deviation increases. Choosing between variance and standard deviation as a measure of risk draws attention to the difficulty of selecting an adequate measure of risk. A principle of proportionality cannot use both a distribution’s variance and its standard deviation as a measure of risk in a return-risk formula. If proportionality to standard deviation is appropriate, then proportionality to variance is not, and vice versa.

Some methods of evaluating investments combine assessments of risk and return using division instead of subtraction. These methods also assume the independence of the assessments of risk and return. An investment’s coefficient of variation is σ/μ, where σ is the standard deviation for returns from the investment type, and μ is the mean of returns from the investment type, or the investment’s expected return. The smaller the coefficient of variation, the better the investment. A refinement of the coefficient of variation that Sharpe (1994) presents is the Sharpe ratio: (R − Rf)/σ, in which R is the investment type’s expected return, Rf is the risk-free rate of return, and σ is the standard deviation for returns from the investment type. The factor R − Rf is expected return in excess of return obtainable without risk, or excess expected return. Thus, the Sharpe ratio is excess return divided by standard deviation. The larger the ratio, the better the investment. Because these methods of evaluation incorporate standard deviation in their measures of risk, they inherit some objections to using variance as a measure, such as neglect of the effect of sparse evidence.

Jorion (2006) proposes replacing variance and measures using standard deviation with a measure of risk known as value-at-risk or VAR. Value-at-risk measures the risk in a situation that produces many risks of losses of various amounts. It lumps together losses at least as severe as some particular
loss. For a risky situation, the VAR is a loss such that, during a time period, an equal or larger loss has a probability no greater than some target such as 1% or 5%. This is a measure of a risk in the sense of a chance of a bad event, taking the bad event as a disjunction of possible losses, but VAR uses this chance as a measure of a situation’s overall exposure to chance, taking exposure to chance to depend only on possible losses. When the risk in a situation exceeds the target VAR, a rational response is to reduce the risk. Value-at-risk appraises the overall risk a financial institution faces, for example, and may motivate steps to reduce it. Although VAR assesses a situation’s, rather than an investment’s, exposure to chance, it generates a measure of an investment’s exposure to chance when applied to a situation an investment produces. I assess VAR as a measure of an investment’s risk, and so as applied to the situation an investment creates.

Because VAR considers only possible losses, it rates as equal in risk investments differing only in possible gains. When used as a measure of risk in return-risk evaluations of investments, it needs supplementation with a measure of exposure to chance arising from variation in possible gains. Defining VAR using utility losses rather than monetary losses may improve its accuracy as a general measure of risk by accommodating the diminishing marginal utility of money. A probability distribution of utilities of possible outcomes may represent a situation, with VAR examining utility losses arising from monetary losses. Also, replacing monetary outcomes with comprehensive outcomes may improve VAR’s accuracy. Two losses of the same magnitude may have different utilities because they arise from different sources. Hence, two risky situations with the same VAR may have different expected utilities. However, under this chapter’s assumptions, the utility of money is linear, and agents do not care about the sources of monetary gains and losses. So, using possible utility losses from comprehensive outcomes, instead of just possible monetary losses, does not change comparisons of situations with respect to VAR. Under the chapter’s assumptions, introducing utilities and comprehensive outcomes does not improve VAR’s accuracy as a measure of risk.

Harder problems arise. Value-at-risk is not an accurate measure of exposure to chance of loss because probability distributions with the same VAR may yield different risks of loss. A situation’s risk is more severe if all the probability weight of outcomes at or below the VAR is far below the VAR than if it is barely below the VAR. Also, diversification reduces risk, but does not reduce VAR if the diversification does not change the probability of losses below the VAR. Finally,
VAR omits risk arising from weakness of the evidence supporting probability assignments to possible outcomes. All things considered, VAR works as a measure of an investment’s risk of loss only in special cases.6

6 Delbaen (2002) argues that VAR is not a coherent measure of risk; using it to guide behavior may lead to self-contradictory or self-defeating behavior.

Because common measures of an investment’s risk, in the sense of the investment’s exposure to chance, work only given restrictions, this section does not propose any general measure of the size of an investment’s risk to use in calculating the intrinsic utility of the investment’s risk according to a principle of proportionality. The section assumes that an investor may form the intrinsic utility of an investment’s risk without applying a principle of proportionality to the risk’s size. Having this intrinsic utility, however formed, and an investment’s expected return, an investor meeting the model’s assumptions may use the return-risk method to evaluate the investment.
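For concreteness, minimal Python sketches of the measures surveyed in this section: the coefficient of variation, the Sharpe ratio, and a crude empirical value-at-risk. The sample returns and target probability are illustrative assumptions, and, as the section argues, none of these measures is generally adequate.

```python
# Sketches of the measures of risk surveyed above; none is endorsed here
# as a generally adequate measure of exposure to chance.

def coefficient_of_variation(sigma, mu):
    """sigma/mu: the smaller, the better the investment on this method."""
    return sigma / mu

def sharpe_ratio(expected_r, risk_free_r, sigma):
    """(R - Rf)/sigma: excess expected return per unit of standard
    deviation; the larger, the better the investment on this method."""
    return (expected_r - risk_free_r) / sigma

def historical_var(returns, target=0.1):
    """Crude empirical value-at-risk: a loss bound such that an equal or
    larger loss has empirical probability no greater than the target."""
    xs = sorted(returns)                     # worst outcomes first
    k = max(0, int(target * len(xs)) - 1)
    return -xs[k]

print(sharpe_ratio(0.08, 0.03, 0.20))                      # approximately 0.25
print(historical_var([-12, -5, 1, 3, 4, 6, 7, 8, 9, 10]))  # 12
```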
7.5. Summary

This chapter illustrates mean-risk evaluation of options. Given some assumptions about an investor’s goals, this general method of evaluating options justifies finance’s return-risk method of evaluating investments. Section 7.2 derives a version of the return-risk method of evaluating an investment from Chapter 4’s general mean-risk method of evaluating an option without committing the return-risk method to any particular measure of a risk’s size.

This chapter treats cases in which an investor conducts her own assessments of investments when applying the return-risk method. In these cases, return-risk evaluation need not adopt any general measure of risk to guide evaluation of an investment’s risk. However, in many cases an investor does not on her own assess an investment but instead consults an advisor to assist assessment of the investment. Chapter 8 explains how a financial planner may use the return-risk method to advise a client about investment decisions. The method applied this way requires a measure of a risk’s size appropriate for the investor’s options.
8 Advice about Decisions

Professionals advise clients, and sometimes, with authorization, decide for clients. Using expert information, a professional may construct preferences for a client, or advise the client about preferences to form, following procedures that the client uses to form preferences. For example, a financial planner advising a client about investment options may seek an investment that the client would select if rational and informed. This chapter formulates a method a professional may use to help a client reach a decision.

Chapter 4 justifies for an agent in a decision problem a mean-risk evaluation of an option under the assumption that the agent has a basic intrinsic aversion to risk in the sense of exposure to chance. A professional may apply mean-risk evaluation of options on behalf of a client, provided that the client has such an attitude to this type of risk. Applying mean-risk evaluation of options in decision problems that real clients face calls for professional expertise outside philosophy. To simplify, I apply the method of evaluation only to idealized decision problems with rational ideal agents. Idealized cases of a professional helping a client with a decision problem illustrate, rather than apply practically, mean-risk analysis of an option’s utility. A treatment of idealized cases guides professional advice in real cases.
8.1. Decisions for Another

Parents make decisions for a child considering the child’s interests instead of the child’s goals, which may conflict with the child’s interests, and use informed evaluations of options instead of the child’s evaluation of options. Proxies make decisions for others by considering the principal’s goals and the principal’s evaluation of options. Trustee decisions, as I define them, are intermediate in their attention to a client’s evaluation of options and goals. A trustee advances the client’s goals using an evaluation of options resting on expert information about the options’ possible consequences. The trustee
identifies options that serve the client’s goals according to the trustee’s expert information. In a trustee decision, as opposed to a paternalistic decision, the trustee serves the client’s goals, and does not in evaluations of options replace the client’s goals with other goals. Also, in a trustee decision, as opposed to a proxy decision, the trustee uses expert information rather than the client’s information to identify a choice-worthy option. The objective is not to decide as the client would, but as the client would if informed and rational.

People rely on professionals to make decisions for them. A doctor may prescribe medication using expert information to serve a patient’s goals. A lawyer may adopt for a client a legal strategy that in her expert opinion best serves the client’s goals. An architect may use knowledge of building materials to design for a client a house that best serves the client’s goals. These professionals make trustee decisions for their clients.

This chapter treats trustee decisions assuming that a client authorizes a trustee to decide on the client’s behalf by using the trustee’s expert information to serve the client’s goals. Although some clients may want to reserve for themselves decisions on a course of action, other clients reasonably authorize a professional, with expert information, to decide as a trustee on their behalf. A patient ignorant of available medications for his illness reasonably authorizes a physician to prescribe a medication, and the physician appropriately does this under such an arrangement. Instead of deciding for a client, however, a trustee may advise a client on a course of action. Professionals advising clients often want to recommend an act that, according to available information, best serves the client’s goals. Whether deciding for a client or advising a client, the trustee’s objective, I assume, is to identify an act rational for the client to choose if the client had the trustee’s expert information and were in ideal circumstances for making a decision. I present a procedure for identifying such an act.1

1 Weirich (1988) examines methods of making trustee decisions.
8.2. Illustrations

Treating ideal versions of decision problems in which a trustee assists a client illustrates the usefulness of mean-risk evaluation. Applying this form of evaluation within a normative model of trustee decisions guides such decisions
in actual contexts. Explanations of solutions to the idealized problems display the operation of some factors that settle the solutions of real-life trustee decision problems, as Weirich (2004a: Chap. 3; 2011) explains.

In ideal cases, using expert information, a professional may form preferences for a client, given the client’s authorization, following procedures that the client may use to form preferences. The client may use a mean-risk evaluation of options, and the professional may use the same procedure for the client attending to the client’s goals and to expert information that the client lacks. A professional may apply on behalf of a client the mean-risk method of evaluating options, and then use the evaluations obtained to construct preferences among options on behalf of the client. A professional can evaluate an option for a client by computing its expected utility according to an assignment of evidential probabilities that expert information settles and according to an assignment of utilities that the client’s goals settle. The expected utilities of options computed this way can then direct preferences among options.

This method assumes that the professional knows the client’s utility assignment to options’ possible basic consequences, excluding risk, and knows the intensity of the client’s aversion to risk. Extensions of the method to cases in which the professional does not have this information can direct the professional to standard methods of deciding given incomplete information. For example, the professional can assign expected utilities to possible recommendations, given the objective of recommending an act that the client may perform if rational and informed. A recommendation’s expected utility considers, among other things, the gravity of its failing to meet this objective. Also, Section 8.5 presents, for special cases, ways of applying the mean-risk method without having full information about all the quantities it involves.

This method also assumes that, in calculations of expected utilities, the professional’s information is sufficiently extensive to settle probabilities and that the client’s experience is sufficiently broad to settle utilities. That is, for possible consequences of an option, the trustee’s expert information, being extensive, settles the probability of the possible consequences, and the client’s goals, resting on extensive experience, settle the utility of the possible consequences. This chapter treats primarily decision problems with probability and utility assignments to the possible consequences of options, but it is straightforward to extend its methods to other cases using principles for making decisions with imprecise probability and utility assignments.
Finally, the model for a trustee’s mean-risk evaluation of an option for a client assumes that the trustee is an expert on the topic of the client’s decision problem and so has more extensive relevant information than has the client; the trustee’s information includes the client’s relevant information.

In the model, a trustee’s mean-risk analysis of an option’s utility for a client divides the option’s possible consequences into the option’s risk in the sense of its exposure to chance and the option’s independent consequences. After the division, the professional may recalculate the option’s utility using expert information in place of the client’s information about the option’s risk and its independent consequences. To illustrate, consider two medical treatments with the same probabilities of cure but with one treatment having been tested more extensively. A physician may for a patient construct an informed utility comparison of the treatments, replacing the patient’s assessment of their risks with an expert assessment, because a treatment’s risk, taken as exposure to chance, is separable from the treatment’s independent consequences. According to a mean-risk analysis, the utility of a treatment is a sum of the treatment’s expected utility ignoring risk and the intrinsic utility of the treatment’s risk. A physician’s calculation of a treatment’s informed utility for the patient replaces the expected utility of the treatment ignoring risk, assessed using the client’s probability assignment, with an assessment using the physician’s probability assignment. Similarly, the physician’s calculation replaces the intrinsic utility for the patient of the treatment’s risk, as the patient characterizes the risk, with the intrinsic utility of the treatment’s risk, as the physician characterizes the risk. The patient, if ignorant of the difference in the treatments’ testing, may think that the two treatments have equal risks of failure. However, the physician, knowing about the difference in testing, judges that the more extensively tested treatment is less risky and so is the choice available information supports.

The following sections elaborate this mean-risk method of evaluating an option in a trustee decision problem. They show in more detail how a trustee may use expert information to evaluate an option for a client.
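A minimal Python sketch of the procedure just illustrated, under the chapter’s idealizations: the trustee combines expert probabilities with the client’s utilities for finely specified consequences, then adds the client’s intrinsic utility for the option’s risk as the trustee characterizes that risk. All names and numbers are illustrative assumptions; in particular, the two intrinsic utilities of risk stand in for informed characterizations of the differently tested treatments’ risks.

```python
# Trustee mean-risk evaluation: expert probabilities, client utilities,
# plus the client's intrinsic utility for the option's risk as the trustee
# characterizes that risk.

def trustee_utility(expert_probs, client_utils, client_iu_of_risk):
    """Expected utility ignoring risk, computed from the trustee's expert
    probabilities and the client's utilities, plus the client's intrinsic
    utility of the option's risk (negative, given aversion)."""
    mean = sum(p * u for p, u in zip(expert_probs, client_utils))
    return mean + client_iu_of_risk

# Two treatments with the same probability of cure; the better-tested one
# carries less evidential risk, so a weaker intrinsic aversion applies.
client_utils = [1.0, 0.0]                                   # cure, no cure
well_tested = trustee_utility([0.7, 0.3], client_utils, -0.02)
poorly_tested = trustee_utility([0.7, 0.3], client_utils, -0.08)
print(well_tested, poorly_tested)  # approximately 0.68 versus 0.62
```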
8.3. Consequences

I take an option’s utility as the option’s comprehensive utility. An option’s comprehensive utility evaluates the option by reviewing the option’s
possible outcomes. Another evaluation of an option, an option’s causal utility, evaluates the option by reviewing the option’s possible consequences; they are the parts of the option’s possible outcomes that settle comparisons of the options in a decision problem. Comprehensive utility and causal utility rank options the same way, but using an option’s causal utility evaluates the option efficiently by processing only the considerations relevant to its comparison with other options. In this chapter, I apply the mean-risk method of evaluating an option in a trustee decision problem, sometimes targeting the option’s causal utility and sometimes targeting the option’s comprehensive utility. I use the type of evaluation that best fits the context either by simplifying the type of utility the evaluation uses or by focusing on the consequences that settle comparisons of options.

Given the complexities of a client’s decision problem, a professional may appropriately simplify the consequences of the client’s options. A physician assisting a patient’s choice between two surgical procedures may, for simplicity, use life and death as the relevant consequences independent of risk. Even if other consequences are relevant, life and death may cast them into the shadows. Given the simplification, the physician may obtain for the client the expected utility of a surgical procedure’s consequences independent of risk using just the probability-utility product for life and the probability-utility product for death. However, because this chapter’s model removes cognitive limits, a professional’s mean-risk evaluation of an option for a client uses, for reliability, a fine individuation of consequences that attends to every relevant consequence, that is, every consequence the client cares about. An option’s consequences comprehend all relevant consequences. Besides increasing the accuracy of an option’s mean-risk evaluation, this fine-graining of consequences ensures that the option’s evaluation produces an informed utility assignment for the option. Although an individual deciding for herself may use any partition of possible consequences to calculate an option’s utility, a trustee deciding for a client, to calculate an informed utility assignment for the option, uses a partition of possible consequences in which each cell specifies every relevant consequence. The client’s evaluation of the cell is thus informed because the cell’s specification of consequences leaves no relevant matter for additional information to settle. A cell specifies basic consequences, that is, realizations of basic intrinsic attitudes, and a
client’s evaluation of an option’s basic consequences is stable during increases in information. In particular, a professional may obtain for a client an informed assessment of an option’s causal utility, or its causal-utility ignoring risk, using a probability assignment and a causal-utility assignment to the option’s possible consequences, in a possible world that might be the option’s world, with possible consequences specified finely so that expert information affects only the probability assignment, and not the causal-utility assignment, to possible consequences. Given a fine individuation of possible consequences, the client’s causal-utility assignment for a list of an option’s possible consequences is the same as the client’s intrinsic-utility assignment for the list. The client’s intrinsic-utility assignment for the list, given that it rests on a priori implications of a propositional specification of the possible consequences, is informed. The client’s acquiring additional information does not provide reasons for changing the assignment, as Chapter 1 explains.

Imagine an investor who cares only about money, the means of gaining money, and risk. The causal utility of a change in the investor’s portfolio considers as possible consequences, ignoring risk, only possible monetary returns. It does not use possession of a certain mixture of stocks and bonds as a possible consequence, because expert information may change the investor’s appraisal of the mixture. In contrast, expert information does not provide reasons for the investor to change an appraisal of a possible monetary return, such as gaining $1,000. Gaining this money, because it is a basic consequence for the investor, has a causal-utility assignment that is stable during acquisition of information.

The mean-risk method divides an option’s consequences into the option’s risk in the sense of its exposure to chance and the option’s independent consequences. Its evaluation of an option adds the intrinsic utility of an option’s risk to its expected causal-utility ignoring the option’s risk. A professional may conduct an informed evaluation of the option for the client by substituting an expert characterization of the option’s risk for the client’s characterization in the mean-risk evaluation of the option and by substituting an expert probability assignment for the client’s probability assignment in a calculation of the option’s expected causal-utility ignoring risk. In this way, the professional recalculates the option’s causal utility for the client using expert information. The recalculation does not change the client’s causal-utility assignment to an option’s possible consequences ignoring risk, given that the consequences are finely specified as realizations of basic intrinsic attitudes.
It just uses expert probabilities of possible consequences to characterize an option's risk and to calculate the option's expected causal-utility ignoring the option's risk. The possible consequences have a fine individuation so that gaining the trustee's expert information does not motivate the client to change their causal-utility assignment.
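To make the additive form of the evaluation concrete, here is a minimal sketch in Python, using the physician's simplified two-consequence case from above; the probabilities, utilities, and function names are illustrative assumptions, not figures from the text.

```python
# A minimal sketch of a trustee's mean-risk evaluation of an option.
# Expert probabilities come from the trustee; utilities over finely
# individuated consequences come from the client and are stable under
# new information. All numbers are assumptions for illustration.

def mean_risk_utility(expert_probs, client_utils, risk_intrinsic_utility):
    # Expected causal-utility ignoring risk, plus the intrinsic utility
    # of the option's risk (negative for an agent averse to exposure
    # to chance).
    expected_ignoring_risk = sum(p * client_utils[c]
                                 for c, p in expert_probs.items())
    return expected_ignoring_risk + risk_intrinsic_utility

# The physician's simplified case: life and death as the consequences.
expert_probs = {"life": 0.9, "death": 0.1}    # trustee's probabilities
client_utils = {"life": 100.0, "death": 0.0}  # client's basic utilities
print(mean_risk_utility(expert_probs, client_utils,
                        risk_intrinsic_utility=-5.0))  # 85.0
```

Refining the partition of consequences changes only which cells appear in the probability and utility assignments; the additive form of the evaluation stays the same.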
8.4. A Client's Attitudes to Risks

Rationality handles differently a risk in the sense of a chance of a bad event and a risk in the sense of an exposure to chance. The trustee uses expert information to revise the client's assessments of both types of risk, according to rationality's constraints for risks of each type. In an option's mean-risk evaluation targeting an option's causal utility, a trustee calculates for a client both (1) the expected causal-utility of the option's basic consequences besides risk, using expert probabilities of both good and bad consequences, and (2) the intrinsic utility of the option's risk in the sense of its exposure to chance, assessed according to an expert probability distribution of possible consequences.

A trustee's evaluation of a risk for a client, in the sense of a chance of an event bad for the client, uses a probability-utility product for the bad event. The probability comes from the trustee and the utility comes from the client. Their product settles on the client's behalf an intrinsic utility assignment to the risk. Using this intrinsic utility, the trustee calculates, for the client and with respect to available information, the expected utility of an option that generates the risk.

An option's exposure to chance is a type of evidential risk, given that it depends on the evidential probabilities of an option's possible outcomes. An agent's intrinsic aversion to this evidential risk settles the intrinsic utility of the option's risk. A professional's mean-risk evaluation of an option constructs for a client an informed intrinsic utility of the option's risk, taking it as an evidential risk.

Rationality regulates but does not settle an agent's attitude to exposure to chance. It requires an intrinsic aversion to exposure to chance but is permissive about the strength of the aversion. In the chapter's normative model, a client has a basic intrinsic aversion to a risk, in the sense of exposure to chance, that responds only to the size of the risk because the client is indifferent to features of the risk besides its size. For such a client, rationality
requires that the intensity of the client's intrinsic aversion to an option's risk be proportional to the risk's size. The constant of proportionality depends on the intensity of the client's intrinsic aversion to risk in the sense of exposure to chance.

For example, in a trustee decision problem with a financial planner advising an investor, the model assumes that the investor's intrinsic aversion to an investment's risk is proportional to the size of the investment's risk. The investor's intrinsic aversion to an investment's risk is constant across changes that do not affect the risk's size. The investor has the same aversion to the risk of buying stock in one corporation as to the risk of buying stock in another corporation, if each risk has the same size. By assumption, features of risks besides their sizes do not matter to the investor.

In a mean-risk evaluation of an option, a trustee replaces the client's intrinsic utility of the option's risk with a recalculated intrinsic utility that rests on an informed appraisal of the risk's size. The risk's separability from other basic consequences justifies a trustee's substituting an expert estimate of the risk's size for the client's estimate in a mean-risk evaluation. Given a client's indifference to features of risks besides their sizes, a professional, functioning as a trustee, may construct for the client an informed attitude to an option's risk and use it to evaluate the option on behalf of the client. Using an informed assessment of the size of the option's risk, and the client's intrinsic aversion to a risk of that size, the professional may calculate for the client the intrinsic utility of the option's risk and then combine it with an informed expected utility of the option ignoring its risk to obtain for the client an informed assignment of utility to the option.

Although the client assesses the size of an option's risk using his information, the trustee reassesses the size of the option's risk using expert information. The trustee uses expert information to revise the intrinsic utility a client assigns to an option's risk by changing the risk's characterization. The trustee assesses the size of the option's risk and then calculates the client's intrinsic utility for a risk of that size. The result is the client's intrinsic utility of the option's risk according to an informed characterization of the option's risk. An assessment of a risk's size depends on information, given that it is an evidential risk. For the client, the intrinsic utility of a risk of the size of the option's risk is insensitive to information, although the size of an option's risk is sensitive to information. The assumption that the client's intrinsic utility of an option's risk depends only on the size of the option's risk, and not on other details of the option's risk, allows a trustee to use expert information to revise
the client's assignment of an intrinsic utility to the option's risk after revising the client's assessment of the risk's size.

Intrinsic utility, as Section 2.6 introduces it, attaches to a proposition. The client's intrinsic utility of an option's risk is underspecified without a specification of the proposition to which the intrinsic utility attaches. The option's risk, taken as an evidential risk, is information-dependent, so that a proposition that underspecifies the option's risk has an intrinsic utility that varies with information even though intrinsic utilities themselves are insensitive to new information. A proposition characterizing an option's risk may say that the client experiences a risk with the size that the option's risk has, given the client's evidence concerning the option's possible consequences, or it may say that the client experiences a risk with the size that the option's risk has, given the trustee's evidence concerning the option's possible consequences. By specifying the risk's size, the trustee adequately specifies the object of the relevant intrinsic utility. The trustee's assessment of the option's risk on behalf of the client uses the client's attitude to risks of various sizes and an expert assessment of the risk's size.

A trustee's assessment of the size of an option's risk needs an account of the size of an option's risk appropriate for the client's decision problem. Measures of risk are controversial. To put aside controversy, this chapter treats cases in which the variance of the probability distribution of the utilities of an option's (partitioned) possible basic consequences is a sufficiently accurate measure of the option's risk, in the sense that a more accurate measure does not change the trustee's recommendation. The trustee obtains this distribution on behalf of the client once the client reports a utility assignment to possible basic consequences, excluding risk, and intrinsic utilities for risks of various sizes. Because the intrinsic utility of an option's risk is proportional to the risk's size, some proportion of the variance, settled by the intensity of the client's aversion to exposure to chance, yields the intrinsic utility of the option's risk. Obtaining the intrinsic utility of the option's risk involves finding an equilibrium between its intrinsic utility and the probability distribution of the utilities of the option's possible consequences including its risk, assuming as in Chapter 4 that the causal utility of an option's possible consequences is the sum of the intrinsic utility of the option's risk and the option's causal utility ignoring the option's risk. The trustee finds the size of the option's risk that yields the equilibrium value for the intrinsic utility of the option's risk when it contributes to the causal utility of possible consequences including
the option's risk. As Section 1.2.3 explains, finding the equilibrium value is straightforward if the discount for risk is a proportion of the variance.

A financial planner advising a client may use a questionnaire to assess the client's attitude toward return and toward risk, in the sense of exposure to chance, and may use financial data to obtain an investment's expected return and the size of the investment's risk for someone with the client's attitude toward the investment's possible consequences. The planner uses a measure of the size of a risk and calculates for the client an intrinsic attitude to the size of an investment's risk, according to this measure, to obtain the intrinsic utility for the client of the risk according to an informed assessment of the risk's size. The planner replaces the client's intrinsic attitude to a risk that the client does not know well with an attitude to an informed characterization of the risk, including its size. The planner constructs for the client an informed attitude to a risk of this size.

Because of a client's aversion to risk and willingness to reduce return to reduce risk, a financial planner may recommend investing in bonds rather than in stocks. The investor may not know enough about the comparative returns and risks of investing in stocks and investing in bonds to form this preference on his own. The financial planner may help the investor adopt informed preferences among investment options. For example, suppose that the planner learns that the client cares only about return and risk and that the client's discount for risk is an investment's variance. Imagine that the planner compares for the client an investment in stocks and an investment in bonds. The expected return from the investment in stocks is $2,000, and its variance is $300. The expected return from the investment in bonds is $1,900, and its variance is $100. The client may initially lack this information about the investments and not have a preference between the investment in stocks and the investment in bonds. The planner calculates for the client an informed assessment of the investments' utilities using the mean-risk method. The calculation uses intrinsic utilities of risks, in the sense of exposures to chance, characterized using their sizes, in this case as measured using variance. The intrinsic utility of the risk of the investment in stocks is the client's intrinsic utility of a risk arising from a variance of $300. For the investment in bonds, it is the intrinsic utility of a risk arising from a variance of $100. Hence, the utility of investing in stocks is the utility of $2,000 − $300 = $1,700, and the utility of investing in bonds is the utility of $1,900 − $100 = $1,800. Therefore, the planner advises the client to invest in bonds.
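A minimal sketch of the planner's comparison, assuming, as the example does, that the client's discount for risk equals the investment's variance; the function name and the risk-weight parameter are mine, added for illustration.

```python
# A sketch of the planner's mean-risk comparison of two investments.
# The example takes the client's discount for risk to be the variance
# itself, so risk_weight, the client's constant of proportionality, is 1.

def mean_risk_value(expected_return, variance, risk_weight=1.0):
    return expected_return - risk_weight * variance

stocks = mean_risk_value(2000, 300)  # 2000 - 300 = 1700
bonds = mean_risk_value(1900, 100)   # 1900 - 100 = 1800
print("advise bonds" if bonds > stocks else "advise stocks")
```

A stronger or weaker aversion to exposure to chance changes only the constant of proportionality, not the form of the comparison.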
8.5. Nonquantitative Methods

Suppose that an option's consequences divide into two types. The two types are compositional if the utility of an option's consequences is a function of utilities of its two types of consequence. The two types of consequences are mutually separable if the order of options is "as if" they are ordered by preferences among one type given that the other type is constant. They are "as if" ordered by preferences among one type if preferences among options are the same given any way of fixing consequences of the other type. The "as if" ordering represents conditional preferences among consequences of a type given fixed consequences of the other type. The two types of consequences are mutually utility-separable, for a kind of utility, if the utility of one type of consequence is the same despite changes in the other type of consequence.

The mean-risk method of evaluating an option divides an option's causal utility into the intrinsic utility of its risk, in the sense of its exposure to chance, and the causal utility of the option's basic consequences besides its risk. Because the method is additive, it implies the compositionality and the separability of an option's consequences according to its division of an option's consequences, and, moreover, implies the preference-separability, equivalence-separability, and utility-separability of these consequences, according to the method's kind of utility. Therefore, an agent may form a preference between two options by comparing the risks they involve, if the options are alike in other basic consequences or in their causal utilities. A trustee may conduct such comparisons for a client.

Comparisons of options suffice for resolving a decision problem. A trustee adequately handles a decision problem for a client by forming on behalf of a client preferences among options. In some problems, a trustee does not need a quantitative evaluation of each option to construct for a client preferences among options. Comparison of options with respect to risk and other basic consequences may generate comparisons of options.

Compositionality settles trustee decisions in some cases, even without appeal to the function that grounds compositionality; in these cases, compositionality itself suffices for identification of choice-worthy options, that is, options such that no option is preferred. This happens if only two options are contenders, and they have risks of the same size, according to the trustee's rather than the client's comparison of their risks, and are the same in relevant basic consequences besides risk, perhaps monetary consequences. In this case, the options are equivalent according to compositionality, and
rationality requires indifference between them. A trustee may reach such a conclusion for a client by using expert information to judge that the two contending options are equivalent for the client ignoring risk and that their risks have the same size and so are equivalent for the client considered by themselves. In real cases, the substitution of one risk for another equivalent risk may be approximate. It may be a substitution of rough equivalents, similar to the substitution that a car-rental company performs when it provides an equivalent car in place of the car a client ordered.

In a greater range of cases, separability suffices for a preference-ranking of options, and so a trustee decision. In some cases, a trustee needs only a comparison of the sizes of options' risks, and comparisons may be clear without assuming any measure of their risks' sizes and without calculating their risks' intrinsic utilities. If one option is better than another in risk and the same as the other in expected utility ignoring risk, then it is better than the other overall, assuming the equivalence-separability of the attributes. Such comparisons of options suffice for a decision because they identify choice-worthy options. A trustee may in some cases dispense with calculation of options' utilities, as long as options' risks are comparable and options' expected utilities ignoring their risks are comparable.

The mean-risk method obtains an option's utility by adding (1) the option's expected utility ignoring the option's risk and (2) the intrinsic utility of the option's risk. Suppose that expected utility ignoring risk is the same for two options despite their having different possible basic consequences besides risk. Then, if the intrinsic utility of the first option's risk is greater than is the intrinsic utility of the second option's risk, as a comparison of the risks' sizes may show, the first option has greater utility than has the second option. The trustee should advise the client to prefer the first option to the second option. A comparison of expected utilities ignoring risks and a comparison of intrinsic utilities of risks yield this conclusion without calculating the options' utilities; the route to the conclusion bypasses the calculations. Mean-risk evaluation directly supports this comparison of options if the utilities involved exist but are not calculated. If the utilities do not exist because of insufficient evidence or experience, support for the comparison assumes a generalization of mean-risk evaluation that requires comparisons among options to be as they would be under some way of making doxastic and conative attitudes precise.

As mentioned, the variance of the probability distribution of utilities of possible outcomes of an act is not an accurate general measure of the act's
risk, although it yields a useful approximation in some cases. A factor it ignores is weakness in the evidence that yields the probability distribution. In some cases, mean-risk comparison of acts skirts the problem. Comparisons are possible without quantities. The relations of compositionality and separability offer a means of bypassing problems with the measure of a risk. Suppose that two investments have the same expected utility ignoring risk, and plainly the first has less risk than has the second although the risks' sizes are unknown. In a mean-risk evaluation, the investments are alike in the first factor, and the second factor favors the less risky investment. Therefore, assuming preference-separability, a mean-risk comparison favors the less risky investment.

To illustrate a trustee's use of separability to reach a decision for a client, take Section 8.2's example of two medical treatments with the same probabilities of cure but with one treatment having been tested more extensively. A treatment's risk, taken as exposure to chance, is separable from the treatment's other basic consequences. Hence, comparison of the two types of consequences may suffice for a comparison of the options. An informed comparison of the two treatments replaces the utility of running the risk involved in a treatment, as the patient assesses it, with the utility of running the risk, as the physician assesses it. A basic intrinsic aversion to the size of a treatment's risk is independent of information, including the physician's expert information. The patient's intrinsic aversion to risk, taken as exposure to chance, assuming it is a basic intrinsic aversion, is stable with respect to information—expert information offers no reason to change a basic intrinsic aversion to risk, or its proportionality to a risk's size.

The more extensively tested treatment for the ailment has a lower information-sensitive risk of side-effect than has the less extensively tested treatment, and the two treatments have the same probability of cure and of side-effect. The two treatments have the same expected utility ignoring risk, but the more extensively tested treatment is less risky. The comparison of the treatments' risks, given equivalence-separability, then settles the comparison of the treatments in favor of the more extensively tested treatment.

In this example, the measure of the size of an option's risk considers not only the variance of the option's probability distribution of utilities of possible consequences but also the extensiveness of the evidence behind the probability distribution. The extent of the evidence may be measured approximately by the size of the sample supporting the probability distribution, after placing the patient in a statistically appropriate population. Increasing
the extent of the relevant evidence, by, for example, increasing the sample supporting the probability distribution, reduces a treatment's information-sensitive, evidential risk, in the sense of exposure to chance. Because of equivalence-separability, the physician may compare treatments on behalf of the patient by comparing risks and without using a measure of risk.
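A minimal sketch of such a comparison follows. The chapter leaves the exact evidence-sensitive measure of a risk's size open; the (1 + 1/n) adjustment for sample size below is an assumption made up purely for illustration, and only the ordering it induces matters for the comparison.

```python
# Purely illustrative: a risk's size grows with the variance of the
# utility distribution and shrinks as the supporting sample grows.
# The (1 + 1/n) adjustment is an assumption for this sketch, not the
# book's measure; the comparison needs only the resulting ordering.

def evidential_risk_size(variance, sample_size):
    return variance * (1.0 + 1.0 / sample_size)

# Two treatments with the same probabilities of cure and side-effect,
# hence the same expected utility ignoring risk, but tested on samples
# of different sizes:
well_tested = evidential_risk_size(variance=0.04, sample_size=1000)
less_tested = evidential_risk_size(variance=0.04, sample_size=50)

# Given separability, equal means let the risk comparison settle the
# choice, without calculating either treatment's utility:
print("prefer the more extensively tested treatment"
      if well_tested < less_tested else "comparison undecided")
```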
8.6. Inductive Risk

An objection to this chapter's method of making trustee decisions holds that its separation of probability and utility assignments, with probability assignments coming from the trustee and utility assignments coming from the client, assumes an unjustified separation of fact and value, in particular because it ignores inductive risk, a type of risk that blends fact and value. Proper attention to inductive risk subverts the fact-value distinction in science, the objection maintains, and also blocks any method of evaluating the options in a decision problem that separates responsibility for probability assignments and responsibility for utility assignments. Probability assignments and utility assignments are commingled in a single person and cannot be handled adequately by different people, the objection claims.

An inductive risk is a chance of a mistaken, inductive judgment about a scientific hypothesis, that is, a chance of an erroneous scientific conclusion. It arises in testing statistical hypotheses. A test may accept a hypothesis that is false or reject a hypothesis that is true. A risk of an error of this sort is an inductive risk. An inductive risk is an epistemic risk in the sense that it is a risk of an epistemic error. However, not every epistemic risk, in the sense of an epistemic chance of a bad event, with the chance involving an evidential probability, is an inductive risk.2

In Neyman-Pearson hypothesis testing, treatment of a hypothesis runs an inductive risk of a type 1 error, rejecting a true hypothesis, or a type 2 error, accepting a false hypothesis. One should make the probability of a type 1 error very small if making the error has very bad consequences. For example, one should not reject the hypothesis that a vitamin supplement will eradicate a disease if mistakenly rejecting the hypothesis leads to the disease's continuing unchecked. Similarly, if accepting a false hypothesis would have grave

2 A generalization of inductive risk that Babic (2019) presents considers not just the risk of an erroneous conclusion but the risk of graded inaccuracy in a credence function.
consequences, then rational acceptance requires exceptionally strong support by the data. Handling inductive risk well requires considering both facts and values. Making a good inference from data to a hypothesis takes account of both evidence and stakes, as Douglas (2000, 2009), Elliott and McKaughan (2014), Scarantino (2010), and Steel (2014: Chap. 7) claim. Scarantino (2010) asserts, "The more undesirable the consequences of making a mistake in accepting a scientific hypothesis, the higher the degree of confirmation required for its acceptance."3

Points about inductive risk, taken as an objection to this chapter's method of making trustee decisions, contend that a trustee's deliberations cannot effectively separate evidence and goals, using a separation of probability and utility assignments. The objection holds that a trustee should consider the inductive risk that conclusions drawn from data are mistaken, and should draw conclusions cautiously in decision problems with high stakes, such as decision problems concerning medical treatments for life-threatening illnesses. According to the objection, a trustee cannot use a probability assignment that rests only on the evidence, and does not consider goals or values, to assess an act on behalf of a client. No probability assignment independent of a client's goals assists a trustee's evaluation of an option available to a client, the objection claims. Sugden (personal communication, 2016) notes that information can itself be loaded with value judgments, and that the distinction between fact and value is not clear in statements such as, "Cigarettes are not healthful."

I do not assume any separation of fact and value besides the distinction between a person's probability assignment and the person's utility assignment. Suppose that a decision procedure begins with evidential probabilities of options' possible outcomes, and their utilities, and uses these quantities to obtain expected utilities of options. A complaint against this procedure's separation of fact and value may be that the decision procedure must accept the evidential probabilities of options' possible outcomes, and the rationality of this acceptance depends on the probabilities and utilities of the acceptance's possible outcomes. Utilities affect acceptance of probabilities. This complaint targets only decision procedures, and not evaluations of decisions, and takes acceptance of probability and utility assignments as an act. However, an option's evaluation may use an agent's probability and utility

3 Brown (2013), De Melo-Martín and Intemann (2016), and Hicks (2018), in response to this point, assess the value-free ideal of science.
assignments without considering whether the agent accepted the probability and utility assignments, and in general without considering how the agent decided to decide, and so without launching a regress of decisions.4 A mean-risk evaluation of options is not a decision procedure. It uses an agent's probability and utility assignments without considering whether the agent accepts them.

A person may have a probability assignment without selecting one. Having a probability assignment is a doxastic state and not an act. The person's probability assignment influences the person's choices, and so, in a sense, the person uses the probability assignment, but this use is not an act and, in particular, not an option in a decision problem. Use of the assignment is not under the person's direct control. A person's probability assignment responds to evidence and guides choices. The probability assignment's exercising these functions is not the person's accepting the assignment.

In a trustee decision, a mean-risk evaluation of an option uses the trustee's probability assignment and the client's utility assignment for finely individuated possible consequences of the option. It does not consider whether, in addition, the trustee accepts her probability assignment and the client accepts his utility assignment. The mean-risk evaluation uses the trustee's probability assignment and the client's utility assignment whatever attitudes the trustee and client have toward their assignments, under the assumption that both are rational ideal agents.

I treat primarily risks that acts rather than judgments create, and treat especially risks that decisions create. The expected-utility principle handles such risks by assigning probabilities to an act's possible consequences according to the strength of the evidence that the act will have the possible consequences and by treating an act's risk in the sense of its exposure to chance as a consequence of the act. The expected-utility principle need not make additional adjustments for inductive risks, even inductive risks that judgments during deliberations create. Steps taken to reduce inductive risks are acts guided by values. General principles for managing risks apply to the management of inductive risks. Attention to inductive risk does not require revising general principles of risk management. Inductive risk does not form a normatively significant kind of risk, in the sense the Introduction specifies, with distinctive principles of

4 I assume that an agent's following an irrational decision procedure does not change standards of evaluation for the agent's decision.
rationality for its management. Chapter 5's principle of expected-utility maximization covers management of inductive risks. Given that an act of acceptance or rejection of a scientific hypothesis may have as a bad consequence mistaken acceptance or rejection of the hypothesis, an evaluation of the act processes this inductive risk, this chance of a bad consequence, as it processes other chances of bad consequences, to obtain the act's expected utility.

Rational beliefs do not respond only to evidence, but also respond to epistemic utility, which covers factors such as the value of beliefs high in content even if not high in likelihood of truth. Having an explanation of a phenomenon is desirable, and one may accept a putative explanation, although it is not highly probable, because it takes a step toward meeting the cognitive goal of having an explanation of the phenomenon. Rational beliefs may also respond to stakes by, say, requiring strong evidential support when stakes are high.

Belief-desire deliberation simplifies probability-utility deliberation. It uses beliefs and desires rather than probabilities and utilities. One form of belief-desire deliberation moves from an end, and a belief that an act will achieve the end, to the act. The literature on inductive risk may observe that belief-desire deliberation fails to separate fact and value. Because belief mixes evidence and goals, using beliefs in deliberation mixes fact and value in a sense. Although belief-desire deliberation may fail to separate fact and value, probability-utility deliberation separates fact and value, in the sense of separating the influence of evidence and the influence of goals in evaluation of options. Probability-utility deliberation is more complex than belief-desire deliberation, but its separation of fact and value assists trustee decisions.
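A minimal sketch of this treatment of inductive risk: acceptance and rejection of a hypothesis figure as acts evaluated by expected utility, so the chance of a mistaken verdict enters as one more chance of a bad consequence. The probabilities and utilities below are illustrative assumptions; making the consequences of accepting a falsehood graver automatically demands stronger evidence for acceptance.

```python
# A sketch of expected-utility evaluation of accepting or rejecting a
# hypothesis, with the inductive risk (the chance of a mistaken verdict)
# processed as an ordinary chance of a bad consequence. All figures are
# illustrative assumptions.

def expected_utility(p_true, u_if_true, u_if_false):
    return p_true * u_if_true + (1 - p_true) * u_if_false

p = 0.8  # evidential probability that the hypothesis is true
accept = expected_utility(p, u_if_true=10, u_if_false=-100)  # grave type 2 error
reject = expected_utility(p, u_if_true=-5, u_if_false=5)     # milder type 1 error
print("accept" if accept > reject else "reject")  # reject: p = 0.8 is not enough
```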
8.7. Summary

Treating risk as an option's consequence, and justifying a mean-risk evaluation of an option, together support a method of comparing options that professionals can employ to advise their clients. This chapter shows how a professional may evaluate an option's risk for a client and use this evaluation to make an informed assessment of the option that is suitably responsive to the client's basic goals. A professional may use an expert probability assignment to calculate for a client an option's expected utility ignoring the option's risk in the sense of the option's exposure to chance, and then combine this
expected utility with the intrinsic utility of the option's risk, according to an expert calculation of its size and according to the client's intrinsic aversion to a risk of that size, to obtain for the client an informed evaluation of the option. Mean-risk evaluation of options usefully directs a professional's advice to a client.
9 Regulation of Risks

Congress through federal agencies imposes government regulations to reduce risks. The Food and Drug Administration (FDA) regulates genetic therapy for "bubble boy" disease and requires labeling food for allergens, the Occupational Safety and Health Administration (OSHA) controls workplace exposure to carcinogens, and the Securities and Exchange Commission (SEC) limits offerings of financial instruments.1

This chapter presents a method of justifying an agency's regulation of a risk within a regulatory model. The model assumes that a regulatory agency functions as a trustee for the public. The method of justification highlights the role of social institutions in management of risks, including institutions for compensating those who lose because of a regulatory policy that benefits society. Using the method, with slight adjustments, a case for a regulation that reduces either a physical risk or an evidential risk can meet standards of legal objectivity.

The regulatory model illustrates theoretical points about risk and displays the operation of some factors that justify regulations. The chapter does not tackle any contemporary regulatory issue because a resolution requires empirical investigation of the consequences of regulatory options, and a method of evaluating regulatory options and reaching regulatory decisions that dispenses with the model's simplifying assumptions.
9.1. A Regulatory Model

Because the chapter's regulatory model merely illustrates theoretical points about risks, and does not attempt to treat regulation generally, I construct it with the United States in mind, as I am most familiar with regulation in

1 Works in various disciplines treat government regulation of risks: in psychology, Fischhoff, Lichtenstein, Slovic, Derby, and Keeney (1981); in statistics, Mayo and Hollander (1991); in philosophy, Cranor (1993, 2006); in comparative political science, Jasanoff (2005); and in jurisprudence, Sunstein (2002, 2005).
this country. The points about risks that the model illustrates carry over to models of regulation in other countries, and to general models of regulation.

The chapter's regulatory model assumes a democratic society with a federation of states. The federal government has executive, legislative, and judicial branches. The legislative branch creates federal agencies to formulate regulations. The other branches of government and the citizenry play a role in a federal agency's operation. The president may issue executive orders that the agencies provide cost-benefit analyses justifying regulations. The courts may settle whether the parties involved comply with all laws, including, for example, the congress's mandate to a regulatory agency. Citizens may accept an agency's invitation to submit e-mail responses to regulatory proposals.2

State and local governments handle many public health and safety issues, but I treat regulation by federal agencies. These regulatory agencies effectively address some, but not all, risks. Suppose that only a minority is at risk from an allergen in food products, but a regulation, say, labeling products with the allergen, imposes costs on all consumers of the product. The majority may decline these costs. If a risk threatens only a minority of a country's population, addressing the risk through an agency that the legislative branch of government controls may not be effective. Protecting the interests of the minority may be a job for the executive or judicial branch of government. For example, the courts may protect a minority through enforcement of product liability laws. Good governmental design assigns some safety issues to the legislature and others to the administration and to the courts. The chapter's model assumes that the legislature assigns appropriate regulatory issues to an agency. It assumes that the agency's regulatory procedures are reasonable, and it evaluates only regulations that an agency advances following its regulatory procedures.

The chapter's model includes assumptions about the basic goals of citizens and the legal framework for regulation. It assumes that the basic goals of citizens are moral and that a constitution and an agency's mandate adequately protect individual rights, so that an agency issuing reasonable regulations to promote the public's basic goals acts morally. In particular, the utility assignments of citizens express the value of equity and justice in the distribution of risks and, more generally, are moral. I do not specify the constraints a citizen's utility function must satisfy to be moral but just assume that a

2 Gregory, Dieckmann, Peters, Failing, Long, and Tusler (2012) discuss methods of reaching regulatory decisions that involve the public as well as experts.
citizen's utility function meets all moral constraints. Assuming compliance with moral constraints throws into relief issues concerning the rationality of responses to risk.

In the model, a congress of elected representatives has primary responsibility for regulating risks. For assistance, the congress creates regulatory agencies with a mandate to take feasible steps to reduce risks of harm. A regulatory agency's charter issues directives and imposes constraints. Its charter instructs it to use the best available information to impose regulations that advance the public's interests, provided that the regulations are economically feasible. The congress articulates the public's basic goals in its directives to the regulatory agency. The agency uses expert information and principles of rational risk assessment to evaluate regulatory measures. An agency's goal in the model is to reach a decision that the public would make for itself, within ideal social institutions for collective action, if everyone had expert information and decision-making tools. Its institutional objective is to use expert information and principles of rationality to serve the public's basic goals. Agencies are trustees acting for the public, in the sense of Chapter 8, and Chapter 8's method of evaluating a trustee decision provides a framework for assessing the rationality of a regulatory agency's response to a risk.

Given a regulatory issue, an agency has the sole goal of taking action the public would endorse if citizens were cognitively ideal, fully rational, had expert information, and were in ideal conditions for negotiating a resolution of the regulatory issue with, as in elections of representatives, equal voices for citizens. This is a traditional assumption about the goal of a regulatory agency. Viscusi (1998: 64) states, concerning OSHA's regulations, "The appropriate basis for protection is to ask what level of risk workers would choose if they were informed of the risks and could make decisions rationally."

The model's assumptions that a regulatory agency's mandate assigns appropriate regulatory issues to the agency, that constitutional limits and an agency's mandate adequately protect individual rights, and that the public has rational, and also moral, basic goals support in the model an agency's using expert information to impose regulations that serve citizens' basic goals. An agency aims to serve citizens' basic goals because their derived goals may change when they review all available information. For example, the public may be against putting fluoride in water because it does not realize that doing this is a harmless way of preventing cavities. An aversion to the
fluoride disappears given expert information. In contrast, the public's desire for health is stable with respect to information, in the sense that more information does not provide reasons to revise the desire.

Rationality requires an intrinsic aversion to a risk in the sense of a chance of a bad event, with an intensity settled by the probability-utility product for the event. It also requires an intrinsic aversion to a risk in the sense of an exposure to chance, but permits a range of intensities for this aversion. The chapter's model assumes that a citizen's intrinsic aversion to exposure to chance is basic, with an intensity proportional to the size of the exposure to chance. Moreover, it assumes that citizens' basic intrinsic attitudes do not change given acquisition of expert information and do not change given the regulation itself. An agency may count on the stability of citizens' informed, rational evaluations of risks and regulations to reduce them.

A democratic society may reasonably use majoritarian methods to elect representatives to its congress, but an agency acting for the public may reasonably use more information about the public than citizens express on ballots. It need not aim for a policy that an informed public would endorse at the polls. It may turn out that, for many responses to a regulatory issue, each is such that an informed majority supports it. To move beyond this majoritarian dilemma, an agency may act using hypothetical negotiations of citizens in ideal conditions for negotiation that include means of instant cost-free communication and binding cost-free contracts. The citizens' negotiations reach an outcome when all agree on a resolution of their regulatory issue, which may just maintain the status quo, or negotiations break down because some coalition of citizens withdraws and acts independently of the whole group of citizens, perhaps offering benefits to its members to swell its ranks.

Idealizations influence the results of hypothetical negotiations about a regulation in two ways. First, idealizations of the model create a social structure that promotes agreement, for example, social institutions that compensate citizens for bearing a regulatory burden. Second, idealizations of the hypothetical negotiations promote agreement, for example, by making communication cost-free. An agency identifies a regulation that citizens endorse in ideal hypothetical negotiations and implements it through social institutions that the model attributes to their society.

The model assumes that citizens want to resolve a regulatory issue and so have an incentive to reach an agreement. If each possible resolution of a regulatory issue imposes a cost on some citizens, then a consensus backs only
an agreement that offers compensation to those who bear the burden of the agreement. Social institutions of compensation provide a means of offering side payments when needed for unanimity. Because citizens may exit society if the prospects of staying dim, the model assumes that regulation operates within social institutions that compensate losers. A cohesive democratic society compensates losers at least with prospects for future gains. Each citizen gains from having a process of reaching resolutions of regulatory issues even if not every citizen gains from every resolution. A policy for reaching future settlements may offer losers in a current settlement prospects for gains in future settlements. Also, negotiations concerning a regulatory issue may modify a regulatory option to lighten its burden. Workers may accept lower pay to cover the costs of safety regulations, and consumers may accept higher prices to cover the costs of food inspections. Negotiated modifications of a regulation function as side payments to win support for the regulation; their objective in the model is unanimous consent. Democracy allows support by a majority to suffice when practical considerations prevent unanimity, but the model's idealizations remove impediments to unanimity.

Regulations aim for social benefits. Posner and Weyl (2012) propose a new Financial Products Agency that, before authorizing use of a financial instrument, requires proof of its safety and efficacy for socially beneficial practices such as hedging. Regulations often impose forms of coordination and cooperation. For example, members of a society may each have incentives for taking a risk although if most members take the risk, the collective result is an excessive risk for their society. A regulation by the SEC may therefore benefit society by imposing a joint response to financial risk-taking.

Rationality demands more of a collective agent with advanced capabilities for coordination and cooperation than it demands of a collective agent with primitive capabilities for such tasks. If the members of a society lack a means of coordinating and cooperating, then their society lacks a means of attaining goals of rationality, such as efficiency. Good social institutions for coordination and cooperation put the society in position to meet goals of rationality that are otherwise out of reach. The chapter's model assumes ideal conditions for coordination and cooperation, in particular, social institutions to facilitate coordination and cooperation on regulatory issues. It assumes a society with institutions for compensating those who lose because of a regulation.
9.2. Trustee Evaluation of a Regulation

Application of Chapter 8's account of trustee decisions to an agency's regulatory acts needs an account of collective action because a regulatory agency and the public it serves are groups of individuals. This section supplies a method of using individuals' assessments of regulations to identify a reasonable collective response to a regulatory issue. The method, and accompanying assumptions, extend the previous chapter's model of trustee decisions to regulatory acts.3

Individuals make tradeoffs between conflicting goals such as safety and prosperity. For example, a person may decide to save money by forgoing purchase of a safety device for herself. The model uses a person's own way of comparing values, which may be incomparable from an impersonal perspective because reasonable people compare the values differently when making tradeoffs. A government justifiably imposes a regulation if citizens with equal power, when rational and informed, would agree to the tradeoffs the regulation adopts.

An evaluation of a regulation in the model relies on inductive logic, decision theory, and game theory to ascertain whether the regulation has the public's informed support. Inductive logic revises a citizen's probability assignment using expert information, decision theory analyzes a citizen's appraisal of a regulation, and game theory aggregates citizens' appraisals of the regulation into a collective appraisal. A reasonable regulation is a solution to an ideal negotiation game among informed and rational citizens who personally appraise regulatory options. To evaluate a regulation, the chapter's model frames the regulatory issue, calculates for each citizen an informed probability assignment, specifies the negotiation game concerning the issue, and considers whether the regulation is a solution to the game.

An economic market aggregates individuals' appraisals of the value of a commodity into a price for the commodity. The commodity's price in the market indicates the collective value of the commodity. The market offers a game-theoretic way of settling collective value. The classic formula for computing collective utility presents a non-game-theoretic way of aggregating interpersonal utilities to form collective utility; it is a sum of interpersonal utilities. Maximizing collective utility so defined agrees with a game-theoretic characterization of rational collective action given the model's assumptions

3 Weirich (2010a, 2015d) treats the extension of standards of rationality to collective acts.
about the negotiation game that brings citizens to an agreement about a regulatory issue, as Section 9.3 argues.

An agency's trustee decision about a regulation requires data about citizens' preferences. Information from market exchanges about citizens' willingness-to-pay for safety offers some information about citizens' preferences and utility assignments, but a regulatory agency may lack information about the public's basic goals and their importance, or it may lack expert information held by private citizens and corporations, so that it does not know whether a regulatory act would win the public's informed consent. When an agency lacks, and cannot obtain, information sufficient for identifying such an act, a reasonable subordinate objective is regulation that probably has the public's informed consent. However, for simplicity, the chapter's model assumes the availability of the data required by its methods of evaluating regulatory options.

Inductive logic applied to pooled information yields the expert assessments of risks that the model uses to reach regulatory decisions. In some regulatory settings, the result of pooling information is adoption of a causal model with a joint probability distribution derived from shared statistical data, following methods of causal model selection in Pearl (2009) and Spirtes, Glymour, and Scheines (2000). A causal model then indicates the results of a regulatory intervention. It may, for instance, predict that a regulatory intervention will reduce the incidence of cancer among workers. Cox (2012) presents examples.4

Bayesian methods revise a citizen's probability assignment to possible consequences of a regulatory option using information from the experts an agency consults. In the chapter's model a citizen is a rational ideal agent, although perhaps uninformed. An agency calculates the citizen's informed utility assignment to a regulation. The calculation begins with a citizen's set of probability assignments. The assignments in the set meet all constraints that inductive logic imposes. Next, the calculation uses the pooled data of experts; the data are free of contradictions, I assume, even if the opinions that experts draw from their data differ. Given all available data, a rational citizen
updates probability assignments using Bayesian conditionalization. For simplicity, the model assumes that a single probability assignment represents a citizen's attitudes toward relevant propositions after acquisition of all available evidence.5

The model uses informed personal probability assignments of the regulation's possible consequences to obtain informed personal utility assignments to regulatory options. It does not need a collective probability assignment for the possible consequences.6 However, if expert information settles a probability assignment concerning the possible consequences of a regulation, then each citizen's informed probability assignment equals this probability assignment.

Construction of a citizen's informed utility assignment uses different principles for a chance of a bad event and for exposure to chance because these are different types of risk. For a regulatory act, the intrinsic utility of a chance of a bad outcome equals the probability-utility product for the outcome, and this product, when informed, contributes to calculation of an informed expected utility for the act. A citizen has an intrinsic aversion to the act's risk, in the sense of its exposure to chance, proportional to the risk's size, and uses it in mean-risk evaluations of the act. An informed assessment of the risk's size contributes to an informed mean-risk evaluation of the act.

The model uses a citizen's revised probability assignment, and the extent of the evidence on which it rests, to assess the size of an act's risk, in the sense of its exposure to chance. An agency may assess a risk's magnitude for a citizen with respect to expert information and assess the citizen's aversion to a risk of that size to obtain the citizen's informed aversion to the risk. The agency combines the citizen's informed evaluation of the regulatory act's risk with the citizen's informed evaluation of the act's other basic consequences, using the citizen's revised probability assignment, to obtain the citizen's informed evaluation of the regulatory act, as in Chapter 8.

4 An alternative, I do not explore, to pooling information and maximizing collective utility using the utility functions of individuals is to (1) form a collective utility function for possible outcomes of regulatory options using the intrinsic utility functions of individuals, (2) apply inductive logic using the common stock of evidence to obtain probabilities of possible outcomes of regulatory options, and (3) maximize expected collective-utility using for a possible outcome its constructed probability and its constructed collective utility. This procedure obtains a group's response to goals and information, following methods that an individual may use, after forming collective substitutes for a person's doxastic and conative states constructed from the doxastic and conative states of the group's members.

5 Kohl et al. (2015) advocate pooling information by conducting a systematic review of the literature (and other information) on a regulatory issue. They list the pros and cons of using systematic reviews, in particular, to reach regulatory decisions about the introduction of a genetically modified organism. Agents who use all available information have the same evidence before, during, and after Bayesian conditionalization. Agents who have different evidence before conditionalizing on the same evidence may find that conditionalization drives their probability assignments apart. To alleviate this problem, I assume that information sharing covers all available information, and not just new information.

6 Dietrich (2019) shows that only geometric aggregation of individual probability assignments makes collective probability assignments comply with Bayesian updating when individual probability assignments comply with it. Such constraints suggest not aggregating to obtain a collective probability assignment.
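A minimal sketch of these two steps, conditionalizing a citizen's probability assignment on pooled expert data and then recomputing the citizen's informed mean-risk evaluation of a regulatory option; the consequences, likelihoods, utilities, and risk term below are all assumptions for illustration.

```python
# A sketch of revising a citizen's probability assignment by Bayesian
# conditionalization on pooled expert data, then recomputing the
# citizen's informed evaluation of a regulatory option. All figures
# are illustrative assumptions.

def conditionalize(prior, likelihood):
    # Posterior probability of each possible consequence given the data.
    unnormalized = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(unnormalized.values())
    return {c: v / total for c, v in unnormalized.items()}

prior = {"low_harm": 0.5, "high_harm": 0.5}       # citizen's assignment
likelihood = {"low_harm": 0.9, "high_harm": 0.2}  # from pooled expert data
posterior = conditionalize(prior, likelihood)

utils = {"low_harm": 10.0, "high_harm": -50.0}  # citizen's stable utilities
risk_term = -2.0  # intrinsic utility of the option's exposure to chance
informed_value = sum(posterior[c] * utils[c] for c in utils) + risk_term
print(round(informed_value, 2))
```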
In the model, a regulatory agency aims for a policy that the public would adopt given informed and unconstrained rational negotiations in a democratic framework. The agency identifies solutions of a negotiation game among informed citizens concerning the regulatory issue the policy addresses. The negotiation game's solutions aggregate informed personal utility assignments; the assignments settle the game's solutions. The regulatory policy is justified if it is among the game's solutions. In realistic cases, an agency may approximate this method of evaluating a policy.7

The negotiation game for a regulatory issue specifies the citizens, who are players in the game, and the regulatory options. The options include imposing a regulation, and not imposing it, and may include a variety of regulations and adjustments to regulations that citizens propose to promote consensus. The game specifies the possible outcomes of negotiations, the citizens' utility assignments to the possible outcomes, and the citizens' information about their game and each other. In an ideal game the citizens are fully informed about their game and each other. Their information about each other includes information about their rationality, their information, and their types, that is, their relevant psychological features, such as their probability and utility assignments. The citizens have common knowledge of the negotiation game, their rationality, and their types.

In the model that the idealizations and restrictions create, citizens' negotiations form a cooperative game, in particular, a coalitional game. As Binmore (2007: Chap. 18) and Weirich (2010: Chap. 8) explain, a cooperative game offers opportunities for joint action, including, in a coalitional game, coalition formation. In a coalitional game, the core is the set of outcomes in which each coalition of players gains at least as much as it can gain on its own. An inefficient outcome is not in the game's core.8

7 I use a negotiation game to combine individuals' doxastic and conative attitudes to reach a collective act. Another tradition uses a social welfare function going from preferences of individuals to social preferences. Fleurbaey (2010) offers a recent example of work in this tradition that explicitly targets risks. His method of aggregation serves goals of equity that attend to the distribution of utility among people, whereas the method of aggregation I use makes background assumptions that ensure that a rational solution to the negotiation game yields an outcome that is just. List and Pettit (2011) present methods of aggregating the judgments of individuals into a collective judgment that might direct collective action. They use analogies with standards for judgments of individuals to guide the aggregation.

8 In the citizens' negotiations, a coalition without all citizens can act on its own, forgoing the agreement of excluded citizens, to secure a benefit for its members. Thus, the negotiations form a coalitional game rather than a bargaining problem. A bargaining problem is a type of cooperative game in which only the grand coalition of all players can achieve a benefit.
Although solutions to coalitional games are controversial, the conditions for negotiation in the model include special conditions that support maximization of collective utility, that is, adopting a regulatory option with collective utility at least as great as any other option's collective utility, as Section 9.3 argues. The society has constant membership, membership in the society is voluntary, and membership brings equal social power in the negotiation game. Because the members are cognitively ideal and fully rational, they have rational goals and rational personal utility assignments. The society compensates members, if any, who lose if the regulation maximizes collective utility. Given the model's assumptions and the game's conditions, a solution, as the next section argues, arises from a unanimous agreement that achieves an efficient outcome and maximizes collective utility. Accordingly, a reasonable regulation has benefits that outweigh its costs, given a comprehensive assessment of costs and benefits that includes all factors that matter to citizens. The model's assumptions make the solution calculable without retreating to general principles of cooperative game theory.

The model assumes that a regulatory agency has the information it needs to construct interpersonally comparable utility assignments. Interpersonal comparability grounds computation of an outcome's collective utility, defined as a sum of the outcome's utilities for citizens, given that these utilities are all on the same scale.9 An agency simulates the citizens' negotiations about a regulatory issue after constructing for citizens informed utility assignments concerning the possible outcomes of negotiations. The agency uses each citizen's informed utility assignment to each regulatory option to obtain the collective utility of each regulatory option and recommends an option with maximum collective utility. An option that maximizes collective utility is a solution to the citizens' negotiation game for the regulatory issue, given the model's assumptions about compensation. A regulation of a risk emerges only if the regulation significantly reduces the risk, without creating greater problems, so that the risk's reduction justifies the regulation's cost.

An evaluation of a regulation weighs pros and cons to settle whether the regulation maximizes collective utility. Some traditional ways of weighing

9 Some theorists hold that interpersonal utility is meaningless. Weirich (1984b; 2001a: 181–83) defends its meaningfulness. The defense rejects the operationalist theory of meaning and adopts a physicalist theory of mind. Accordingly, a person and a physical duplicate assign the same utility to gaining a dollar. Comparability of interpersonal utility in this case shows that interpersonally comparable utility is meaningful.
An evaluation of a regulation weighs pros and cons to settle whether the regulation maximizes collective utility. Some traditional ways of weighing pros and cons are cost-benefit analysis and multi-attribute utility analysis. Cost-benefit analysis faces the objection that it overlooks moral values and other considerations not measurable in terms of money, or else distorts these considerations to facilitate quantification. However, under Section 9.1's assumption that citizens' evaluations attend to all relevant moral considerations, an assessment of a regulatory option using their evaluations does not omit moral considerations. Multi-attribute utility analysis faces the objection that it lacks a justified method of comparing utilities from diverse attributes. However, a regulation's utility for a citizen obtained using a mean-risk analysis, a special case of multi-attribute utility analysis, compares attributes according to methods that Chapter 4 justifies.10

10 The essays in Hahn (1996) argue that tools drawn from economics, such as cost-benefit analysis, improve regulations.
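A schematic rendering of the agency's procedure described above, assuming interpersonally comparable informed utilities; the citizens, options, and numbers are hypothetical.

```python
# Hypothetical informed utility assignments, all on one interpersonal scale,
# for three citizens and three regulatory options.
informed_utility = {
    "no regulation":                {"Ann": 5, "Bob": 4, "Cho": 1},
    "strict regulation":            {"Ann": 2, "Bob": 3, "Cho": 9},
    "regulation plus compensation": {"Ann": 4, "Bob": 4, "Cho": 8},
}

def collective_utility(option):
    # Collective utility is the sum of the citizens' utilities; the sum is
    # meaningful only given interpersonal comparability.
    return sum(informed_utility[option].values())

recommendation = max(informed_utility, key=collective_utility)
print(recommendation, collective_utility(recommendation))
# 'regulation plus compensation' with collective utility 16
```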
9.3. From Efficiency to Collective Utility

The chapter's model of regulation applies the account of collective rationality in Weirich (2010a). This account adopts only standards of rationality for collective acts that directly follow, using epistemic game theory, from standards of rationality for individuals. In ideal coalitional games, it derives in this way the collective standard of efficiency.

Economics introduces two types of efficiency, which I sketch and which Chipman (2008) explains thoroughly. Each type of efficiency, for acts as well as for outcomes they produce, depends on a type of superiority. One collective act is (strictly) Pareto superior to another if and only if the first is better for all, and it is (weakly) Pareto efficient if and only if no alternative is (strictly) Pareto superior to it. One collective act is Kaldor-Hicks superior to another if and only if switching from the second act to the first produces gainers who can compensate losers without becoming losers themselves, and it is Kaldor-Hicks efficient if and only if no alternative is Kaldor-Hicks superior to it. Kaldor-Hicks efficiency coupled with suitable compensation constitutes Pareto efficiency. The arrangements for compensation that I consider are not just possible but executed to give individuals reasons to participate in collective acts that maximize collective utility.
Adequate compensation for an individual's loss from a collective act that maximizes collective utility considers the alternative that would have been realized if the individual losing had not participated in the collective act, and consequently the collective act had not been realized. A collective act that maximizes collective utility and provides adequate compensation is Kaldor-Hicks superior to the collective act that would be realized in its place. Adequate compensation for participation in the maximizing act gives a loser at least what he obtains in the nearest alternative, if this is the same collective act given any individual's withdrawal from the maximizing act, as in bargaining problems. If the collective act that would be realized in place of the maximizing act depends on who withdraws from the maximizing act, as in coalitional games, then adequate compensation yields a collective act that is Kaldor-Hicks superior to, for each individual, the collective act that would be realized if that individual were to withdraw from the maximizing act.

In a regulatory problem, the model assumes that adequate compensation is possible, that the mechanism of compensation operates without cost, and that the sequence of collective acts consisting of maximizing utility and then compensating losers has maximizing components and itself maximizes collective utility because it redistributes collective utility without cost. Compensation gives a member who loses by helping to maximize collective utility an amount of personal utility at least equal to the amount of personal utility he loses by helping. It is collectively rational to maximize collective utility if the gainers compensate the losers with utility transfers to produce an outcome that is better for all than failing to maximize collective utility. Compensation ensures that each member, and moreover each coalition of members, does at least as well with the regulatory option as acting alone. The result is a core allocation. The players agree to its realization by binding contract. The contract, a binding agreement to adopt a regulatory option and compensate losers, is an outcome of the citizens' negotiations.

Compensation, which may come in the form of an expectation, enlists the participation of all in collective acts that maximize utility. Given any collective act that fails to maximize collective utility, agents have incentives and the means to maximize instead. Adopting a contract to realize a regulatory option and compensate losers maximizes collective utility and is a solution of the citizens' negotiation game, a rational collective act. The model adopts collective-utility maximization, not as a general method of resolving social issues, but as a method of resolving regulatory issues given the model's assumptions.
Support for the requirement of collective-utility maximization assumes, as Weirich (2017) shows, that collective rationality requires efficiency when conditions are ideal for coordination and cooperation, as in the model, and notes that the model's assumptions also make maximizing collective utility necessary for efficiency. With a mechanism for adequate compensation, only collective acts that maximize collective utility are efficient. For any act that fails to maximize, another exists that maximizes and, after compensation, realizes an outcome Pareto superior to the first act's outcome. Maximizing collective utility in a group with a suitable institution of compensation is (weakly) Pareto efficient because no alternative is better for each member of the group. Moreover, because each member prefers an act that maximizes collective utility conjoined with an act that compensates losers in a way that maximizes collective utility to any alternative that does not maximize collective utility, no such alternative is (weakly) Pareto efficient.

The negotiations produce an agreement concerning a package of a regulatory option and compensation. The package maximizes collective utility, and its two steps each maximize collective utility. The first step may be a regulatory option that maximizes, and the second step may be a compensatory, collective-utility-preserving transfer of utility to those under a net burden from the regulation. Given that an agency's proposal may include compensation as well as regulation, and may even count compensation as part of a regulation, the agency may propose not just a maximizing regulation but a maximizing package of a regulation and compensation, with the components bound together by law. The model endorses as reasonable only a regulation and compensation package such that the regulation, the compensation, and the package each maximizes collective utility, either because compensation transfers utility without losing utility or because compensation achieves compliance with the law.11

Rescinding the model's simplifying assumptions generalizes the model. A significant generalization comes from dropping the assumption that citizens have precise, informed probability and utility assignments and then making adjustments to accommodate regulatory pros and cons that resist quantitative appraisal.

11 A series of negotiation games yields consistent regulations if each maximizes collective utility, assuming ideal conditions for the sequence of regulatory decisions and the constancy of relevant information and citizens' basic goals. Weirich (2018a) argues for the consistency of sequences of an individual's rational choices under these conditions, and this consistency for individuals brings collective consistency given these conditions.
Given imprecise probability and utility assignments, assessments of risk become imprecise, principles of rational decision become permissive, as Chapter 5 explains, and therefore the set of solutions to a coalitional game expands.12 A solution, with adequate compensation, maximizes collective utility in its steps and overall according to, for each citizen, some admissible set of probability and utility assignments. A justification of a regulation does not need precise probability and utility assignments for each citizen but only constraints on these assignments that establish that a regulatory option is a solution to the citizens' negotiation game. I do not explore this generalization, but instead, beginning in the next section, address other points that arise when applying the current model to realistic regulation.

An objection to regulations that express the people's will notices the possibility of conflict between the people's will and the people's interest. How should a regulatory agency handle a conflict between the democratic duty of responsiveness and the fiduciary duty of promoting the public's interests? If the agency is responsive to the fully informed will of the people, and the people's basic goals are rational, its responsiveness promotes the people's interest. The people's will and its interest do not conflict if people are rational and fully informed. Suppose, however, that expert information is incomplete so that, even given it, a rational public does not see its interests. In this case, a regulatory agency, being similarly ignorant of the public's interests, does not have evidence of a conflict of duties. The conflict does not have a practical bearing on the rationality of the agency's regulatory actions.

Another objection targets the game-theoretic assumption that the players in a game reach a collectively rational outcome, a solution of their game, if each player's contribution to the outcome is rational. The assumption about collective rationality presumes, as stated, that rationality's requirements for individuals depend on their conditions for cooperation and coordination and, also, that the players' strategies completely constitute their realization of the game's outcome. An event, represented by a proposition that individuates the event finely, has a context-sensitive occurrence. Constitution of acts by other acts is context-dependent. For example, moving the white queen constitutes putting the black king in check only given the context of a game of chess and a certain position of pieces on the chess board. Constitution of a collective act by acts of individuals is similarly context-sensitive.

The objection typically advances examples to argue that rational acts of individuals may constitute an irrational collective act.

12 Sahlin and Persson (2004) recommend using a range of probability assignments when evidence is sparse.
Suppose that the members of a regulatory agency each act rationally in a way that an irrational law requires, and consequently the agency imposes an irrational regulation. By the members' acts, the agency imposes the regulation, but the members' acts do not constitute the imposition of the regulation. An additional constituent is the law's authorization of the agency to impose the regulation. Without this authorization, acts of the agency's members do not impose a regulation. The members' acts together with the law's authorization constitute the imposition of the regulation. The collective act that the agency's act produces depends on the legal context, and not just on the acts of the agency's members.

The rational acts of the agency's members entail the rationality of the agency's act that their acts constitute, namely, the formulation of a regulation. The agency's act is rational although the law's imposition of the regulation is not rational because the law is irrational. The case does not refute the game-theoretic assumption about collective rationality, because the case does not meet the assumption's presumption that only acts of the agency's members constitute the collective act evaluated by evaluating the participation of each member. The acts of the agency's members do not exclusively constitute imposition of the law. The chapter's model assumes that social institutions, including the law, are rational so that an agency's act is rational given the rationality of the contribution of each member of the agency. Similarly, the context for the negotiation game resolving a regulatory issue ensures that an agreement reached is rational given the rationality of each citizen.
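To close the section, here is a small sketch, with invented payoff vectors, of the two superiority tests defined earlier in this section and of the way executed compensation converts Kaldor-Hicks superiority into Pareto superiority; it is an illustration, not the book's formal apparatus, and it assumes transferable, interpersonally comparable utility.

```python
def pareto_superior(a, b):
    """Act a is (strictly) Pareto superior to act b if a is better for all."""
    return all(a[i] > b[i] for i in a)

def kaldor_hicks_superior(a, b):
    """Act a is Kaldor-Hicks superior to act b if, after switching from b
    to a, the gainers could compensate the losers without becoming losers;
    with transferable utility this holds just in case a's total exceeds b's."""
    return sum(a.values()) > sum(b.values())

status_quo = {"Ann": 5, "Bob": 4, "Cho": 1}   # hypothetical payoffs
regulation = {"Ann": 2, "Bob": 3, "Cho": 9}

print(pareto_superior(regulation, status_quo))        # False: Ann and Bob lose
print(kaldor_hicks_superior(regulation, status_quo))  # True: total 14 beats 10

# Executed compensation turns the Kaldor-Hicks-superior act into a
# Pareto-superior package: transfer part of Cho's gain to Ann and Bob.
compensated = {"Ann": 5.5, "Bob": 4.5, "Cho": 4.0}    # same total of 14
print(pareto_superior(compensated, status_quo))       # True
```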
9.4. Comparisons

This chapter's method of evaluating a regulation, and other proposed methods of evaluating a regulation, do not directly compete because of differences in objectives and assumptions. I compare this chapter's evaluation of a regulation using collective utility with other methods of evaluation mainly to highlight the distinctive features of evaluation using collective utility.

Shrader-Frechette (1991) proposes a method of rationally evaluating societally imposed risks to decide whether to accept them.
As a rational method of acceptance, she proposes "scientific proceduralism," which uses science within a democratic process that resolves conflict between experts and lay people over evaluation of a risk according to moral criteria such as equity in the distribution of risks. In contrast, this chapter offers a general account of regulation of risks, not just societally imposed risks, and, instead of a procedural account of rational choice, formulates a substantive account that evaluates responses to risks by considering their consequences. In the chapter's model, moral criteria influence each individual's evaluation of a policy that affects the distribution of risks but may not bring consensus. If the criteria do not bring consensus, then the model uses the theory of coalitional games to resolve differences among the individuals' evaluations of the policy.

The model's normative principles evaluate a regulation for rationality, not the procedure that produced the regulation. The principles identify rational acts independently of the acts' occurrences and do not derivatively classify an act as rational or not rational according to the procedure that produces the act, for some acts evaluated never occur and so cannot be evaluated according to the procedure that generated them. For example, a regulatory agency may fail to enact a policy rational for it to enact; the policy's classification as rational is independent of procedures for generating policies. The chapter separates evaluations of acts from evaluations of procedures that yield acts, and allows for an irrational act's issuing from a rational procedure, and for a rational act's issuing from an irrational procedure.

Thaler and Sunstein (2009) and Sunstein (2014) promote a society's nudging a person toward beneficial decisions by influencing the person's deliberations. To identify beneficial behavior, they use a person's preferences corrected to remove inconsistencies and to accommodate available information.13 The regulatory procedure in this section's model also revises an agent's preferences using available information but assumes that agents begin with consistent and rational preferences. The model, as an idealization, takes an agent's basic goals to be rational singly and together. It saves for future research ways of rolling back the idealization that citizens have rational goals. Correcting a citizen's preference may have indeterminate results because of multiple ways of making the correction. However, the model's correction of preferences involves just adding information and revising information-sensitive preferences without changing basic preferences.

13 Hausman (2012: 83–87) argues against taking a person's welfare as satisfaction of the person's purified preferences.
A trustee's job is only to use expert information to recalculate utilities, and thereby revise preferences, for a client. In the model, the results are determinate.

Besides maximization of collective utility, another common regulatory principle is the Precautionary Principle. It urges regulation to reduce risk despite ignorance of the results of such regulation. The Precautionary Principle states that if an act may, but does not demonstrably, cause harm, a reasonable regulation may nonetheless prohibit the act. An act that the principle targets may not generate a physical risk of harm but may still create an information-sensitive, evidential risk of harm because the act is not demonstrably safe; with respect to available information, the act has a significant evidential probability of harm. Public policy sometimes ignores evidential risk, that is, the risk of the unknown. The Precautionary Principle attempts to correct this neglect. Lack of evidence that a regulation will reduce a risk is not a sufficient reason against the regulation, it says. The Precautionary Principle responds to information-sensitive, evidential risks, but responds imprecisely, because it does not state necessary and sufficient conditions for regulatory measures.

Sunstein (2005) criticizes the Precautionary Principle as a means of addressing risk. It is inconsistent, he says, because it authorizes preventing a risk by means that create a risk it authorizes preventing. Peterson (2006) makes a similar objection.14 Steel (2014) refines the Precautionary Principle for a decision about the regulation of a risk given scientific uncertainty about the results of regulatory options, including the option of not regulating the risk. In particular, he revises the principle to put aside charges that it is inconsistent (2014: 199, 218–24). As he formulates it, the Precautionary Principle proposes a proportionate and efficient response to a risk of harm despite scientific uncertainty about results of options. The principle takes such scientific uncertainty to exist when no well-confirmed scientific model is available for predicting results. Steel takes his version of the principle to be independent of, and in conflict with, standard decision principles.15

This chapter takes the Precautionary Principle as a plea for attention to evidential risks that arise given ignorance. It reconciles the principle's message with standard decision principles by attending to the effect of regulatory policies on evidential risks.
14 Weirich (2005) comments on Sunstein’s approach to regulation of risks.
15 Randall (2011) also articulates a role for precaution in risk management.
Taking an option's evidential risk as a consequence of the option makes the expected-utility principle take account of the option's evidential risk and so, within a standard decision framework, accommodates the motivation for the Precautionary Principle. Chapter 5's version of expected-utility maximization acknowledges risk generated by the unknown, that is, evidential risk, as a consequence of some public policies, and this chapter's evaluation of a regulatory option using collective utility assesses the evidential risk the option generates. Maximizing collective utility after assessing evidential risks, and providing appropriate compensation, ensures proportionate and efficient responses to risks. Established principles of decision and game theory yield a precise, consistent method of evaluating a regulation using collective utility.

A plausible interpretation of the Precautionary Principle counts in favor of a policy a reduction in evidential risks and counts against rival policies failures to reduce evidential risks. Thus, the principle agrees with maximization of collective utility, after discounting the collective utility of a policy according to the evidential risk that the policy brings. Building evidential risks into the consequences of policies reconciles the Precautionary Principle with standard principles of collective choice.
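Schematically, and in my notation rather than the book's, the reconciliation treats a policy o's evidential risk r(o) as one of its consequences in each citizen i's mean-risk evaluation and then sums across citizens:

```latex
% A schematic formulation (my notation, not a quotation from the book).
\[
  U_i(o) \;=\;
  \underbrace{\sum_{s} P_i(s \mid o)\, u_i(s)}_{\text{expected utility ignoring risk}}
  \;+\;
  \underbrace{IU_i\bigl(r(o)\bigr)}_{\text{intrinsic utility of the risk}},
  \qquad
  CU(o) \;=\; \sum_{i} U_i(o).
\]
```

Because a risk-averse citizen's intrinsic utility IU_i(r(o)) is negative and shrinks in magnitude as the evidential risk r(o) shrinks, a policy that reduces evidential risk thereby gains collective utility, which is just the concession the Precautionary Principle requests.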
9.5. Evidential Risks

Information-sensitive, evidential risks, as well as physical risks, motivate reasonable regulation. Evidential risks exist objectively, although they are relative to evidence.16 An information-sensitive, evidential risk and a rational basic intrinsic aversion to it may persist given a rational assessment of all available scientific information. Principles of democratic government allow for economically feasible regulations that reduce, or that permit citizens to reduce, information-sensitive, evidential risks.17 This section justifies regulation of evidential risks.
16 Dawid (2017) reviews interpretations of probability and adopts a personalist interpretation for the probability involved in the risk that a particular bad event occurs. He holds that an agent’s probability assignments, if calibrated with events, agree with other agents’ calibrated, personalist probability assignments, and thereby possess a type of objectivity. 17 Aven, Renn, and Rosa (2011) distinguish physical risk and evidential risk according to the type of probability they involve, respectively, either frequentist or evidentialist. When considering regulations to reduce air pollution, Smith and Gans (2015) distinguish statistical or “aleatory” uncertainty about events from “epistemic” uncertainty, or lack of knowledge of risk relationships among events. They hold that risk analysis should attend to both types of uncertainty. Epistemic uncertainty, as they define it, creates evidential risks, the topic of this section.
The FDA reasonably banned genetic therapy to treat "Bubble Boy" disease (a severe deficiency of the immune system) when subjects in French trials contracted leukemia. It did not know the physical risk of leukemia but responded to an evidential risk of leukemia. Researchers did not have evidence that the genetic therapy was safe, or at least had manageable side effects, nor did they have evidence that it was unsafe in patients older than those in the French trials. Regulators acted reasonably to prevent evidential risks of leukemia. Arguing that no proof exists that the ban reduces a physical risk of harm does not weaken a case that the ban reduces, and in fact provably reduces, an evidential risk of harm. In this chapter's model, a justified regulation maximizes collective utility. A regulation may maximize collective utility by reducing evidential risk.

As another example, consider OSHA's regulation of the workplace to improve workers' safety and health. Legislation makes reduction of risks of cancer part of OSHA's mandate. The legislative process furnishes evidence that a majority is against exposure to carcinogens in the workplace. Although benzene is known to be a carcinogen causing leukemia, the dose-response curve is unknown. No level of exposure to benzene is known to be safe. On the other hand, no level of exposure is known to be unsafe. A regulation to reduce workplace exposure to benzene by having workers use gloves and respirators has a positive evidential probability of reducing cases of cancer among workers despite the absence of a proof that it has a positive physical probability of reducing cases of cancer among workers.

In 1978, OSHA proposed new regulations lowering workplace exposure to benzene. The Supreme Court struck down the new regulations because science had not proven that lower exposure rates would prevent cases of cancer. Science had not discovered the physical probabilities of cancer given various rates of exposure to benzene. However, a reduction of the evidential probability of cancer may warrant a regulation. A reduction in the rate of exposure to benzene may lower cancer's evidential probability given all available information. To illustrate this type of justification of a regulation, I move to an idealized version of the regulatory issue placed within the chapter's model of government and citizens.

An evidential risk is either an evidential chance of a bad event or else an option's exposure to evidential chance. In the case of benzene, one pertinent risk is an evidential chance of a bad event, cancer. Every citizen with an aversion to cancer has an intrinsic aversion to this risk, if rational.
Another pertinent risk is the exposure to evidential chance resulting from a probability distribution of utilities of possible consequences, with the probability distribution recording evidential probabilities. The model assumes that citizens have a basic intrinsic aversion to risk in the sense of exposure to chance, so that mean-risk evaluation of options applies, and, moreover, assumes that the intensity of a citizen's intrinsic aversion to a risk of this type depends only on the size of the risk, as in Chapter 8, so that an agency can calculate for a citizen an informed intrinsic utility of the risk, using an informed assessment of the risk's size. For a citizen, a regulation's informed evaluation uses the regulation's expected utility ignoring its risk, in the sense of its exposure to chance, and the intrinsic utility of the regulation's exposure to chance. The expected utility uses an expert probability distribution of possible consequences, including cases of cancer, and the intrinsic utility targets the regulation's exposure to chance as an expert assesses the size of the exposure to chance.

Information-sensitive, evidential risks arise because of uncertainty that physical risks are absent. These evidential risks are greater the less information exists about dose-response curves, other things being equal. Given ignorance of a safe level of exposure to benzene, lowering exposure levels reduces evidential risks even if scientific studies do not ensure reduction of physical risks. Available information does not establish a safe level of exposure to benzene, and hence any level of exposure to benzene creates an evidential risk. A citizen has an intrinsic aversion to an information-sensitive, evidential risk of cancer from exposure to benzene that robustly persists given acquisition of available scientific information. Although new information may affect the size of the risk, as long as the risk remains, the citizen is averse to it. Given all available scientific information, exposure to benzene carries for her an evidential risk of cancer because of the unknown long-term effects of exposure to benzene. The intensity of the aversion to the risk may change with new information, but the aversion persists because available information does not remove uncertainty about the dose-response curve. Reasonable regulation attempts to reduce the evidential risk that available information does not dispel.

Of course, regulation has costs that generate opposition. In the chapter's model, social institutions offer means of compensating those who bear the costs.
The government may impose a workplace regulation that increases worker safety and health benefits at a cost to an industry that the government compensates with a tax break funded by savings from health care and disability payments and by increased revenue from income taxes that workers pay because of their greater longevity in the workforce. The industry, although it loses because of the regulation, gains on balance from other legislation. The compensation for the industry's costs comes from a third party, government, which belongs to the society with respect to which the regulation, if justified, maximizes collective utility.

In the model, a justification of regulation of exposure to benzene shows that compliance with the regulation lowers an information-sensitive, evidential risk by lowering the evidential probability of cancer, and shows that lowering exposure to benzene lowers the evidential probability of cancer by an amount that justifies lowering permissible levels of exposure to benzene. Whether lowering exposure levels produces a net benefit depends on trade-offs among goals. An informed and rational citizen balances the utility of a reduction in information-sensitive, evidential risk against the chance that the reduction in risk will not actually prevent any cases of cancer. Some informed citizens may rationally prefer the regulation because of its reduction in the information-sensitive, evidential risk of cancer from exposure to benzene, but others may rationally oppose the regulation because of the chance that it will not promote health. Assessments of the regulation by individual citizens direct their behavior in the negotiation game that settles the model's recommendation concerning standards for exposure to benzene. According to the model, lowering permissible levels of exposure is justified if the public agrees on the regulation in an ideal negotiation game. They agree, after arrangements for compensation, if the regulation maximizes collective utility. A regulation with suitable compensatory arrangements is reasonable in the chapter's model if it maximizes collective utility considering all relevant pros and cons, including evidential risks.

This section does not argue that OSHA's standard for exposure to benzene is justified because it maximizes collective utility. Such an argument requires details of the calculation of citizens' informed rational assessments of the regulation and its rivals. Also, applying the chapter's model of regulation to an idealized version of the case of workplace exposure to benzene does not yield a practical recommendation because the model and the idealized version of the case have assumptions that the actual case does not meet. This section only sketches the lines of an argument that an information-sensitive, evidential risk justifies a regulation to reduce the risk because if citizens are reasonable and informed, and those bearing its costs receive compensation, they want the regulation.
The illustration just highlights considerations that reasonable regulation reviews because they affect the utility a rational and informed citizen assigns to a regulation. Although the illustration does not settle regulation of exposure to benzene, it displays the role of information-sensitive, evidential risks in identifying reasonable regulations.
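To make the mean-risk evaluation concrete, here is a minimal numerical sketch for one citizen; the functional form follows the mean-risk analysis described above, but every number is invented for illustration.

```python
# Mean-risk evaluation of a benzene exposure standard for one citizen.
# All numbers are hypothetical; utilities sit on the citizen's informed scale.
def informed_utility(eu_ignoring_risk, risk_size, aversion_intensity):
    # Expected utility ignoring the option's exposure to chance, plus the
    # intrinsic utility of that exposure; the intrinsic utility is negative
    # and, by assumption, depends only on the risk's size.
    return eu_ignoring_risk - aversion_intensity * risk_size

# Expert assessments (invented): lowering the exposure limit shrinks the
# evidential risk of cancer but carries economic costs.
status_quo  = informed_utility(eu_ignoring_risk=10.0, risk_size=0.8, aversion_intensity=5.0)
lower_limit = informed_utility(eu_ignoring_risk=9.0,  risk_size=0.3, aversion_intensity=5.0)

print(status_quo, lower_limit)  # 6.0 versus 7.5: this citizen favors the regulation
```

Other citizens, with smaller aversion intensities or different cost assessments, may rank the options the other way, which is why the model settles the matter with the negotiation game and compensation.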
9.6. Legal Objectivity

Removing the simplifying assumptions of this chapter's model of government regulation of risks generalizes the model. The model makes simplifying assumptions about regulatory law, and removing the assumptions requires adjusting the justification of a regulation to conform to legal practice, as described by Adler (2003), for example. Justifying application of a law is a public enterprise. Although the model just assumes that the law has access to information about a regulation's effect on an evidential risk, in realistic cases, courts need ways of publicly establishing that a regulation reduces an evidential risk.

When a regulation goes to court, a common complaint against it is that scientific evidence does not establish that the regulation will prevent harm. The complaint asks a regulation to address risks that exist because of physical probabilities of harm. These are frequency-type probabilities that classical statistics investigates. Some risks exist because of evidential probabilities of harm. These are belief-type probabilities that direct everyday decisions such as how fast to drive. These probabilities are relative to evidence. Reasonable regulation takes account of risks existing because of either type of probability. Even if science does not show that a regulation reduces a physical risk, the motivation for the regulation need not be a baseless fear but may be instead a reasonable reduction of an evidential risk.

No one objects to a citizen's using her personal, evidential probability assignments to direct a personal decision. An insistence on objectivity arises in public justification of governmental policies. A public justification of an act achieves objectivity in a social sense. Some evidence is introspective and private rather than scientific and public. Some cogent methods of inference are subjective rather than objective. Publicly justifying an act has a social rather than an epistemic motivation. The law makes rulings with momentous consequences, and backs its rulings with force.
Justice requires accessible and convincing support for its rulings and their enforcement. Support for a risk's regulation demands the risk's objective demonstration. Regulations targeting evidential risks must satisfy general legal principles that require the regulations to have an objective justification. Policy makers must publicly justify steps to reduce risk. The justification need not support a regulation with philosophical certainty; legally cogent grounds suffice. This section shows the possibility of objectively establishing the existence of evidential risks and establishing the effectiveness of methods of reducing them. It shows that the existence and the reduction of an information-sensitive, evidential risk are objective matters in an appropriate legal sense.

An evidential risk a citizen faces is objective, although relative to the citizen's evidence. It is objective even when its size is imprecise because it arises from imprecise probabilities and utilities. The law commonly recognizes grounds of rulings that are personal and imprecise but nonetheless objective. For example, it recognizes as objective a loss from an auto accident even if the loss is personal and imprecise. Regulators may objectively assess an evidential risk, meeting the standards of legal practice, to publicly justify a regulation that reduces the risk. For example, legal proceedings may show that lowering exposure to benzene reduces the evidential risk of cancer by reducing the evidential probability of cancer. Without showing the precise magnitude of the reduction, they may show that the reduction is significant. The reduction of an evidential risk is an objective fact, not just a perception.

Information-sensitive, evidential risks may be objectively assessed with respect to expert information. A reasonable person with all available scientific information may find that genetically modified (GM) food carries an evidential risk because of the unknown long-term effects of eating such food. If a consumer avoids GM food because of this risk, the consumer is not just responding to a perceived risk but also is avoiding an objective, evidential risk. Moreover, it is reasonable to avoid an evidential risk even if one has not personally assessed its magnitude with respect to expert information. Without all available information, and so without knowing the magnitude of the objective, evidential risk of an allergic reaction to some GM food, a reasonable consumer may believe that it is large enough to warrant avoiding this food. A regulation for labeling GM food gives a consumer a means of avoiding this risk, as Weirich (2007a) explains.

The law in the United States controls the evidence that opposing sides in a court case may present to the jury and maintains standards for expert witnesses who present evidence.
Kadane (2008) describes the laws regulating the testimony of expert witnesses.18 An expert witness must apply a reliable method of inference to draw conclusions concerning a case. Reliability is established by publication in appropriate journals, error rates, and general acceptance. If reliable inference backs the existence of evidential risks, then experts may present them in court cases concerning regulations. The evidence is required to be objective, not in a philosophical or statistical sense, but rather in the sense of support by methods of reliable inference, as legally characterized. An evidential probability's assessment is objective in this sense, even if some experts disagree on its assessment. Evidence in the legal sense may justify a reduction in the evidential risk of illness or injury. Rules of evidence acknowledge means of objectively establishing that a regulatory measure reduces an evidential risk.

Finkelstein and Levin (2008: 44) state that a 1993 decision in Daubert v. Merrell Dow Pharmaceuticals takes reliability to require grounding in scientific knowledge. The ruling asks whether, prior to an expert witness's use of a scientific theory, the theory's reliability had been established. It asks whether "(1) the theory had been tested; (2) the theory had been peer-reviewed and published; (3) application of the theory had a known rate of error; (4) the theory had been generally accepted in the relevant scientific community; and (5) the theory was based on facts or data of a type reasonably relied on by experts in the field." These indicators are not necessary and sufficient conditions for reliability, so other indicators of reliability may replace them as a particular case warrants. However, Bayesian statistics, which treats evidential probabilities, meets these criteria of reliability. Kadane and Woodworth (2008) demonstrate and defend, according to the standards of reliability stated, the application of Bayesian statistics in legal proceedings. Bayesian statistics conforms with the acknowledged indicators of the reliability of an expert's testimony in court, including general acceptance of the methods the expert uses to reach conclusions.

Solutions to coalitional games are controversial, so solutions in the chapter's model may lack public justification and so fail to publicly support a regulation.

18 Kadane (2008: 42) presents Rule 702 of the Federal Rules of Evidence, which Congress adopted in 1975: "If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion or otherwise, if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case."
Nonetheless, the model's methods yield a definite verdict about a regulation in cases where the negotiation game has a clear solution, and also in cases where all plausible candidates for a solution support the regulation. The negotiation game just makes vivid the objective features of solutions, such as a reduction in an evidential risk, that lead informed, rational citizens to support the regulation. A reasonable regulatory action may be characterized by its objective features and then justified by them without appeal to the moves of citizens in the negotiation game. The reasonableness of the action is objective although dependent on the basic goals of the citizens and the expert information that directs the action. In some cases, expert testimony, without settling controversies of game theory, presents an objective justification for a regulatory action.
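For a flavor of the Bayesian statistics just mentioned, here is a minimal beta-binomial illustration, with invented numbers, of how an evidential probability of harm responds to the kind of data an expert witness might present.

```python
# Updating an evidential probability of harm as data accumulate, using a
# beta-binomial model; the prior and the counts are invented for illustration.
def posterior_mean(prior_a, prior_b, harms, trials):
    # Beta(prior_a, prior_b) prior over the harm rate, binomial likelihood;
    # the posterior mean serves as the updated evidential probability of harm.
    return (prior_a + harms) / (prior_a + prior_b + trials)

before = posterior_mean(1, 1, 0, 0)    # 0.5: uniform prior, no data yet
after  = posterior_mean(1, 1, 3, 40)   # about 0.095 after 3 harms in 40 cases
print(before, after)
```

The method's published error analysis and general acceptance, of the sort Kadane and Woodworth (2008) document, are what let such an update count as reliable inference in the legal sense.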
9.7. Summary

This chapter shows, within its model, how a regulatory agency may justify a resolution of a regulatory issue using Bayesian methods, mean-risk utility analysis, and game theory. The model, with slight adjustments, shows that government regulatory policy should acknowledge information-sensitive, evidential risks and their objectivity in the relevant legal sense. Evidential risks, as well as physical risks, may objectively justify reasonable regulations.
10 Rolling Back Idealizations

A model of rational choice typically incorporates idealizations about agents, their circumstances, and their decision problems. Realism comes from generalizing the model by dropping idealizations and revising principles of rationality to accommodate the changes. A step toward realism moves from (1) a model, with fully informed agents, that adopts the principle to maximize utility to (2) a more general model, with agents who may be incompletely informed, that adopts the principle to maximize expected utility, assuming that agents have a probability function and a utility function for possible outcomes of options. A further step moves to an even more general model, with agents who may have imprecise probabilities or utilities for possible outcomes, represented by a set of pairs of a probability function and a utility function, that adopts the principle to maximize expected utility according to some pair in the set. As a sample of additional, future steps toward realism, this chapter sketches ways of generalizing Chapter 5's model to accommodate nonideal cases with (1) costs and limits on reflection, (2) uncertainty about the content of propositions, and (3) unstable utility assignments.
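A minimal sketch of the permissive rule in the most general model just described: an option is choosable if it maximizes expected utility according to some probability-utility pair in the agent's admissible set. The states, options, and numbers are hypothetical.

```python
# Permissive choice with imprecision: imprecision is represented by a set of
# (probability, utility) pairs, and an option is choosable if it maximizes
# expected utility according to some pair in the set.
states = ("s1", "s2")

def expected_utility(option, p, u):
    return sum(p[s] * u[(option, s)] for s in states)

def choosable(options, admissible_pairs):
    chosen = set()
    for p, u in admissible_pairs:
        chosen.add(max(options, key=lambda o: expected_utility(o, p, u)))
    return chosen

# Two admissible probability functions sharing one utility function.
p1 = {"s1": 0.7, "s2": 0.3}
p2 = {"s1": 0.4, "s2": 0.6}
u = {("a", "s1"): 10, ("a", "s2"): 0, ("b", "s1"): 4, ("b", "s2"): 6}

print(choosable(("a", "b"), [(p1, u), (p2, u)]))
# {'a', 'b'}: each option maximizes for some admissible pair, so both are permitted
```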
10.1. Heuristics

Deliberation using beliefs and desires lowers deliberation costs and is reasonable in typical decision problems for humans and other agents with limited cognitive abilities. This section shows how this common form of deliberation can win its competition with deliberation that uses probabilities and utilities.
10.1.1. Belief-Desire Deliberation

An act's evaluation is positive if the act is best, or is expected to be best, in the agent's situation. An outsider may conduct an evaluation of an agent's act.
In a decision problem, identifying a best act and performing it is often a costly procedure for an agent to follow because it requires identifying all possible acts and comparing them. Chapter 5's model takes maximization of expected utility as a standard that a rational choice meets rather than as a procedure that rational deliberations follow, even if calculating the expected utilities of options has no cost for its cognitively ideal agents. Its standard guides formulation of standards for nonideal agents.

A cognitively limited agent's unconscious, relatively unknown mental activities may suggest an option to perform because of the influence of past experiences. For a limited agent, in some cases following inclination is a better decision procedure than maximizing expected utility, given the costs of a maximizing procedure. Performing the first act that comes to mind, without calculation, may yield a decision that maximizes, or comes close to maximizing, expected utility, especially if the agent cultivates maximizing inclinations. In routine decision problems, people may follow inclination with frequent success, taking success to be approximating a decision that maximizes expected utility.

In decision theory, the heuristics and biases literature claims that people use heuristics, that is, shortcuts or rules of thumb, to deliberate about options. A common shortcut uses beliefs and desires, rather than probabilities and utilities, to reach a decision. Its principle says, roughly, that if you want some event to occur and believe that an act will make it happen, then you should perform the act. For example, if you want to graduate from college and believe that meeting the requirements for a college degree will make this happen, then you should meet the requirements. Decision heuristics using beliefs and desires dispense with idealizations that back expected-utility maximization as a standard of rationality, such as the idealization that an agent has probabilities and utilities sufficient for obtaining the expected utilities of the options in a decision problem and also can effortlessly calculate and compare options' expected utilities.

Standards of bounded rationality, that is, standards for agents with limited resources and cognitive ability, must accommodate the costs of deliberation and the absence of resources for deliberation. These factors may affect the rationality of decisions as well as the rationality of decision procedures.1 Success in deliberation is reaching a decision that complies with applicable standards of evaluation, at least approximately.

1 Icard (2018) considers how to investigate the thought processes of agents who are not cognitively ideal and pay a price for reflection.
Decision heuristics settle for approximate accuracy to reduce deliberation costs. In some cases, the belief-desire heuristic may be justified because it yields approximate accuracy with low deliberation costs.
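As a stylized rendering of this trade-off, the sketch below scores decision procedures by the expected utility of the choice they yield net of their deliberation costs; the procedure names and numbers are invented for illustration.

```python
# A stylized comparison of decision procedures for a bounded agent: what
# matters is the value of the chosen act net of the cost of the deliberation
# that selects it. All figures are hypothetical.
procedures = {
    # (expected utility of the resulting choice, deliberation cost)
    "first idea":          (6.0, 0.1),
    "belief-desire":       (9.0, 0.5),
    "probability-utility": (10.0, 3.0),
}

def net_value(name):
    choice_value, cost = procedures[name]
    return choice_value - cost

best = max(procedures, key=net_value)
print(best, net_value(best))  # belief-desire, 8.5: approximate accuracy, cheaply
```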
10.1.2. A Model Showing Possibility

A model's purpose may be to show how possibly some phenomenon occurs. This section constructs a model that shows how belief-desire deliberations may be advantageous. The model mimics, but simplifies, conditions for humans. It moves toward realism by rescinding the common idealization of a cognitively ideal agent but also by adding simplifying assumptions about an agent's circumstances that support the advantage of belief-desire deliberations. The model's assumptions move toward realism in some ways and away from realism in other ways, with, I assume, net progress toward realism. Despite its simplifying assumptions, the model takes a step toward realism by showing how common belief-desire deliberations may be advantageous in a model with features that are more realistic than the features of common models. Whether belief-desire deliberations are advantageous in the new model is an a priori matter philosophical methods can address. The model shows how possibly belief-desire deliberations are advantageous and so rational.

A probability assignment, to prevent inconsistency, assumes a probability model. This section assumes that an agent's belief-desire deliberations use beliefs the agent forms using a model of epistemic possibilities. Possible worlds or sets of possible worlds that are not certainly nonactual represent epistemic possibilities, which are candidates for belief.2 Risks in the model need not be chances for bad events, but may instead be mere possibilities, without associated probabilities or even sets of probability assignments. The model, to accommodate belief, takes a possibilistic rather than a probabilistic view of risks.

2 Taking a possible world as a maximal consistent proposition, as I do, makes the space of possible worlds epistemic rather than metaphysical. That water is not H2O is true in some epistemically possible world, given only a priori information, although it is not true in any metaphysically possible world. Because metaphysically possible worlds may not provide for all epistemic possibilities, maximally specific epistemic possibilities constructed, as in Chalmers (2011), may replace them.
An agent may form beliefs putting aside certain objections to the belief, namely, certain epistemic possibilities, that models besides the agent's belief-formation model acknowledge. Suppose that an agent cannot distinguish the experience of a red wall from the experience of a white wall bathed in red light. That an agent cannot perceptually discriminate between the two epistemic possibilities does not show the absence of theoretical grounds for favoring one possibility. The agent may believe that the wall is red because he finds far-fetched the possibility that the wall is bathed in red light. He may adopt a model of belief formation that puts aside the epistemic possibility of deceptive lighting that another model acknowledges. Epistemic possibilities of bad events identify evidential risks, and through their model-relativity, make evidential risks model-relative.
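The following sketch, with invented worlds and labels, renders this possibilistic picture: candidates for belief are worlds not certainly nonactual, a belief-formation model puts some aside as far-fetched, and the remaining possibilities of bad events identify the agent's evidential risks.

```python
# A possibilistic model of belief formation and risk (invented worlds):
# no probabilities, just epistemic possibilities that a belief-formation
# model keeps or puts aside as far-fetched.
worlds = [
    {"name": "the wall is red",               "far_fetched": False, "bad": False},
    {"name": "white wall in deceptive light", "far_fetched": True,  "bad": False},
    {"name": "the medication causes harm",    "far_fetched": False, "bad": True},
]

def live_possibilities(model_of_worlds):
    # A different belief-formation model might retain the far-fetched worlds,
    # so which evidential risks exist is relative to the model adopted.
    return [w for w in model_of_worlds if not w["far_fetched"]]

evidential_risks = [w["name"] for w in live_possibilities(worlds) if w["bad"]]
print(evidential_risks)  # ['the medication causes harm']
```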
10.1.3. Research Programs

The belief-desire decision heuristic is familiar and often reliable despite being liable to err. One research program, to support the heuristic for bounded agents, refines the heuristic to make it easier to support. The heuristic may say that an act is rational if the agent believes it will bring something she wants, and she believes that it will not create a possibility of a bad event, that is, a risk. For example, it is rational for an agent to have a drink of water if the agent believes it will slake her thirst and no bad will come from it. This revision accommodates reservations concerning risks that pursuit of a desire may bring. Also, a refinement of the heuristic may move from (1) a belief about a necessary means of reaching a goal and (2) an intention to reach the goal to (3) an intention to adopt the means necessary for attaining the goal. Support for the inference may then come from principles of coherence among intentions and beliefs. Intentions, being all-things-considered attitudes, remove the objection that a desire that is not all-things-considered fails to warrant an act that satisfies the desire if the act prevents satisfaction of stronger desires.

A second research program proposes a relation between belief and degree of belief, as a step toward supporting the belief-desire heuristic indirectly through its relation to probability-utility deliberations. Noting that in a high-stakes context, belief, because of its role in action, requires stronger evidential support than in a low-stakes context, the relation may involve a context-sensitive threshold such that degree of belief above the threshold produces belief, as in Weirich (2004b). Or the relation may take stable degree of belief to lead to belief, as in Leitgeb (2017).
The research program may use probability-utility deliberations, and the relation between belief and degree of belief, to support belief-desire deliberations.

A third research program explains, not how principles of rationality support a heuristic, considering the heuristic's prospects of success in matching a choice issuing from probability-utility deliberations, but how the heuristic has lower cognitive costs than probability-utility deliberations. It also explains why an agent gains from using the belief-desire heuristic instead of using an even simpler heuristic, such as picking the first attractive option that comes to mind. Its objective is to show that the belief-desire heuristic strikes an appropriate balance between cognitive costs and the prospect of a successful choice.

This section pursues a blend of the three research programs. It revises belief-desire deliberation to improve its reliability, assumes some points about the relation between belief and degree of belief, and identifies features of belief-desire deliberation that make it advantageous by lowering cognitive costs. It does not treat selection of belief-desire deliberation among rival forms of deliberation but just compares belief-desire deliberation to probability-utility deliberation. In its model, an agent need not select the form of deliberation she uses. Evolution may have produced an agent with advantageous heuristics. Robot designers may have built them into an artificial agent. This section shows only how in some cases belief-desire heuristics lower cognitive costs without reducing steeply the prospects of a successful choice.
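A minimal sketch of the second program's context-sensitive threshold: degree of belief above a stakes-dependent cutoff produces belief. The threshold function is a made-up illustration, not Weirich's or Leitgeb's proposal.

```python
# A stakes-sensitive threshold relating degree of belief to belief.
def belief_threshold(stakes):
    # Higher stakes demand stronger evidential support before believing;
    # stakes are normalized to [0, 1], and the formula is hypothetical.
    return min(0.99, 0.9 + 0.09 * stakes)

def believes(degree_of_belief, stakes):
    return degree_of_belief >= belief_threshold(stakes)

print(believes(0.95, stakes=0.1))  # True: low stakes, modest threshold
print(believes(0.95, stakes=1.0))  # False: high stakes raise the threshold
```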
10.1.4. Justification

Cognitively limited agents such as humans benefit from deliberation using ordinary beliefs, desires, and judgments of epistemic possibility. A justification shows that this form of deliberation often produces a choice similar to one probability-utility principles endorse and shows that less demanding procedures, such as first-idea choice, lower prospects of success without sufficient compensation from lower deliberation costs.

Part of the context for the belief-desire heuristic, which affects support for the heuristic, is a mechanism for triggering other forms of deliberation when the heuristic is not advantageous, because success is unlikely, because stakes are high and demand more reliable deliberations, or because stakes are so low that they do not justify the heuristic's modest costs.
An agent should abandon the belief-desire heuristic if the stakes do not warrant the heuristic, or circumstances undermine confidence in the heuristic. An agent may use the belief-desire heuristic until she senses trouble, then switch to probability-utility methods. A cook who believes she turned off the oven before leaving the kitchen may go back to check because of the risk of fire from an oven left on. Support for a heuristic does not demonstrate its success in all cases but in a range of common cases.

Cognitive psychologists, for example, Kahneman (2011), propose that people have two systems of choice: system 1, which is fast and intuitive, and system 2, which is slow and reflective. A model that idealizes agents for simplicity, but aims to exhibit the operation of some factors explaining the rationality of choices by human agents, may give agents two systems of choice, like system 1 and system 2, and a control system, always active, that directs without deliberation the operation of these two systems of choice.

Thinking about the design of an agent clarifies justification of belief-desire deliberation. Artificial intelligence designs a machine that thinks efficiently, and operations research designs a corporation that assigns thinking to personnel to reach efficiently acts that are collectively rational. Both a machine and an organization may use multiple methods of deliberation, and a good design uses a combination of methods to efficiently reach successful decisions. Suppose that a designer wants to build an artificially intelligent robot with the two systems of choice. How may the robot use the two systems to advantage? An overall system of thought settles which subsystem of thought a thinker uses when the thinker has two or more subsystems of thought. It sends a decision problem to the most promising subsystem. It engages a slow, reflective subsystem, given time for its operation, when stakes are high and a fast subsystem is apt to make a mistake, but otherwise engages a fast subsystem to save costs. Efficiency justifies using a fast subsystem in routine situations.

The case for the heuristic depends not only on the probability of success but also on the stakes, as measured by the utility difference between a maximizing choice and a minimizing choice. If the difference is large, the stakes are high, and if the difference is small, the stakes are low. That is, stakes are high when the utilities of best and worst options differ greatly, and then reaching the right decision justifies reflection. A good decision heuristic for a case keeps small the expected-utility difference between a successful choice and the result of the heuristic.
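A minimal sketch of the control system imagined for such a robot: it measures stakes as the utility spread between the best and worst options and routes the decision problem accordingly. The thresholds and utilities are invented for illustration.

```python
# A control system that routes a decision problem to a subsystem of choice
# according to the stakes; the cutoffs are hypothetical.
def route(option_utilities, low=1.0, high=10.0):
    stakes = max(option_utilities) - min(option_utilities)
    if stakes < low:
        return "first-idea choice"                 # not worth even modest costs
    if stakes > high:
        return "probability-utility (system 2)"    # reflection pays
    return "belief-desire heuristic (system 1)"    # middling stakes

print(route([5.0, 5.2]))      # first-idea choice
print(route([0.0, 4.0]))      # belief-desire heuristic (system 1)
print(route([-50.0, 40.0]))   # probability-utility (system 2)
```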
Belief-desire heuristics lower deliberation costs, compared to probability-utility methods, and in some decision problems produce the same choice as do probability-utility methods. The heuristics reach a successful, or approximately successful, decision without the costs of probability-utility calculations. Their justification in these decision problems also makes a similar comparison with deliberations having even lower costs and shows that belief-desire deliberations improve the prospects of success enough to warrant their costs. Although rationality, for efficiency in deliberation, favors belief-desire deliberations when stakes are middling, it favors exact methods when stakes are high and first-idea choice when stakes are low, assuming resources adequate for the various types of deliberation. Decision theorists often say that the careful analysis of options that probabilities and utilities provide is worthwhile in important decisions, with adequate resources for deliberations, implying that simple belief-desire deliberation, or satisficing, a version of first-idea deliberation, may be good enough in other decision problems.
10.1.5. Belief and Desire

Belief itself is a heuristic for representation of an agent's doxastic state. People use beliefs, instead of degrees of belief, to simplify reasoning and communication. Communication expresses belief. Communication of probability and utility is too nuanced for ordinary conversation, and best fits an individual's private reasoning. Unlike rational degree of belief, which tracks strength of evidence, rational belief tracks epistemic utility and stakes, as Section 8.6's discussion of inductive risk notes. Human communication condenses information into propositions that assertions express, and that agents may believe or desire to be true. We do not communicate our perceptions, but their propositional upshot, say, after looking at the sky, the proposition that it will rain. For an agent, it is superfluous to think that his degree of belief that it will rain is 60%. He does not need a proposition that states his degree of belief. His degree of belief does its work in action without a propositional specification that argumentation uses, or that communication requires. Deliberation with degree of belief and degree of desire does not use the currency of communication and therefore has a greater cognitive cost than deliberation with belief and desire.
A belief expressed in conversation is advanced for the audience to count on in making decisions. Whether it deserves to be counted on depends on the stakes. When the stakes are high, belief needs strong support by evidence given the role belief plays in belief-desire deliberations leading to a decision; when the stakes are low, modest support by evidence suffices for belief. Norms of assertion require belief in an assertion’s content, and constraints on belief are in line with constraints on assertion. Degree of belief is not the only factor in belief formation. Stakes also matter because belief prompts assertion, which is sensitive to stakes.

Readiness to assert is a sign of belief. This readiness remains if the stakes for other people are low, even if the stakes for oneself are high. Suppose that you are allergic to peanuts but others are not. You tell others that this food does not contain peanuts, but do not count on this in your deliberations about whether you will eat it, because you have more at stake than your audience does. They rely on your assertion in their deliberations, but you rely on your degree of belief, not your belief, in your deliberations.

Although readiness to assert is a sign of belief, an agent may have a belief without being ready to assert it. A juror may believe that a defendant in court is guilty as charged, but she may not say that he is because she is not sure of his guilt. Reasonable assertion considers consequences, and an agent may want to be sure about an assertion that carries weighty consequences. Also, one may believe that a fellow traveler is rude but not assert this because the assertion would exacerbate tensions and provoke more rudeness from the traveler. Even the type of evidence possessed may influence assertion without influencing belief. One may believe that a young man was speeding along a stretch of highway because most young men speed there, but not assert this because of the chance that the young man was an exception to the rule. Belief may be sensitive to context through its association with assertion, which, being an act, is sensitive to context.

In contrast with belief, no threshold of degree of desire settles desire, not even taking account of context, because desire does not have belief’s role in conversation. Nonetheless, desire is readily accessible for use in belief-desire deliberation. People commonly tell others what they want because sharing desires facilitates coordination and cooperation. Communication, belief formation, and desire formation are at the core of cultural transmission. Although it is best in deliberation to separate doxastic and conative attitudes by separating probability and utility, it is convenient to simplify deliberations using tools that communication provides, such as beliefs, despite beliefs’
responding to nonevidential factors such as stakes and the epistemic utilities of beliefs. Belief is a handy tool for deliberation because its role in communication makes it readily available. The belief-desire heuristic drafts the tools of communication for use in choice. The heuristic may be rational in a model in which an agent in a decision problem may obtain relevant information and enlist the aid of others by communicating with them, as in trustee decision problems.
10.1.6. Refinements

The following argument, constructed by a person with a headache, constitutes a practical syllogism, a representation of a type of belief-desire deliberation.

I want relief soon.
Also, I believe I’ll get relief soon if I take an aspirin now.
Therefore, I take an aspirin now.
The premiss about a desire and the premiss about a belief together yield an act as conclusion. A practical syllogism may fail in several ways. (1) The headache sufferer may prefer answering the phone now to taking an aspirin now. Satisfaction of the desire motivating the act may lose in competition with satisfaction of other desires. (2) The headache sufferer may prefer taking an ibuprofen now to taking an aspirin now. A belief motivating the act does not rule out beliefs that other acts are better means of satisfying the desire that motivates the act. (3) The headache sufferer may be against taking pills. The act that the desire and belief motivate may have foreseen, unwanted consequences that outweigh satisfaction of the desire that motivates the act. Support for belief-desire deliberation, besides explaining its low cost and connection to probability-utility deliberation, refines the type of deliberation. To refine belief-desire deliberation, let the desire motivating the act be an all-things-considered desire that a state obtain; restrict belief-desire deliberation to decision problems with just two salient options, namely, performing an act, or not performing it; let the belief motivating the act be a belief that the state desired obtains if and only if the act is performed; and add that, given a way of fixing features of the world that are independent of
the act’s realization, the act’s realization is a matter of indifference. Then an agent’s belief-desire deliberations have the following structure.

I have just two salient options: performing act A or not performing it.
I desire all-things-considered state S’s realization.
I believe that state S’s realization occurs if and only if I perform act A.
I am indifferent to A’s realization given a way of fixing the world’s other, independent features.
Therefore, it is rational for me to perform act A.
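Rendered as code, the refined schema becomes a small decision procedure. The sketch below is mine, made under the schema's assumptions; the boolean attitude inputs are hypothetical stand-ins for the agent's desire, belief, and indifference, not an analysis of those attitudes.

```python
def refined_belief_desire(act, desires_S, believes_S_iff_A, indifferent_to_A):
    """Apply the refined belief-desire schema to a two-option problem.

    desires_S         - all-things-considered desire that state S obtain
    believes_S_iff_A  - belief that S obtains if and only if act A is performed
    indifferent_to_A  - indifference to A's realization, holding fixed the
                        world's other, independent features
    Returns the act when the schema licenses it; otherwise returns None,
    signaling a fallback to probability-utility deliberation.
    """
    if desires_S and believes_S_iff_A and indifferent_to_A:
        return act
    return None

# The headache case with the refinements in place:
print(refined_belief_desire("take an aspirin now", True, True, True))
```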
Although the refined argument represents a form of deliberation immune to the earlier problems, a problem remains. The agent may not believe with certainty that the desired state will obtain if and only if he performs the act. Doubts about this biconditional may undermine the conclusion that the act is rational. Suppose that the headache sufferer thinks that taking an aspirin now might prolong his headache. The epistemic possibility that the act prevents the desired state’s realization may provide a reason not to perform the act.

I construct a normative model with assumptions that resolve this problem. The model adopts simplifying assumptions about the agent and the agent’s decision problem. According to the model, the agent has access, at a cost, to his beliefs, desires, and preferences, and, paying the cost, is certain of all the premisses in the argumentative support that belief-desire deliberation creates for his performing an act. Although the agent faces these access costs and also reasoning costs, the agent is an inerrant reasoner and calculator. Because of the costs of identifying probabilities, calculating expected utilities, and comparing expected utilities, probability-utility deliberation has a higher cognitive cost than belief-desire deliberation. The model shows how possibly, in a two-option decision problem, belief-desire deliberations are rational for an agent with these cognitive limits.

In the model, an agent in a decision problem has available both belief-desire deliberation and probability-utility deliberation. As mentioned, I put aside the method of selecting a type of deliberation and just compare the two types of deliberation to find the one better for resolving the decision problem. To evaluate belief-desire deliberation, I assess its prospects for success and its cognitive costs, and then compare these with the prospects and costs of probability-utility deliberation. For an inerrant reasoner of the model, probability-utility deliberation is sure to succeed in yielding an option that maximizes expected utility, so a comparison with belief-desire deliberation looks at the extent to which belief-desire deliberation decreases the chance of success by decreasing the cost of deliberation.
10.1.7. Support

Causal decision theory, as formulated by Gibbard and Harper ([1978] 1981), computes an act’s expected utility using the probabilities of subjunctive conditionals. Stalnaker (1968) uses “>” as a sentential connective to formulate such conditionals. A conditional formed with the sentential connective “>” holds just in case if its antecedent were true, then its consequent would be true. If A is an act and S is a state, (A > S) is the conditional that if A were performed, then S would obtain. Assume that either (A > S) or (A > ~S) holds, and that either (~A > S) or (~A > ~S) holds. Let P stand for the agent’s probability assignment, U stand for the agent’s utility assignment, and EU stand for the agent’s expected utility assignment. To set the scale for utility, let U(S) = 1 and U(~S) = 0. Then

EU(A) = P(A > S)U(S) + P(A > ~S)U(~S) = P(A > S)U(S) = P(A > S)
EU(~A) = P(~A > S)U(S) + P(~A > ~S)U(~S) = P(~A > S)U(S) = P(~A > S)
A type of biconditional with A and S as constituents, which I write as (A ↔ S), is equivalent to the conjunction of two conditionals: (1) if A were performed, then S would obtain, that is, (A > S), and (2) if A were not performed, then S would not obtain, that is, (~A > ~S). If P(A ↔ S) = 1, so that P(A > S) = 1 and P(~A > ~S) = 1, and consequently P(~A > S) = 0, then EU(A) = 1, and EU(~A) = 0. If P(A ↔ S) < 1 because P(A > S) < 1 and P(~A > ~S) < 1, then EU(A) < 1 and EU(~A) > 0 because P(~A > S) > 0. Suppose, as a simplifying assumption, that a percentage decrease in P(A ↔ S) produces an identical percentage decrease in P(A > S) and also in P(~A > ~S). If P(A ↔ S) drops to 0.6, P(A > S) = 0.6 and EU(A) = 0.6. Also, P(~A > ~S) = 0.6, so P(~A > S) = 0.4 and EU(~A) = 0.4. Then the act A remains a maximizing choice if the decrease in P(A ↔ S) is not more than 0.5. Belief that (A ↔ S) indicates that P(A ↔ S) ≥ 0.5 in all contexts. Under the model’s assumptions, belief that (A ↔ S) suffices for the success of belief-desire deliberation. Acting on the belief that the biconditional holds has the same result as maximizing expected utility. Because belief-desire
deliberation reaches the same choice that issues from probability-utility deliberation, and because belief-desire deliberation has lower costs, using it is justified. To generalize the model, one may treat decision problems with multiple options; relax the assumption that a decrease in the probability assignment to the act-state biconditional equally decreases the probability of the state, assuming the act, and the probability of the state’s opposite, assuming the act’s opposite; and treat trade-offs between the prospect of success and the cost of deliberation. However, as it stands, the model shows that in a decision problem, belief-desire deliberation may be superior to probability-utility deliberation because it has lower cognitive costs and recommends the same act as probability-utility deliberation.
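Under the model's scale for utility and its simplifying assumption, the comparison reduces to simple arithmetic, which the following sketch reproduces; the function name is mine, and the printed cases merely replay the numbers above.

```python
def causal_expected_utilities(p_biconditional):
    """Causal expected utilities of A and ~A on the model's assumptions.

    Scale: U(S) = 1, U(~S) = 0.  Simplifying assumption: P(A > S) and
    P(~A > ~S) both equal P(A <-> S), the probability of the biconditional.
    """
    p_A_then_S = p_biconditional        # P(A > S), so EU(A) = P(A > S)
    p_notA_then_notS = p_biconditional  # P(~A > ~S)
    eu_A = p_A_then_S
    eu_notA = 1.0 - p_notA_then_notS    # EU(~A) = P(~A > S) = 1 - P(~A > ~S)
    return eu_A, eu_notA

for p in (1.0, 0.6, 0.5, 0.4):
    eu_A, eu_notA = causal_expected_utilities(p)
    winner = "A" if eu_A >= eu_notA else "~A"
    print(f"P(A <-> S) = {p}: EU(A) = {eu_A:.1f}, EU(~A) = {eu_notA:.1f} -> {winner}")

# A remains a maximizing choice exactly when P(A <-> S) >= 0.5, the
# threshold that belief in the biconditional guarantees in the model.
```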
10.2. A Perspective in a World

Chapter 5’s model assumes that an agent deliberates using transparent expressions of propositions. An ideal agent fully understands the propositions that that-clauses and such sentential names of propositions express if he knows his position in his world. However, even an ideal agent, because of ignorance of empirical facts, may not fully understand the proposition that a nonsentential name of a proposition expresses, for example, the description, “Frege’s favorite proposition.” Moreover, he may not fully understand the proposition that a sentential name of a proposition expresses if he does not know his position in the world. For example, he may not fully understand the proposition that today is Monday if he does not know the day of the week.

This section relaxes Chapter 5’s idealization of full understanding of propositions relevant to deliberations and then adjusts the standard of rationality for decisions to accommodate partial understanding of these propositions. It retains Chapter 5’s assumption that an agent in a decision problem rationally assigns probabilities and utilities to relevant propositions, such as propositions representing the possible outcomes of options, taking this assumption to entail the agent’s rationally understanding, perhaps partially, the propositions. Given the model’s assumptions, I hold that for an agent in a decision problem, a rational option maximizes (expected) utility according to some understandings of propositions representing options and their possible outcomes.
10.2.1. Relativization

By a full understanding of the proposition a sentence expresses in a context of its use, I mean an understanding such that a probability and a utility assignment to the proposition, resting on the understanding, are not affected by, and so are robust with respect to, additional information about the sentence’s content.3

3 Whether information illuminates a sentence’s content is not always clear. Oedipus knows who Jocasta is and wants to marry her but not his mother. Information that Jocasta is his mother affects his utility assignment to marrying her. However, this information is about the consequences of marrying Jocasta rather than about the content of the proposition that he marries Jocasta.

For an ideal agent who knows his position in his world, a probability assignment to a proposition, understood according to a sentential name of the proposition, is robust this way. However, a probability and utility assignment to Frege’s favorite proposition, understood only according to this description of it, may not be similarly robust. The assignments may change after a sentential specification of the proposition. Also, probability and utility assignments to the proposition that today is Monday may not be robust. An agent may fail to understand fully the proposition that a sentence with a temporal indexical expresses because the agent does not know his temporal location. An agent, not knowing the day of the week, may not fully understand the proposition that the sentence, “Today is Monday,” expresses.

A possible world, limited to features an agent cares about, has a propositional representation, and I take a proposition to have a structure similar to the structure of a sentence that expresses it. Differences in structure distinguish necessary truths, although the same set of possible worlds represents them. I adopt the theory of direct reference, according to which the proposition that “Today is Monday” expresses contains the denotation of “today.” Fully understanding the proposition requires knowing the indexical’s denotation, which depends on the context of the sentence’s occurrence. A full understanding of a proposition includes an understanding of the proposition’s elements and their structure, as Weirich (2010b, 2010c) explains. An agent’s understanding of the proposition that his use of the sentence, “Today is Monday,” expresses is partial if he does not know the denotation of “today.” In contrast, he has a full understanding of the proposition that the sentence, “May 21, 2018, is Monday,” expresses, assuming that he knows the denotation of its elements and its structure.

If today is Monday, then “today” denotes Monday, and the proposition that today is Monday is the same as the proposition that today is today,
because the propositions assert the same identity. For an ideal agent, the probability that today is today, so characterized, is 100%, but an ideal agent may lose track of the day of the week and assign a probability less than 100% to the proposition that today is Monday, so characterized. Even a cognitively ideal agent may not realize that two propositional names denote the same proposition.

An agent’s probability assignment seems inconsistent if the agent assigns two probabilities to the same proposition. Suppose that today is Monday. An apparent inconsistency arises if an agent assigns two probabilities to the proposition that today is Monday, one according to its expression by “Today is Monday” and another according to its expression by “Today is today.” To resolve such apparent inconsistencies for an agent with partial understanding of propositions, this section’s model makes an agent’s probability assignment to a proposition relative to a way of understanding the proposition. An agent may understand the same proposition using the sentence, “Today is Monday,” or using the sentence, “Today is today,” and may assign different probabilities to the proposition relative to different understandings of it. Similarly, a proposition’s utility is relative to a way of understanding the proposition. A proposition may have different probabilities and utilities relative to different understandings of it.
10.2.2. Centered Worlds

Truth conditions for expressions of propositions that use indexicals specify a set of worlds and a center for each world that fixes in the world the denotations of indexicals and so the content of sentences with indexicals, as in Lewis (1979). The truth of the sentence, “Today is Monday,” depends on the world and a center for it, which specifies a time and place, and in general a perspective in the world. A center is an index relative to which the sentence, “Today is Monday,” yields a proposition.

If a world’s representation is a nonindexical description of all relevant facts, the description does not settle the truth of the sentence, “I am hungry,” because it does not settle the denotation of the subject. A center for the world, which settles the denotation of indexicals, supplements the world so that the sentence yields a proposition true or false in the world. Use of the sentence makes the speaker the center, so if Smith asserts the sentence, then he
is the world’s center and the denotation of “I.” If in the world Smith is hungry, Smith’s sentence, “I am hungry,” is true.

Frege observed that if tomorrow I want to say that today is Monday, I say that yesterday was Monday. On Tuesday, “Yesterday was Monday” expresses the same fact as does on Monday, “Today is Monday.” I express the same fact with different sentences. A sentence I use adopts my perspective to express a fact. I change the sentence that expresses a fact as my perspective changes. A change in centers for my world represents the change in perspective.

In an agent’s decision problem, an option’s possible outcome has a propositional representation that specifies the realization or nonrealization of every desire and aversion of the agent. A world has a propositional representation that specifies for every center-independent event that matters to an agent whether the event occurs. That the Berlin Wall fell in 1989 is center-independent. That today is Monday is center-dependent. A centered world involves a world taken as a specification of center-independent events that matter to the agent and hence an outcome. The center settles the agent’s position and perspective in the world, and thereby with other features of the world settles the desires and aversions whose realizations represent an option’s possible outcome in a center-independent way.

In Chapter 5’s model, an agent is certain of the center of her world and takes it for granted. Other features of a world yield center-independent events that represent a possible outcome of an option. In this section’s model, an agent may be uncertain of the center of her world. Different epistemically possible centers combined with other features of a world may yield different possible outcomes. Hence a centered world represents a possible outcome.

In a decision problem an agent confronts, the agent in her situation furnishes a center for worlds that her options might produce. The agent evaluates an option using an expression of the option with an indexical. When Smith evaluates taking a walk, she evaluates her taking a walk rather than Smith’s taking a walk. The two expressions of the same proposition offer different ways of understanding the proposition. An agent assigns a utility to a proposition relative to a way of understanding the proposition. An understanding of a proposition generates epistemic possibilities concerning the proposition’s world and its center that settle the proposition’s utility.

An agent’s utility assignment to a world is relative to a time so that if the agent’s desires and aversions change, the utility assignment may change. The agent may evaluate differently a pain according to whether it is past or future and, similarly, may evaluate differently a risk according to whether it is
past or future, as Section 3.7 notes. The utility of a world may vary with an agent’s position in the world, although the world’s independent features are constant, because an agent evaluates the world’s independent features from a position in the world. If an agent is averse to running risks in the future, then whether in a world running a risk realizes the aversion depends on the agent’s position in the world. Running the risk realizes the agent’s aversion just in case the risk is in the agent’s future in the world. Whether in a world the agent realizes the aversion, given that the world realizes the risk, depends on the agent’s temporal position in the world and so the world’s center. Taking a centered world as a combination of a world and a center for the world makes a world’s features center-independent.

An agent in a decision problem may assume that her perspective provides a center for worlds that may be outcomes of options, although she may not know features of the center. A way of expressing the agent’s ignorance takes the agent’s world as a centered world and takes the agent’s ignorance of features of the center to be ignorance of features of her world. Ignorance of her location is ignorance of a feature of her world’s center. For convenience, I take an agent’s world as a centered world and take ignorance of the center’s features as ignorance of features of the agent’s world. The agent’s perspective, a center for her world, settles the agent’s spatiotemporal location and the agent’s information.

A proposition’s utility evaluates its truth, but the evaluation may differ as the perspective changes, even given a full understanding of the proposition. If an agent is uncertain of her perspective, her evaluation of a proposition may use perspectives that are epistemically possible to obtain the proposition’s (expected) utility. A way of understanding a proposition, and the proposition’s utility given that understanding, both depend on the agent’s information about her position in her world. If for an agent many centers of her world are epistemically possible, then an evaluation of an option uses the expected-utility principle and evaluates the option’s consequences in each epistemically possible centered world that may be her world.
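A centered world, so conceived, is naturally represented as a pair of a world and a center, with utility computed relative to the center. The sketch below is my illustration rather than the book's formalism; the world's single feature, the center's single feature, and the numbers are hypothetical.

```python
from typing import NamedTuple

class CenteredWorld(NamedTuple):
    world: dict   # center-independent features, e.g. when a risk is run
    center: dict  # the agent's perspective, e.g. her temporal position

def utility(cw: CenteredWorld) -> float:
    """Toy evaluation: the agent is averse to risks that lie in her future.

    The same world scores differently from different centers, because
    whether running the risk realizes the aversion depends on whether
    the risk is in the agent's future at the world's center.
    """
    base = 10.0
    penalty = 2.0 if cw.world["risk_time"] > cw.center["now"] else 0.0
    return base - penalty

w = {"risk_time": 5}                   # the world: a risk is run at t = 5
early = CenteredWorld(w, {"now": 3})   # center before the risk: risk is future
late = CenteredWorld(w, {"now": 7})    # center after the risk: risk is past

print(utility(early))  # 8.0  - the future risk realizes the aversion
print(utility(late))   # 10.0 - the past risk does not
```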
10.2.3. Effects of Relativization

The relativization of probability and utility assignments to ways of understanding propositions affects epistemic risks and their combinations. For
someone who does not know that Tully is Cicero, the evidential risk that Tully’s speech will incite a riot is not the same as the evidential risk that Cicero’s speech will incite a riot. The risks involve the same proposition but understood in different ways. Combining the risk that Tully’s speech will incite a riot with the risk that Cicero’s speech will incite a riot may increase the total epistemic risk of a riot. Learning that Tully is Cicero may lower that total epistemic risk.

The relativization of probability and utility to ways of understanding propositions accommodates the effects of framing on preferences. People may fail to see that two sentences express the same proposition because the sentences frame the proposition differently. If they have no excuse for this failure, they are irrational. Ideal agents see that sentences express the same proposition whenever their expressing the same proposition is an a priori matter. They have no excuse for assigning different probabilities and utilities to a proposition relative to expressions that are equivalent a priori. In contrast, humans may have an excuse if the a priori equivalence of the expressions is not obvious.4

Granting that necessarily water is H2O, if an agent sells for $0.80 a ticket that pays $1.00 if and only if water is H2O, then necessarily the agent loses $0.20. The agent’s ignorance that water is H2O is excusable if the agent’s education has this gap through no fault of his own. That water is H2O is not an a priori truth; so, even a rational ideal agent may be ignorant of this fact. An ideal agent is irrational if he makes a system of bets that on a priori grounds yields a net loss. However, a rational ideal agent may make a system of bets that necessarily brings a net loss if the agent’s ignorance of the system’s defect has a good excuse.

4 Although different frames may lead an agent to assign different utilities to the same proposition, not all effects of frames on preferences have this result. Some frames just highlight features relevant to a decision. A variety of frames for a decision problem may highlight different features of the problem, and these features may nudge an agent toward different options. A fully rational agent rationally resolves conflicts between the factors the various frames emphasize when forming all-things-considered preferences among the options. Bermúdez (2018) examines frames for decision problems.
10.2.4. Conditionalization

An agent may learn a new way of understanding a proposition he already knows. For instance, an agent who knows the truism that today is today may learn usefully that today is Monday, although according to the theory of direct reference, the propositions are the same. On being told that today is Monday, the agent acquires a new way of understanding the proposition that today is today and uses the new understanding to update probability and utility assignments to the proposition relative to ways of understanding the proposition. For the agent, the probability that today is Monday, relative to this sentential way of understanding the proposition, rises to 100%.

Conditionalization is a way of updating probability assignments in response to new information. Standard conditionalization covers cases in which evidence is propositional, and a change in evidence comes in the form of a proposition learned. Suppose that from t to t+ an agent learns exactly the proposition E. Also, suppose that Pt is the agent’s probability assignment at time t, and Pt+ is the agent’s probability assignment at the later time t+. According to standard conditionalization, for a hypothesis H, Pt+(H) = Pt(H|E). This section’s model generalizes standard conditionalization (1) to allow for information that is not propositional, so that an agent may acquire new information without learning a new proposition, and (2) to allow for a loss of information.

In this section’s model, the principle of conditionalization uses probabilities that attach to a proposition relative to a way of understanding the proposition. The principle covers cases in which the relevant conditional probability involves a proposition and a condition given a way of understanding each. It accommodates information that comes in the form of a new way of understanding a proposition. Suppose that a rational ideal agent, who knows that today is today, learns that today is Monday. He conditionalizes using the new understanding of the proposition. To update, he uses the old conditional probability of the proposition relative to the old way of understanding it to obtain a new probability of the proposition relative to the new way of understanding it.

Conditionalization involving probabilities of propositions relative to ways of understanding them is a straightforward extension of ordinary conditionalization. To formulate the extension, I use P(A, UA) to stand for the probability of a proposition A relative to an understanding of it UA. Let E be a proposition expressing the evidence, relevant to a hypothesis H, acquired from time t to time t+. Let UEt be the agent’s way of understanding E at t, and UEt+ the agent’s way of understanding E at t+. Let UHt be the agent’s way of understanding H at t, and UHt+ the agent’s way of understanding H at t+. Conditionalization requires that if Pt(H, UHt|E, UEt+) = x, then Pt+(H, UHt+) = x.
Suppose that for an agent at a time the probability that today is Monday, relative to an understanding of the proposition that uses the sentence, “Today is Monday,” is 50%. Letting M stand for the proposition that today is Monday, Pt(M, UMt) = 0.5. At a later time t+, the agent has learned exactly that today is Monday. The agent then assigns probability 1 to the proposition that today is Monday relative to an understanding of it that uses the sentence, “Today is Monday.” The agent’s understanding of the proposition grows so that Pt+(M, UMt+) = 1. The conditional probability Pt(M, UMt|M, UMt+) = 1 shows the result of supposing this growth in understanding. It supposes that at t the agent’s understanding of M is as it is at t+ and so replaces UMt with UMt+ as the agent’s understanding of M at t when obtaining the conditional probability for M at t relative to an understanding of M. In an application of conditionalization to the case, the proposition that today is Monday is both the hypothesis and the evidence. Because Pt(M, UMt|M, UMt+) = 1, and because Pt+(M, UMt+) = 1, according to conditionalization, Pt+(M, UMt+) should be 1. Because Pt+(M, UMt+) = 1, the agent complies with conditionalization.

Standard conditionalization is for cases with only gains in information, and it employs conditional probabilities in which the condition generates a new body of evidence without losing any evidence. However, conditionalization on new evidence emerges from probability assignments that fit the evidence; conditionalization is not a requirement independent of fit with the evidence at a time. Fit with evidence entails updating according to conditional probability when evidence changes exactly by addition of the condition. It also entails changing probability assignments according to conditional probability when the condition, together with background conditions, indicates exactly the evidence remaining after a loss of evidence, taking evidence lost to be either a proposition or a way of understanding a proposition and so information lost.
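A toy implementation makes the relativized bookkeeping vivid. In the sketch below, probabilities are keyed by a pair of a proposition and a way of understanding it; the labels and numbers are illustrative, not the book's notation.

```python
# Probabilities keyed by (proposition, understanding) pairs, so that one
# proposition can receive different probabilities under different
# understandings, as in the Monday example.

M = "today is Monday (= today is today)"   # one proposition, two expressions

P_t = {
    (M, "via 'Today is today'"): 1.0,      # a truism under this understanding
    (M, "via 'Today is Monday'"): 0.5,     # uncertain under this understanding
}

# From t to t+, the agent learns exactly that today is Monday, acquiring the
# sentential understanding.  The conditional probability at t that supposes
# the grown understanding, Pt(M, UMt | M, UMt+), equals 1.
p_conditional_at_t = 1.0

# Generalized conditionalization: Pt+(H, UHt+) = Pt(H, UHt | E, UEt+).
P_t_plus = {(M, "via 'Today is Monday'"): p_conditional_at_t}

assert P_t_plus[(M, "via 'Today is Monday'")] == 1.0   # complies with the rule
print(P_t, P_t_plus)
```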
10.2.5. Self-Locating Propositions

An agent may learn a proposition that specifies the agent’s spatiotemporal location, that is, a self-locating proposition. Such a proposition carries information about the center of the agent’s world. An agent’s conditionalizing on a self-locating proposition may involve conditionalizing on a new understanding of a proposition. Self-locating propositions are prominent in the problem of the Absent-Minded Driver that Piccione and Rubinstein (1997)
present and in the problem of Sleeping Beauty that Elga (2000) presents. This section does not review the extensive literature on these problems to address them thoroughly. It sketches only the Sleeping Beauty problem and uses it only to illustrate some points about learning propositions that are self-locating.5

5 Stalnaker (2008: Chap. 3) argues for conditionalization after acquisition of self-locating information. Bradley (2011) defends conditionalization in the context of the Sleeping Beauty problem.

Sleeping Beauty enters an experiment on Sunday evening. She will wake on Monday. Then the experimenters toss a fair coin and if it comes up heads end the experiment, but if it comes up tails give Beauty a drug that eliminates her memory of waking and puts Beauty back to sleep until Tuesday, when she wakes and the experiment ends. Beauty knows these features of the experiment on Sunday and then takes the probability of heads to be 1/2. When she wakes Monday, not knowing the day of the week, she takes the probability of heads to be 1/3, because she knows if it is Tuesday, tails came up.

In this section’s model, a proposition’s probability assignment is relative to a way of understanding the proposition. Being certain that today is Sunday, and tomorrow that yesterday was Sunday, requires keeping track of one’s position in time. An agent’s evidence of her temporal position grounds a probability assignment to a proposition understood by means of temporal indexicals. On Sunday, Beauty knows her position in time. When Beauty wakes during the experiment, she does not know the day of the week. She has lost some information about her position in time. She does not know when she wakes whether to assert “Today is Monday” or “Today is Tuesday.” As she wakes on Monday, she does not know, of the day of her waking, that it is Monday, although on Sunday she knew of this day that it is Monday.

Suppose that the experimenters tell her that today is Monday so that she regains evidence she lost about her spatiotemporal location. Once told, for her the probability of heads is 1/2, as it was when on Sunday she assigned that probability to heads. Once told, she knows the day of the week and makes the same probability assignment as when on Sunday she knew the day of the week and knew of the day of her current waking that it is Monday.

On Sunday, Beauty knows her location in time and through that knowledge knows propositions about times relative to her time; she assigns probability 1/2 to heads. On waking, Beauty’s new ignorance of her location in time produces a change in her total evidence that changes the probability of heads
from 1/2 to 1/3. Beauty’s learning it is Monday makes her again assign probability 1/2 to heads. She learns what she lost, namely, information that a certain day is Monday.

An agent can learn by gaining a way of understanding a proposition because a probability attaches to a proposition given a way of understanding it. Beauty loses and later gains a way of understanding that a certain day is Monday. The way of understanding the proposition is the way on Sunday that the sentence, “Tomorrow is Monday,” brings and that on Monday the sentence, “Today is Monday,” brings. Generalized probability updating according to conditional probabilities folds in gaining a way of understanding a proposition and conditionalizing on the understanding gained.
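The probabilities that the text cites can be checked by brute counting. The simulation below is my illustration of the experimental protocol, not an argument for either side of the debate; frequencies among awakenings simply reproduce the figures above.

```python
import random

def simulate(trials=100_000, seed=0):
    """Count awakenings in the Sleeping Beauty protocol.

    Heads: one awakening (Monday).  Tails: two awakenings (Monday and
    Tuesday), indistinguishable to Beauty.  The two printed frequencies
    match the 1/3 and 1/2 assignments discussed in the text.
    """
    random.seed(seed)
    awakenings = heads_awakenings = 0
    monday_awakenings = heads_mondays = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        days = ["Mon"] if heads else ["Mon", "Tue"]
        for day in days:
            awakenings += 1
            heads_awakenings += heads
            if day == "Mon":
                monday_awakenings += 1
                heads_mondays += heads
    print("P(heads | awake)       ~", heads_awakenings / awakenings)      # ~ 1/3
    print("P(heads | told Monday) ~", heads_mondays / monday_awakenings)  # ~ 1/2

simulate()
```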
10.3. Instability

Chapter 5’s model assumes that the options in a standard, finite decision problem have stable utilities in the sense that an option’s utility is the same given any option’s realization. Relaxing this assumption requires addressing decision problems in which no option stably maximizes utility, that is, none maximizes utility given its realization; every option carries information according to which it does not maximize utility. This section generalizes the principle of utility maximization to handle such decision problems.6

6 A finite decision problem has a finite number of salient options. If the decision problem arises in a game, these options are called pure strategies. A probability mixture of pure strategies is a mixed strategy. An agent may use a random process to guide her adoption of a pure strategy, or to select a pure strategy for her. In the first case the resolution of the decision problem is still a pure strategy. In the second case, the agent’s binding herself to the random process’s result forms a new option. In neither case is the mixed strategy that the random process implements an option of the original decision problem. These points about options carry over to decision problems that do not arise in games.
10.3.1. Cases of Instability

Egan’s (2007) case of the button that if pressed kills all psychopaths presents a finite decision problem without an option that stably maximizes utility. In this problem, an agent is deciding whether to press the button. He would like to eliminate psychopaths, believes that he is not a psychopath, and believes that pressing the button will not cause him to become a psychopath. However, he also believes that only a psychopath would press the button.
Hence, pressing the button provides evidence that he is a psychopath and will die because of this act. Because, above all, he wants to live, it seems rational for him not to press the button. Given that the agent does not press the button, pressing the button maximizes utility. Given that the agent presses the button, not pressing the button maximizes utility. No option maximizes utility given its realization; none stably maximizes utility. What general decision principle governs such cases?

In response to Egan’s case, Joyce (2012) proposes that in a finite decision problem without an option that stably maximizes utility, rationality requires, not an option that maximizes utility, but only that the agent, using full information, reach a deliberational equilibrium among contending options in which each contender has the same expected utility as any other contender, and each contender has an expected utility at least as great as any option not contending. Any option proceeding from the deliberational equilibrium is rational, Joyce claims.

Joyce makes a choice’s rationality depend on the outcome of a choice procedure, a way of deliberating that reaches a deliberational equilibrium. He assumes idealizations that make it rational for an agent to search for a deliberational equilibrium. However, reaching a deliberational equilibrium is insufficient for rationality in some cases. It does not eliminate pressing the button in Egan’s case and, in general, does not favor, among the options in a deliberational equilibrium, one that is better, if realized, than are the others, if realized.7

7 Richter (1984, 1986) discusses cases of this sort.

Game theory provides decision problems in which no option maximizes utility given its realization, and also decision problems in which each of multiple options is such that it maximizes utility given its realization. In such problems, reaching a deliberational equilibrium depends on the initial state of deliberation and the procedure or dynamics of deliberation. In the game Matching Pennies, which Table 10.1 depicts, two players display one side of a penny. If they display the same side, the first player, whose choices the table’s rows represent, wins the pennies. If they display different sides, the second player, whose choices the table’s columns represent, wins the pennies. A cell in the table represents the outcome of a combination of choices, one for each player, and lists first the utility of the outcome for the first player, and second the utility of the outcome for the second player. Given each player’s power to predict the other player’s choice of a strategy, no
deliberational equilibrium involves a pure strategy. However, a deliberational equilibrium in mixed strategies exists for a player, namely, adopting each pure strategy with a probability of 0.5. Suppose that a player’s procedure for deliberation moves from one pure strategy to another, following incentives to switch from heads to tails and from tails to heads. Then the player’s deliberations pass by the equilibrium endlessly without ever reaching it. These dynamics of deliberation prevent reaching an equilibrium.

Table 10.1 Matching Pennies

            Heads    Tails
  Heads     2, 0     0, 2
  Tails     0, 2     2, 0

The two-person game Hi-Lo, which Table 10.2 depicts using the same conventions as Table 10.1, has two deliberational equilibria for each player, one involving High and the other involving Low, assuming that each player is certain the other anticipates his strategy and responds optimally.

Table 10.2 Hi-Lo

            High     Low
  High      2, 2     0, 0
  Low       0, 0     1, 1

If a player begins deliberation by entertaining Low and thinks the other player anticipates this choice and selects Low, then he has no reason to entertain another strategy. He immediately reaches a deliberational equilibrium in which he selects Low with a probability of 1. If he begins deliberation by entertaining High and thinks the other player anticipates this choice and selects High, he has no reason to entertain another strategy. He immediately reaches a deliberational equilibrium in which he selects High with a probability of 1. The initial state of deliberation settles the deliberational equilibrium a player reaches, and hence the strategy a player selects.

Making a choice’s rationality depend on its issuing from a deliberational equilibrium leaves unresolved decision problems without a deliberational equilibrium and leaves ungrounded the initial deliberational state and the deliberational dynamics that together settle whether an agent reaches
a deliberational equilibrium and which one he reaches if several exist. Without a justification of a particular initial deliberational state and type of deliberational dynamics, using an option’s issuing from a deliberational equilibrium as a standard of rationality lacks a full justification.8

8 Armendt (2019) holds that causal decision theory correctly evaluates an agent’s choice using the agent’s information at the time of choice. Because of the costs of deliberation, a rational agent’s information at the time of choice may not include information that further deliberation can provide. My points about decision instability assume cognitively ideal agents for whom deliberation has no cost.
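The endless cycling described above is easy to exhibit. The sketch below implements one simple deliberational dynamics for the first player in Matching Pennies, chosen for illustration; nothing in it is the book's official machinery.

```python
# Deliberational dynamics for Matching Pennies (Table 10.1), first player.
# The update rule - jump to the pure strategy with the higher payoff against
# the predicted opponent - is one simple dynamics, chosen for illustration.

PAYOFF = {("H", "H"): 2, ("H", "T"): 0, ("T", "H"): 0, ("T", "T"): 2}

def best_reply_cycle(start="H", steps=6):
    """The opponent predicts the player's strategy and plays to mismatch it;
    the player then has an incentive to switch.  Deliberation cycles
    H -> T -> H -> ... and passes by the mixed equilibrium (0.5, 0.5)
    endlessly without ever reaching it."""
    strategy = start
    history = [strategy]
    for _ in range(steps):
        opponent = "T" if strategy == "H" else "H"   # mismatches the player
        # Switch whenever the other pure strategy pays more against the prediction.
        strategy = max("HT", key=lambda s: PAYOFF[(s, opponent)])
        history.append(strategy)
    return history

print(best_reply_cycle())   # ['H', 'T', 'H', 'T', 'H', 'T', 'H']
```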
10.3.2. Self-Support

Causal decision theory, which Chapter 4 adopts, may address Egan’s case by distinguishing evaluation of an option and identification of choice-worthy options. The theory calculates an option’s expected utility using for a state, not the standard conditional probability of the state given the option, but a type of probability for an option–state pair that attends to causal relations between the option and the state, such as the probability of the subjunctive conditional that if the option were realized, then the state would obtain, as in Section 10.2. Without changing its calculation of an option’s expected utility, causal decision theory may adjust its identification of choice-worthy options in decision problems having options without stable utilities. It may claim that a rational option need not maximize utility unconditionally but may instead have a modified utility-relation to other options.

An option that maximizes utility given its realization is said to be ratifiable. Self-support generalizes ratifiability for decision problems without a ratifiable option or with nonratifiable options more choice-worthy than ratifiable options. An option is self-supporting if assuming its realization does not provide a sufficient reason to realize another option. In finite decision problems, an incentive to switch from one option to another is a sufficient reason to realize another option if the incentive does not begin a sequence of incentives to switch options that leads back to the original option. At least one self-supporting option exists in every finite decision problem, as Weirich (1998: Chap. 4) shows.9

9 For simplicity, this section treats only finite decision problems, but Weirich (1998: Chap. 4) extends the standard of self-support to decision problems with an infinite number of options.

Self-support and emergence from a deliberational equilibrium are closely related phenomena in finite decision problems. Decisive rejection of an option, and so the option’s lack of self-support, entails that the option is not
part of a deliberational equilibrium. Moreover, a self-supporting option may emerge from a deliberational equilibrium, if deliberation entertains mixed strategies. If a path of incentives away from an option returns to the option, so that the option is self-supporting, then the option may be part of a mixed strategy that forms a deliberational equilibrium.

To cover finite decision problems that may have an option without a stable utility, causal decision theory requires, not an option that maximizes utility, but a self-supporting option. A rational choice, it finds, is self-supporting but need not maximize utility. The principle of self-support requires that an agent’s choice take account of all her relevant information, including her information about the information she will have given an option’s realization, but explains an option’s rationality with calculations of options’ utilities that attend to an option’s consequences rather than the evidence the option brings.

A fully rational ideal agent prepares for a decision problem and also predicts her decision before making it. Her rational, and so self-supporting, preparations and her predictive power explain her choice. Her preparations aim for making her choice in the decision problem as beneficial as possible. They yield a self-supporting option that given its realization has a utility at least as great as the utility of any other self-supporting option given its realization, as Weirich (1998: Chap. 4) argues.

In Egan’s case, both pressing the button and not pressing it are self-supporting. Although each provides evidence that the other option would be better, neither provides a sufficient reason to adopt the other option. However, the self-conditional utility of not pressing is higher than is the self-conditional utility of pressing. So, a fully rational agent, who rationally prepares for the decision problem, does not press the button.

In the game Matching Pennies that Table 10.1 presents, assuming that each player knows that her opponent can predict her choice, no pure strategy is ratifiable. If mixed strategies are not available, no available strategy maximizes utility given the assumption of its realization. Yet each strategy is self-supporting and so is rational. Furthermore, each strategy maximizes self-conditional utility. Hence a fully rational ideal agent, who rationally prepares for the game, may show heads and also may show tails.

In the game Hi-Lo that Table 10.2 presents, assuming that each player knows that the other anticipates her choice, both High and Low are self-supporting. A player’s selecting High is not only self-supporting but also maximizes self-conditional utility among self-supporting options. A fully
rational ideal agent, who prepares for this game, chooses High. Examining Hi-Lo clarifies causal decision theory’s explanation of the rationality of a choice. In Hi-Lo, an ideal agent, given her opponent’s knowledge of her, initiates coordination, as Weirich (2007b) explains, by taking steps that make High maximize utility. In cases such as Hi-Lo, in which an option’s being self-supporting is just its maximizing utility given as an assumption its realization, an ideal agent who foresees her choice maximizes utility by adopting the self-supporting option she predicts after her preparations.

In an agent’s decision problem, an option’s rationality depends on the agent’s information at the time of the decision problem. A fully rational ideal agent makes a choice that is self-supporting in light of all her information, including information acquired during preparation for the decision problem. Because of this acquired information, in particular, foreknowledge of her choice, for a fully rational ideal agent in Hi-Lo, a choice that is self-supporting also maximizes utility. It maximizes utility given its realization, and its realization is foreseen. Because an ideal agent foresees her choice, in her decision problem she has all the relevant evidence her choice provides before she chooses. Because High maximizes utility given High, and she chooses High, High maximizes utility.

An option not realized may be rational because self-supporting. An option’s being rational is not the same as its being an option that a fully rational agent adopts because multiple options may be rational. In Hi-Lo a fully rational ideal agent prepares so that she adopts the self-supporting option High. However, Low is still a self-supporting option and is rational, although not adopted, and consequently not utility maximizing. The option High in the game Hi-Lo is rational but not uniquely rational, although unique in being the result of a fully rational ideal agent’s preparations.

Maximizing self-conditional utility among self-supporting options is an evidential reason rather than a causal reason for a choice, and causal decision theory uses it to identify a rational choice but not to explain the choice’s rationality. In Hi-Lo, a rational ideal agent prepares for her decision problem. The reasons for preparing to adopt High in Hi-Lo are causal; the preparations enhance prospects. Preparations that make High maximizing have better consequences than have preparations that make Low maximizing. Self-supporting preparations for a decision problem and self-supporting choice in the decision problem yield as her choice an option that maximizes self-conditional utility among self-supporting options. Maximization of self-conditional utility among self-supporting options is just a feature of the option that emerges from rational preparation for the
game and then rational choice in the game, in this case, utility-maximizing preparations and utility-maximizing choice. By preparing for her decision and by adopting a self-supporting option, a fully rational ideal agent incidentally maximizes utility and incidentally maximizes self-conditional utility among self-supporting options. However, the reason for her choice’s rationality is its being self-supporting.
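The test for self-support in a finite decision problem can be sketched as cycle-chasing in a graph of incentives. The encoding below is mine, meant only to illustrate the definition; Weirich (1998: Chap. 4) gives the official treatment, and the two toy cases are hypothetical renderings of examples above.

```python
def reachable(graph, start):
    """All options reachable from `start` by following incentives to switch."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def self_supporting(graph, option):
    """An option lacks self-support only if some incentive to switch away
    from it begins a path that never leads back.  Here: every option that
    incentives reach from `option` must itself reach back to `option`."""
    return all(other == option or option in reachable(graph, other)
               for other in reachable(graph, option))

# Egan's button case: given pressing, not pressing is better, and given not
# pressing, pressing is better - incentives run in a two-cycle, so both
# options are self-supporting (neither incentive is a sufficient reason).
egan = {"press": {"don't press"}, "don't press": {"press"}}
print(self_supporting(egan, "press"), self_supporting(egan, "don't press"))  # True True

# A dominated option: the incentive leads away to a ratifiable option and stops.
dominated = {"worse": {"better"}, "better": set()}
print(self_supporting(dominated, "worse"))   # False - decisively rejected
print(self_supporting(dominated, "better"))  # True  - ratifiable, no incentive away
```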
10.4. Summary

A research program for refinement of a normative model rolls back idealizations of the model to gain realism. Realism requires rolling back all idealizations, but steps toward realism may remove some idealizations while retaining others. After rescinding some idealizations, a model may advance normative principles that are either precise or approximate.

This chapter rolls back some, but not all, idealizations of Chapter 5’s model, and advances precise standards of rationality for decisions. It takes three steps toward realism. The first step relaxes the idealization of cognitively ideal agents and then assumes points about belief to construct a model that shows how agents may rationally use belief-desire deliberations to reach a decision. The second step relaxes the idealization that an agent in a decision problem makes probability and utility assignments to propositions that the agent fully understands. Within the revised model, a rational choice maximizes utility relative to the agent’s understanding of the options in the decision problem. The third step drops the idealization that the options in a decision problem have stable utilities and then generalizes the principle of utility maximization to obtain the principle of self-support for finite decision problems.10
10 I note three more steps toward realism that other theorists take. (1) In sequences, a nonideal agent may not be able to predict her future choices. Evaluation of choices may then use probabilities of future choices, as in Peterson and Vallentyne (2018). (2) Chapter 5’s model assumes that an agent in a decision problem is certain of her options, namely, acts over which she has direct control. An agent may face a nonstandard decision problem with uncertainty about options. Gary Muller (personal communication, 2017) observes that in some decision problems an agent is not certain she can realize each of her options; in fact, no exhaustive set of options exists such that she is certain of each that she can realize it. For instance, Dr. Black may possibly control an agent’s mind to prevent her from making a decision. The agent is not certain that she can make a decision, such as a decision to cross a stream in her path, even if she can. A general decision principle, yet to be formulated, covers such nonstandard decision problems. (3) Chapter 5’s model assumes that an agent has temporally constant basic goals. An agent with changing basic goals faces a nonstandard decision problem. Rationality’s requirements for the agent’s utility assignment to an option that affects attainment of future goals not currently held must settle how much weight the agent should now give to attainment of those future goals. Pettigrew (2019) proposes principles of rational choice for such nonstandard decision problems.
Conclusion

The foregoing chapters present a philosophical account of risk. They distinguish normatively significant types of risk, characterize each type of risk, and formulate rationality’s principles for responses to the various types of risk. The principles govern attitudes to risks as well as acts that affect risks.

Rational responses to risks vary with the type of risk. An agent may rationally change the environment to reduce a physical risk and may rationally gather information that reduces an evidential risk arising from ignorance. Rationality requires an intrinsic aversion to a chance of a bad event, with an intensity settled by the product of the event’s probability and utility. Although it also requires an intrinsic aversion to an exposure to chance, it is permissive about the intensity of the aversion. Rational agents may differ in the intensities of their intrinsic aversions to exposure to chance. Although rationality requires an intrinsic aversion to any risk, it allows an extrinsic attraction to a risk as a means of reaching a goal.

An act’s expected utility for an agent evaluates the act considering possible good outcomes along with possible bad outcomes. The act’s possible outcomes include all the agent cares about, including the act’s exposure to chance. The act’s expected utility combines evaluations of the risks and prospects that the act generates to obtain an overall evaluation of the act, the act’s utility. The intrinsic utility of a risk or prospect, characterized as a probability-utility product for a possible outcome, equals the product given coordination of scales for utility and intrinsic utility. Given that the risks and prospects are exclusive and exhaustive, their intrinsic utilities make independent contributions to an act’s utility, and their sum equals the act’s utility.

Mean-risk evaluation of an act divides the act’s utility into (1) the act’s utility ignoring the act’s risk and (2) the intrinsic utility of the act’s risk in the sense of its exposure to chance. The method of evaluation rests on a distinction between types of utility with different scopes of evaluation. Intrinsic utility has narrow evaluative scope, and ordinary, comprehensive utility has wide evaluative scope. To simplify, the mean-risk method may target an act’s consequences rather than its outcome, and not evaluate events in the act’s
past. For an agent with a basic, underived intrinsic aversion to the act’s risk, the method takes the act’s risk as a separable consequence of the act. The intrinsic utility of the act’s risk makes an independent contribution to the act’s utility. Hence, adding to it the act’s utility ignoring the act’s risk yields the act’s utility.

An ideal agent in a standard decision problem resolves the problem rationally by reaching a decision that maximizes utility among options. Doing this amounts to the agent’s adopting an option at the top of her preference-ranking of options. Granting that the agent’s preference-ranking is rational, she has every reason to decide according to her preferences. A sequence of choices is rational if each choice in the sequence is rational. Rationality’s intertemporal requirements derive from rationality’s requirements on each choice at the time of the choice.

Rational management of risks often aims for reductions in risks because of intrinsic aversions to them, but may lead to taking on risks for the sake of associated benefits. For example, a rational agent may make a bet to hedge other bets and so reduce overall exposure to chance. A combination of bets may be less risky than each bet taken by itself. Unlike the chances of bad events that a single act creates, the exposures to chance of multiple acts are not additive. Their nonadditivity creates opportunities for hedging.

The principles governing rational responses to risks justify methods of advising others and, with their authorization, deciding for them. They guide the trustee decisions that professionals make to manage the risks that their clients face and guide the regulations that government agencies impose to control the risks that the public faces. Although the principles for responding rationally to risks rely on many simplifying assumptions, a well-grounded research program exists for relaxing these assumptions and generalizing decision principles to make them more realistic. General methods of model refinement guide steps toward realism.

Constructing a philosophical account of risk unifies diverse views concerning risk in the interdisciplinary literature on risk and justifies general principles of risk analysis and risk management. In particular, the account justifies the expected-utility principle and, under simplifying assumptions about an investor’s goals, justifies financial management’s evaluation of investments according to expected return and risk.

The account’s systematic approach to risk deepens understanding of risk and rationality’s requirements for responses to risks. It explains rationality’s
The principles governing rational responses to risks justify methods of advising others and, with their authorization, deciding for them. They guide the trustee decisions that professionals make to manage the risks that their clients face and guide the regulations that government agencies impose to control the risks that the public faces.

Although the principles for responding rationally to risks rely on many simplifying assumptions, a well-grounded research program exists for relaxing these assumptions and generalizing decision principles to make them more realistic. General methods of model refinement guide steps toward realism.

Constructing a philosophical account of risk unifies diverse views concerning risk in the interdisciplinary literature on risk and justifies general principles of risk analysis and risk management. In particular, the account justifies the expected-utility principle and, under simplifying assumptions about an investor’s goals, justifies financial management’s evaluation of investments according to expected return and risk.

The account’s systematic approach to risk deepens understanding of risk and of rationality’s requirements for responses to risks. It explains rationality’s requirements for attitudes to risk and then uses these requirements to explain rationality’s principles for responses to risk. It explains how requirements of coherence arise from more fundamental principles of rationality. For example, it obtains the requirement that preferences among options be coherent from the requirement that preferences among options agree with the options’ evaluations. Furthermore, it explains the case for substantive requirements, going beyond coherence, that govern evaluation of a single option using evaluations of the risks and prospects that the option generates. Its explanations deepen an understanding of rational responses to risks.
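A minimal illustration of this derivation, with notation introduced only for the purpose: suppose that preferences among options agree with the options’ evaluations, so that an agent prefers $x$ to $y$ just in case $U(x) > U(y)$. Then transitivity, a core coherence requirement, is inherited from the ordering of the real numbers, since
\[
U(x) > U(y) \ \text{and}\ U(y) > U(z) \ \text{imply}\ U(x) > U(z),
\]
and so preferences that agree with utilities cannot form a cycle. The coherence requirement is thereby explained rather than taken as primitive.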
References

Adler, M. 2003. “Risk, Death, and Harm: The Normative Foundations of Risk Regulation.” Minnesota Law Review 87: 1293–445.
Ahmed, A. 2014. Evidence, Decision and Causality. Cambridge: Cambridge University Press.
Allais, M. 1953. “Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école américaine.” Econometrica 21: 503–46.
Allingham, M. 2002. Choice Theory: A Very Short Introduction. Oxford: Oxford University Press.
Al-Najjar, N., and J. Weinstein. 2009. “The Ambiguity Aversion Literature: A Critical Assessment.” Economics and Philosophy 25: 249–84.
Armendt, B. 2014. “On Risk and Rationality.” Erkenntnis 79: 1119–27.
Armendt, B. 2019. “Causal Decision Theory and Decision Instability.” Journal of Philosophy 116: 263–77.
Arntzenius, F., A. Elga, and J. Hawthorne. 2004. “Bayesianism, Infinite Decisions, and Binding.” Mind 113: 251–83.
Arrow, K. 1965. Aspects of the Theory of Risk-Bearing. Helsinki: Yrjö Jahnssonin Säätiö.
Arrow, K. 1970. Essays in the Theory of Risk-Bearing. Amsterdam: North-Holland.
Augustin, T., F. Coolen, G. Cooman, and M. Troffaes, eds. 2014. Introduction to Imprecise Probabilities. New York: Wiley.
Aumann, R., and R. Serrano. 2008. “An Economic Index of Riskiness.” Journal of Political Economy 116: 810–36.
Aven, T., O. Renn, and E. A. Rosa. 2011. “On the Ontological Status of the Concept of Risk.” Safety Science 49: 1074–79.
Babic, B. 2019. “A Theory of Epistemic Risk.” Philosophy of Science, preprint. doi: 10.1086/703552.
Barberis, N. 2013. “Thirty Years of Prospect Theory in Economics: A Review and Assessment.” Journal of Economic Perspectives 27: 173–95.
Baron, J. 2008. Thinking and Deciding. Fourth edition. Cambridge: Cambridge University Press.
Bell, D., and H. Raiffa. 1988. “Marginal Value and Intrinsic Risk Aversion.” In D. Bell, H. Raiffa, and A. Tversky, eds., Decision Making, pp. 384–97. Cambridge: Cambridge University Press.
Bénabou, R., and J. Tirole. 2003. “Intrinsic and Extrinsic Motivation.” Review of Economic Studies 70: 489–520.
Bermúdez, J. L. 2018. “Frames, Rationality, and Self-Control.” In J. L. Bermúdez, ed., Self-Control, Decision Theory, and Rationality, pp. 179–203. Cambridge: Cambridge University Press.
Bernoulli, D. [1738] 1954. “Exposition of a New Theory on the Measurement of Risk.” Econometrica 22: 23–36.
Binmore, K. 2007. Playing for Real: A Text on Game Theory. Oxford: Oxford University Press.
Binmore, K. 2009. Rational Decisions. Princeton: Princeton University Press.
Bradley, D. 2011. “Self-Location Is No Problem for Conditionalization.” Synthese 182: 393–411.
Bradley, R. 2017. Decision Theory with a Human Face. Cambridge: Cambridge University Press.
Bradley, R., and H. O. Stefánsson. 2017. “Counterfactual Desirability.” British Journal for the Philosophy of Science 68: 485–533.
Brigham, E., and J. Houston. 2009. Fundamentals of Financial Management. Twelfth edition. Mason, OH: South-Western Cengage Learning.
Broome, J. 1991. Weighing Goods. Oxford: Blackwell.
Broome, J. 2013. Rationality through Reasoning. Hoboken, NJ: Wiley.
Brown, M. J. 2013. “Values in Science beyond Underdetermination and Inductive Risk.” Philosophy of Science 80: 829–39.
Buchak, L. 2012. “Can It Be Rational to Have Faith?” In J. Chandler and V. Harrison, eds., Probability in the Philosophy of Religion, pp. 225–48. Oxford: Oxford University Press.
Buchak, L. 2013. Risk and Rationality. Oxford: Oxford University Press.
Buchak, L. 2014. “Risks and Tradeoffs.” Erkenntnis 79: 1091–117.
Buchak, L. 2017. “Precis of Risk and Rationality.” Philosophical Studies 174: 2363–68.
Chalmers, D. 2011. “The Nature of Epistemic Space.” In A. Egan and B. Weatherson, eds., Epistemic Modality, pp. 60–107. Oxford: Oxford University Press.
Chang, H. 2004. Inventing Temperature: Measurement and Scientific Progress. New York: Oxford University Press.
Chateauneuf, A., and M. Cohen. 2013. “Cardinal Extensions of the EU Model Based on the Choquet Integral.” In D. Bouyssou, D. Dubois, M. Pirlot, and H. Prade, eds., Decision-Making Process: Concepts and Methods, Chap. 10. Kindle edition. Hoboken, NJ: Wiley. doi: 10.1002/9780470611876.ch10
Chen, Z., and Y. Wang. 2008. “Two-Sided Coherent Risk Measures and Their Application in Realistic Portfolio Optimization.” Journal of Banking and Finance 32: 2667–73.
Chipman, J. 2008. “Compensation Principle.” In S. Durlauf and L. Blume, eds., The New Palgrave Dictionary of Economics, pp. 934–44. Second edition. London: Palgrave Macmillan.
Coleman, J. 1992. Risks and Wrongs. Cambridge: Cambridge University Press.
Cox, L. 2012. Improving Risk Analysis. New York: Springer.
Cozic, M., and B. Hill. 2015. “Representation Theorems and the Semantics of Decision-Theoretic Concepts.” Journal of Economic Methodology 22: 292–311.
Cranor, C. 1993. Regulating Toxic Substances. New York: Oxford University Press.
Cranor, C. 2006. Toxins and Torts. Cambridge: Cambridge University Press.
Cranor, C. 2007. “Toward a Non-Consequentialist Approach to Acceptable Risks.” In T. Lewens, ed., Risk: Philosophical Perspectives, pp. 36–53. Abingdon, UK: Routledge.
Cubitt, R., D. Navarro-Martinez, and C. Starmer. 2015. “On Preference Imprecision.” Journal of Risk and Uncertainty 50: 1–34.
Davis-Stober, C., and N. Brown. 2013. “Evaluating Decision Maker ‘Type’ under p-additive Utility Representations.” Journal of Mathematical Psychology 57: 320–28.
Dawid, P. 2017. “On Individual Risk.” Synthese 194: 3445–74.
Delbaen, F. 2002. “Coherent Risk Measures on General Probability Spaces.” In K. Sandmann and P. Schönbucher, eds., Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann, pp. 1–37. Berlin: Springer.
de Melo-Martín, I., and K. Intemann. 2016. “The Risk of Using Inductive Risk to Challenge the Value-Free Ideal.” Philosophy of Science 83: 500–20.
Diaconis, P., and S. Zabell. 1982. “Updating Subjective Probability.” Journal of the American Statistical Association 77: 822–30.
Dietrich, F. 2019. “A Theory of Bayesian Groups.” Noûs 53: 708–36. doi: 10.1111/nous.12233.
Dietrich, F., and C. List. 2013. “A Reason-Based Theory of Rational Choice.” Noûs 47: 104–34.
Dietrich, F., and C. List. 2016a. “Reason-Based Choice and Context-Dependence: An Explanatory Framework.” Economics and Philosophy 32: 175–229.
Dietrich, F., and C. List. 2016b. “Mentalism versus Behaviorism in Economics: A Philosophy of Science Perspective.” Economics and Philosophy 32: 249–81.
Dietrich, F., and C. List. 2017. “What Matters and How It Matters: A Choice-Theoretic Representation of Moral Theories.” Philosophical Review 126: 421–79.
Douglas, H. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67: 559–79.
Douglas, H. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Dreier, J. 1996. “Rational Preference: Decision Theory as a Theory of Practical Rationality.” Theory and Decision 40: 249–76.
Egan, A. 2007. “Some Counterexamples to Causal Decision Theory.” Philosophical Review 116: 93–114.
Elga, A. 2000. “Self-Locating Belief and the Sleeping Beauty Problem.” Analysis 60: 143–47.
Elga, A. 2010. “Subjective Probabilities Should Be Sharp.” Philosophers’ Imprint 10: 1–11.
Elliott, K. C., and D. J. McKaughan. 2014. “Nonepistemic Values and the Multiple Goals of Science.” Philosophy of Science 81: 1–21.
Epstein, L. 1999. “A Definition of Uncertainty Aversion.” Review of Economic Studies 66: 579–608.
Epstein, L. 2008. “Living with Risk.” Review of Economic Studies 75: 1121–41.
Finkelstein, M., and B. Levin. 2008. “Epidemiologic Evidence in the Silicone Breast Implant Cases.” In J. Kadane, ed., Statistics in the Law, pp. 43–49. New York: Oxford University Press.
Fischhoff, B., and J. Kadvany. 2011. Risk: A Very Short Introduction. Oxford: Oxford University Press.
Fischhoff, B., S. Lichtenstein, P. Slovic, S. Derby, and R. Keeney. 1981. Acceptable Risk. Cambridge: Cambridge University Press.
Fleurbaey, M. 2010. “Assessing Risky Social Situations.” Journal of Political Economy 118: 649–80.
Foster, D., and S. Hart. 2009. “An Operational Measure of Riskiness.” Journal of Political Economy 117: 785–814.
Friedman, D., R. M. Isaac, D. James, and S. Sunder. 2014. Risky Curves: On the Empirical Failure of Expected Utility. London: Routledge.
Gaifman, H., and Y. Liu. 2018. “A Simpler and More Realistic Subjective Decision Theory.” Synthese 195: 4205–41.
Ghirardato, P., and M. Siniscalchi. 2018. “Risk Sharing in the Small and in the Large.” Journal of Economic Theory 175: 730–65.
Gibbard, A., and W. Harper. [1978] 1981. “Counterfactuals and Two Kinds of Expected Utility.” In W. Harper, R. Stalnaker, and G. Pearce, eds., Ifs, pp. 153–90. Dordrecht: Reidel.
Gilboa, I. 2009. Theory of Decision under Uncertainty. Cambridge: Cambridge University Press.
Gilboa, I. 2010. Rational Choice. Cambridge, MA: MIT Press.
Gilboa, I., A. Postlewaite, L. Samuelson, and D. Schmeidler. 2017. “What Are Axiomatizations Good For?” Manuscript.
Gilboa, I., S. Minardi, L. Samuelson, and D. Schmeidler. 2018. “States and Eventualities: How to Understand Savage without Being Hanged.” Manuscript.
Good, I. J. 1952. “Rational Decisions.” Journal of the Royal Statistical Society, Series B, 14: 107–14.
Good, I. J. 1967. “On the Principle of Total Evidence.” British Journal for the Philosophy of Science 17: 319–21.
Gregory, R., N. Dieckmann, E. Peters, L. Failing, G. Long, and M. Tusler. 2012. “Deliberative Disjunction: Expert and Public Understanding of Outcome Uncertainty.” Risk Analysis 32: 2071–83.
Hacking, I. 2001. Probability and Inductive Logic. Cambridge: Cambridge University Press.
Hahn, R., ed. 1996. Risks, Costs, and Lives Saved: Getting Better Results from Regulation. New York: Oxford University Press.
Halevy, Y. 2007. “Ellsberg Revisited: An Experimental Study.” Econometrica 75: 503–36.
Hampton, J. 1994. “The Failure of Expected-Utility Theory as a Theory of Reason.” Economics and Philosophy 10: 195–242.
Hansson, B. 1988. “Risk Aversion as a Problem of Conjoint Measurement.” In P. Gardenfors and N.-E. Sahlin, eds., Decision, Probability, and Utility: Selected Readings, pp. 136–58. Cambridge: Cambridge University Press.
Hansson, S. O. 2007. “Risk and Ethics: Three Approaches.” In T. Lewens, ed., Risk: Philosophical Perspectives, pp. 21–35. Abingdon, UK: Routledge.
Hansson, S. O. 2009. “Measuring Uncertainty.” Studia Logica 93: 21–40.
Hansson, S. O. 2014. “Risk.” In E. N. Zalta, ed., Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/spr2014/entries/risk/.
Hart, C., and M. Titelbaum. 2015. “Intuitive Dilation?” Thought: A Journal of Philosophy 4: 252–62.
Hausman, D. 2012. Preference, Value, Choice, and Welfare. Cambridge: Cambridge University Press.
Heilman, C. 2015. “A New Interpretation of the Representational Theory of Measurement.” Philosophy of Science 82: 787–97.
Hicks, D. 2018. “Inductive Risk and Regulatory Toxicology: A Comment on de Melo-Martín and Intemann.” Philosophy of Science 85: 164–74.
Holt, C. A., and S. K. Laury. 2002. “Risk Aversion and Incentive Effects.” American Economic Review 92: 1644–55.
Holton, G. 2004. “Defining Risk.” Financial Analysts Journal 60: 19–25.
Icard, T. 2018. “Bayes, Bounds, and Rational Analysis.” Philosophy of Science 85: 79–101.
Jasanoff, S. 2005. Designs on Nature: Science and Democracy in Europe and the United States. Princeton: Princeton University Press.
Jeffrey, R. [1965] 1990. The Logic of Decision. Second edition, paperback. Chicago: University of Chicago Press.
Jorion, P. 2006. Value at Risk: The New Benchmark for Managing Financial Risk. Third edition. New York: McGraw-Hill.
Joyce, J. 1999. The Foundations of Causal Decision Theory. Cambridge: Cambridge University Press.
Joyce, J. 2010. “A Defense of Imprecise Credences in Inference and Decision Making.” Philosophical Perspectives 24: 281–323.
Joyce, J. 2012. “Regret and Instability in Causal Decision Theory.” Synthese 187: 123–45.
Kadane, J., ed. 2008. Statistics in the Law. Oxford: Oxford University Press.
Kadane, J., and G. Woodworth. 2008. “Hierarchical Models for Employment Decisions.” In J. Kadane, ed., Statistics in the Law, pp. 110–33. New York: Oxford University Press.
Kahneman, D. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kahneman, D., and A. Tversky. 1979. “Prospect Theory.” Econometrica 47: 263–91.
Kaplan, S., and B. J. Garrick. 1981. “On the Quantitative Definition of Risk.” Risk Analysis 1: 11–27.
Kavka, G. 1983. “The Toxin Puzzle.” Analysis 43: 33–36.
Keeney, R., and H. Raiffa. 1993. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge: Cambridge University Press.
Kim, N. 2016. “A Dilemma for the Imprecise Bayesian.” Synthese 193: 1681–702.
Kment, B. 2017. “Varieties of Modality.” In E. N. Zalta, ed., Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/spr2017/entries/modality-varieties/.
Kohl, C., G. Frampton, J. Sweet, A. Spok, N. Haddaway, R. Wilhelm, S. Unger, and J. Schiemann. 2015. “Can Systematic Reviews Inform GMO Risk Assessment and Risk Management?” Frontiers in Bioengineering and Biotechnology 3, Article 113.
Krantz, D., R. Luce, P. Suppes, and A. Tversky. 1971. Foundations of Measurement, Vol. 1. New York: Academic Press.
Leitgeb, H. 2017. The Stability of Belief: How Rational Belief Coheres with Probability. Oxford: Oxford University Press.
Lewens, T., ed. 2007. Risk: Philosophical Perspectives. Abingdon, UK: Routledge.
Lewis, D. 1979. “Attitudes De Dicto and De Se.” Philosophical Review 88: 513–43.
Lewis, D. [1980] 1986. “A Subjectivist’s Guide to Objective Chance.” In Philosophical Papers, Vol. 2, pp. 83–132, with postscripts. Oxford: Oxford University Press.
List, C., and P. Pettit. 2011. Group Agency: The Possibility, Design and Status of Corporate Agents. Oxford: Oxford University Press.
Luce, R. D. 2010a. “Interpersonal Comparisons of Utility for 2 of 3 Types of People.” Theory and Decision 68: 5–24.
Luce, R. D. 2010b. “Behavioral Assumptions for a Class of Utility Theories: A Program of Experiments.” Journal of Risk and Uncertainty 41: 19–37.
Maccheroni, F., M. Marinacci, and D. Ruffino. 2013. “Alpha as Ambiguity: Robust Mean-Variance Portfolio Analysis.” Econometrica 81: 1075–113.
Machina, M. 1982. “‘Expected Utility’ Analysis without the Independence Axiom.” Econometrica 50: 277–323.
Machina, M. 2009. “Risk, Ambiguity, and the Rank-Dependence Axioms.” American Economic Review 99: 385–92.
Machina, M., and M. Rothschild. 2008. “Risk.” In S. Durlauf and L. Blume, eds., The New Palgrave Dictionary of Economics, pp. 5608–15. Second edition. London: Macmillan.
Markowitz, H. 1959. Portfolio Selection: Efficient Diversification of Investments. New York: Wiley.
Mayo, D., and R. Hollander, eds. 1991. Acceptable Evidence. New York: Oxford University Press.
McClennen, E. 1990. Rationality and Dynamic Choice: Foundational Explorations. Cambridge: Cambridge University Press.
Meacham, C., and J. Weisberg. 2011. “Representation Theorems and the Foundations of Decision Theory.” Australasian Journal of Philosophy 89: 641–63.
Michaeli, M. 2014. “Riskiness for Sets of Gambles.” Economic Theory 56: 515–47.
Mongin, P. 2018. “The Allais Paradox: What It Became, What It Really Was, What It Now Suggests to Us.” Manuscript.
Okasha, S. 2007. “Rational Choice, Risk Aversion, and Evolution.” Journal of Philosophy 104: 217–35.
Okasha, S. 2016. “On the Interpretation of Decision Theory.” Economics and Philosophy 32: 409–33.
Oliveira, L. 2016. “Rossian Totalism about Intrinsic Value.” Philosophical Studies 173: 2069–86.
Parfit, D. 1984. Reasons and Persons. Oxford: Oxford University Press.
Paul, L. A. 2014. Transformative Experience. Oxford: Oxford University Press.
Pearl, J. 2009. Causality: Models, Reasoning, and Inference. Second edition. Cambridge: Cambridge University Press.
Peterson, M. 2006. “The Precautionary Principle Is Incoherent.” Risk Analysis 26: 595–601.
Peterson, M. 2007. “On Multi-Attribute Risk Analysis.” In T. Lewens, ed., Risk: Philosophical Perspectives, pp. 68–83. Abingdon, UK: Routledge.
Peterson, M. 2012. Introduction to Decision Theory. Cambridge: Cambridge University Press.
Peterson, M., and P. Vallentyne. 2018. “Self-Prediction and Self-Control.” In J. L. Bermúdez, ed., Self-Control, Decision Theory, and Rationality, pp. 48–71. Cambridge: Cambridge University Press.
Pettigrew, R. 2016. “Risk, Rationality and Expected Utility Theory.” Canadian Journal of Philosophy 45: 798–826.
Pettigrew, R. 2019. Choosing for Changing Selves. Oxford: Oxford University Press.
Piccione, M., and A. Rubinstein. 1997. “On the Interpretation of Decision Problems with Imperfect Recall.” Games and Economic Behavior 20: 3–24.
Posner, E., and E. G. Weyl. 2012. “An FDA for Financial Innovation: Applying the Insurable Interest Doctrine to Twenty-First-Century Financial Markets.” John M. Olin Law & Economics School Working Paper No. 589. Chicago: University of Chicago.
Pratt, J. 1964. “Risk Aversion in the Small and in the Large.” Econometrica 32: 122–36.
Quiggin, J. 1982. “A Theory of Anticipated Utility.” Journal of Economic Behavior and Organization 3: 323–43.
Rabin, M. 2000. “Risk Aversion and Expected-Utility Theory: A Calibration Theorem.” Econometrica 68: 1281–92.
Rabin, M., and R. Thaler. 2001. “Anomalies: Risk Aversion.” Journal of Economic Perspectives 15: 219–32.
Raiffa, H. 1968. Decision Analysis: Introductory Lectures on Choices under Uncertainty. Reading, MA: Addison-Wesley.
Randall, A. 2011. Risk and Precaution. Cambridge: Cambridge University Press.
Rescher, N. 1983. Risk: A Philosophical Introduction to the Theory of Risk Evaluation and Management. Washington, DC: University Press of America.
Richter, R. 1984. “Rationality Revisited.” Australasian Journal of Philosophy 62: 392–403.
Richter, R. 1986. “Further Comments on Decision Instability.” Australasian Journal of Philosophy 64: 345–49.
Riedel, F., and T. Hellmann. 2013. “The Foster-Hart Measure of Riskiness for General Gambles.” Institute of Mathematical Economics, Bielefeld University, Working Paper 474.
Rothschild, M., and J. Stiglitz. 1970. “Increasing Risk: I. A Definition.” Journal of Economic Theory 2: 225–43.
Safra, Z., and U. Segal. 2008. “Calibration Results for Non-Expected Utility Theories.” Econometrica 76: 1143–66.
Sahlin, N., and J. Persson. 2004. “Epistemic Risk: The Significance of Knowing What One Does Not Know.” http://www.nilsericsahlin.se/risk/index.html
Sahlin, N., and P. Weirich. 2014. “Unsharp Sharpness.” Theoria 80: 100–103.
Salmón, N. 2019. “Impossible Odds.” Philosophy and Phenomenological Research 99: 644–62. doi: 10.1111/phpr.12517.
Samet, D., and D. Schmeidler. 2018. “Desirability Relations in Savage’s Model of Decision Making.” Presented at the D-TEA conference in Paris, 2018.
Samuelson, P. 1963. “Risk and Uncertainty: A Fallacy of Large Numbers.” Scientia 98: 108–13.
Savage, L. J. [1954] 1972. The Foundations of Statistics. Second edition. New York: Dover.
Scarantino, A. 2010. “Inductive Risk and Justice in Kidney Allocation.” Bioethics 24: 421–30.
Schmeidler, D. 1989. “Subjective Probability and Expected Utility without Additivity.” Econometrica 57: 571–87.
Sharpe, W. F. 1994. “The Sharpe Ratio.” Journal of Portfolio Management 21: 49–58.
Shrader-Frechette, K. 1991. Risk and Rationality: Philosophical Foundations for Populist Reforms. Berkeley: University of California Press.
Slovic, P., ed. 2016. The Perception of Risk. Abingdon, UK: Earthscan from Routledge.
Smith, A. E., and W. Gans. 2015. “Enhancing the Characterization of Epistemic Uncertainties in PM2.5 Risk Analyses.” Risk Analysis 35: 361–78.
Spirtes, P., C. Glymour, and R. Scheines. 2000. Causation, Prediction, and Search. Cambridge, MA: MIT Press.
Stalnaker, R. 1968. “A Theory of Conditionals.” In N. Rescher, ed., Studies in Logical Theory, pp. 98–112. Oxford: Oxford University Press.
Stalnaker, R. 2008. Our Knowledge of the Internal World. Oxford: Oxford University Press.
Steel, D. 2014. Philosophy and the Precautionary Principle: Science, Evidence, and Environmental Policy. Cambridge: Cambridge University Press.
Stefánsson, H. O., and R. Bradley. 2015. “How Valuable Are Chances?” Philosophy of Science 82: 602–25.
Stefánsson, H. O., and R. Bradley. 2019. “What Is Risk Aversion?” British Journal for the Philosophy of Science 70: 77–109.
Sunstein, C. 2002. Risk and Reason: Safety, Law, and the Environment. Cambridge: Cambridge University Press.
Sunstein, C. 2005. Laws of Fear: Beyond the Precautionary Principle. Cambridge: Cambridge University Press.
Sunstein, C. 2014. Why Nudge? The Politics of Libertarian Paternalism. New Haven, CT: Yale University Press.
Thaler, R., and C. Sunstein. 2009. Nudge: Improving Decisions about Health, Wealth, and Happiness. New York: Penguin.
Thoma, J., and J. Weisberg. 2017. “Risk Writ Large.” Philosophical Studies 174: 2369–84.
Tversky, A., and D. Kahneman. 1981. “The Framing of Decisions and the Psychology of Choice.” Science 211: 453–58.
Tversky, A., and D. Kahneman. 1992. “Advances in Prospect Theory: Cumulative Representation of Uncertainty.” Journal of Risk and Uncertainty 5: 297–323.
Viscusi, W. 1998. Rational Risk Policy. Oxford: Oxford University Press.
Wakker, P. 2010. Prospect Theory: For Risk and Ambiguity. Cambridge: Cambridge University Press.
Walley, P. 1991. Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.
Weirich, P. 1980. “Conditional Utility and Its Place in Decision Theory.” Journal of Philosophy 77: 702–15.
Weirich, P. 1984a. “The St. Petersburg Gamble and Risk.” Theory and Decision 17: 193–202.
Weirich, P. 1984b. “Interpersonal Utility in Principles of Social Choice.” Erkenntnis 21: 295–317.
Weirich, P. 1986. “Expected Utility and Risk.” British Journal for the Philosophy of Science 37: 419–42.
Weirich, P. 1987. “Mean-Risk Decision Analysis.” Theory and Decision 23: 89–111.
Weirich, P. 1988. “Trustee Decisions in Investment and Finance.” Journal of Business Ethics 7: 73–80.
Weirich, P. 1998. Equilibrium and Rationality: Game Theory Revised by Decision Rules. Cambridge: Cambridge University Press.
Weirich, P. 2001a. Decision Space: Multidimensional Utility Analysis. Cambridge: Cambridge University Press.
Weirich, P. 2001b. “Risk’s Place in Decision Rules.” Synthese 126: 427–41.
Weirich, P. 2004a. Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances. New York: Oxford University Press.
Weirich, P. 2004b. “Belief and Acceptance.” In I. Niiniluoto, M. Sintonen, and J. Wolenski, eds., Handbook of Epistemology, pp. 499–520. Dordrecht: Kluwer.
Weirich, P. 2005. “Risk Regulation.” Behavioral and Brain Sciences 28: 564–65.
Weirich, P. 2007a. “Using Food Labels to Regulate Risks.” In P. Weirich, ed., Labeling Genetically Modified Food, pp. 222–45. New York: Oxford University Press.
Weirich, P. 2007b. “Initiating Coordination.” Philosophy of Science 74: 790–801.
Weirich, P. 2010a. Collective Rationality: Equilibrium in Cooperative Games. New York: Oxford University Press.
Weirich, P. 2010b. “Probabilities in Decision Rules.” In E. Eells and J. Fetzer, eds., The Place of Probability in Science, pp. 289–319. Boston Studies in the Philosophy of Science, Volume 284. Dordrecht: Springer.
Weirich, P. 2010c. “Utility and Framing.” In P. Weirich, ed., Realistic Standards for Decisions, a special issue of the journal Synthese 176: 83–103.
Weirich, P. 2011. “The Explanatory Power of Models and Simulations: A Philosophical Exploration.” In P. Weirich, T. Gruene-Yanoff, S. Ruphy, and J. Simpson, eds., Philosophical and Epistemological Issues in Simulation and Gaming, a special issue of the journal Simulation and Gaming: An Interdisciplinary Journal 42: 149–70.
Weirich, P. 2012. “Multi-Attribute Approaches to Risk.” In Sabine Roeser, Rafaela Hillerbrand, Per Sandin, and Martin Peterson, eds., Handbook of Risk Theory: Epistemology, Decision Theory, Ethics, and Social Implications of Risk, pp. 517–43. Dordrecht: Springer.
Weirich, P. 2015a. Models of Decision-Making: Simplifying Choices. Cambridge: Cambridge University Press.
Weirich, P. 2015b. “Decisions without Sharp Probabilities.” Philosophia Scientiae 19: 213–25.
Weirich, P. 2015c. “Intrinsic Utility’s Compositionality.” Journal of the American Philosophical Association 1: 545–63.
Weirich, P. 2015d. “La théorie de la décision généralisée.” In Daniel Andler, ed., Sciences et Décision, Chap. 5. Besançon, France: Presses Universitaires de Franche-Comté.
Weirich, P. 2017. “Collective Rationality and Cooperation.” In Marija Jankovic and Kirk Ludwig, eds., Handbook of Collective Intentionality, pp. 209–20. London: Routledge.
Weirich, P. 2018a. “Rational Plans.” In J. L. Bermúdez, ed., Self-Control, Decision Theory, and Rationality, pp. 72–95. Cambridge: Cambridge University Press.
Weirich, P. 2018b. “Risk as a Consequence.” Topoi. doi: 10.1007/s11245-018-9570-4.
White, R. 2005. “Epistemic Permissiveness.” Philosophical Perspectives 19: 445–59.
White, R. 2010. “Evidential Symmetry and Mushy Credence.” In T. Gendler and J. Hawthorne, eds., Oxford Studies in Epistemology, Vol. 3, Chap. 7. New York: Oxford University Press.
Zynda, L. 2000. “Representation Theorems and Realism about Degrees of Belief.” Philosophy of Science 67: 45–69.
Index

For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on occasion, appear on only one of those pages.

Absent-Minded Driver, 239–40 context-independence, 93–94 acceptance, 192–94 imprecision, 111 accessibility, 12 independence, 100–1 by introspection, 12, 59 scope, 42–44 accompaniments of risk, 3 attitudes, 41 additivity, 94 basic intrinsic, 168 failures of, 140–41 con, 66 of intrinsic utilities, 104–5, 136–38, 157 conative, 41 mean-risk, 99–100, 101–2 doxastic, 41 of utilities, 94, 104 first-order, 75 Adler, M., 217 pro, 66 admissibility, 12 second-order, 75 of probability assignments, 12, 208–9 Augustin, T., Coolen, F., Cooman, G., and of utility assignments, 12, 208–9 M. Troffaes, 127n11 agents, 9–10. See also person Aumann, R. and R. Serrano, 173n4 groups, 9–10, 200 autonomous cognitive capacity, 5 individuals, 9–10 Aven, T., O. Renn, and E. A. Rosa, 213n17 standards for, 9–10 aversion, 42 temporal position of, 156–57, 233, 240 basic intrinsic, 43, 168 agreement, 205. See also consent intrinsic, 56–57 unanimous, 205 to loss, 79–80 Ahmed, A., 151–52n7 to uncertainty, 34, 56–57n8, 70 Allais, M., 13, 120 aversion to risk, 4, 44–45, 66–67 Allais’s paradox, 99n9, 119, 120, 120n6, as aversion to chance, 27 131, 131n15 basic intrinsic, 4, 34, 73, 107–8, 190 sequential version of, 155–57 as concavity of a utility curve, 44–46 Allingham, M., 6n3, 44–45 extrinsic, 4 Al-Najjar, N. and J. Weinstein, intrinsic, 4, 34, 58, 162, 199 49–50n3, 144n3 as mentalistic, 146 ambiguity, 33–34, 49–50n3, 174–75 reduction of, 113 Armendt, B., 133–34, 243–44n8 Arntzenius, F., A. Elga, and J. Babic, B., 191n2 Hawthorne, 115 Barberis, N., 79n8 Arrow, K., 13, 44–45 bargaining problem, 204n8 attitude feature, 93–94 Baron, J., 163–64 being quantitative, 114 Bayesian statistics, 219
Bayesianism, 153 objective, 153 subjective, 153 behavioral economics, 5 belief, 227 and communication, 227–29 Bell, D. and H. Raiffa, 14n7 Bénabou, R. and J. Tirole, 4n2 Bermúdez, J. L., 237n4 Bernoulli, D., 115–16 binding, 144, 149 binding contract, 207 Binmore, K., 10, 65, 204 Bradley, D., 239–40n5 Bradley, R., 127n11 Bradley, R. and H. O. Stefánsson, 94n6, 121n8, 122 Brigham, E. and J. Houston, 161, 173n4 Broome, J., 6n3, 90n5 Brown, M. J., 191–92n3 Buchak, L., 13, 31, 71n6, 130, 130n13, 131n15 causal decision theory, 25, 95, 132–33, 231, 244 centered worlds, 234–36 Chalmers, D., 223n2 chances, 21–22 additivity for intrinsic utilities of, 96–97 evidential, 21–22 extreme values of, 28 intrinsic utilities of, 56, 95 physical, 21–22 of realizing an extrinsic attitude, 96 Chang, H., 50–51 Chateauneuf, A. and M. Cohen, 132–33n16 Chen, Z. and Y. Wang, 173n4 Chipman, J., 206 choices, 10 consistency of, 10, 117 heuristics for, 222, 225–26 prediction of, 145–46, 150–51, 245–46 preparations for, 245–46 rationality of, 10 citizens, 197–98 basic goals of, 197–98 corrected preferences of, 211–12
informed evaluations of, 203, 214–15 preferences of, 202 purified preferences of, 211n13 utility functions of, 197–98 willingness-to-pay of, 202 coalitional games, 211 cognitive costs, 134, 225 and prospects of a successful choice, 225 cognitive psychology, 5 coherence, 73, 249–50 and substantive requirements, 249–50 Coleman, J., 1–2 collective act, 209–10 collective utility, 201–2 maximization of, 201–2, 205–6, 207–8 combinations of acts, 85, 136 at the same time, 85 in sequence, 85 combinations of decisions, 142 combinations of risks, 9, 136–41 interactions within, 136 combinations of risks and prospects, 88 created by an act, 87–88 created by an option, 87, 95 interactions within, 88 comparison of options, 188 compensation, 199–200, 206–7, 215–16 composites, 61 conditional preferences among, 62 evaluation of, 87 intrinsic utilities of, 109 preferences among, 62, 90 compositionality, 88–89, 110–11, 125–26, 188–89 and interchangeability of equivalents, 89 mean-risk, 99–100, 125–26 return-risk, 168 conative attitudes, 24 representation by multiple utility assignments, 24 concatenation, 106–7 conditional risks, 4 conditionalization, 153–54, 202–3, 237–39, 241 and gaining a way of understanding a proposition, 237–38 given imprecise probabilities, 153 Jeffrey’s generalization of, 153–54
and loss of information, 238, 239 for others, 178–79 and probabilities fitting evidence, 154, 239 consent, 200 unanimous, 200 consequences, 30–31, 143–44 basic, 103, 182–83 fine-graining of, 182–83 fine individuation of, 183–84 separability of, 31 simplification of, 182 consistency, 10, 73–75. See also coherence dynamic, 144n3 of regulations, 208n11 control, 142 direct, 142, 151–52n7, 193 coordination of utility scales, 36–37, 95, 248 core, 204 allocation, 207 cost-benefit analysis, 205–6 Cox, L., 202 Cozic, M. and B. Hill, 58n9 Cranor, C., 76, 196n1 creation of risks, 3 by groups, 9 by nature, 3 Cubitt, R., D. Navarro-Martinez, and C. Starmer, 127n11 Davis-Stober, C. and N. Brown, 49–50n4 Dawid, P., 213n16 de Melo-Martín, I. and K. Intemann, 191–92n3 decision problems, 113–15 framing of, 237 with idealizations relaxed, 247 standard, 114 decision procedure, 192–93, 221–22 decision theory, 201 decision under risk, 123 decision under uncertainty, 123 decisions, 113–14 given incomplete information, 180 heuristics for, 222–23 with imprecise probability and utility assignments, 180 instability of, 241–47
for others, 178–79 simultaneous, 141–42 degrees of belief, 52–53 Delbaen, F., 176–77n6 deliberation, 194 cognitive costs of, 232 dynamics of, 242–43 equilibrium of, 242–44 standards and procedures for, 221–22 using beliefs and desires, 194, 222, 227, 229–31 using probabilities and utilities, 194 desires, 42 all-things-considered, 44, 54–55 and attainability, 64–65 basic intrinsic, 43 extrinsic, 42 intrinsic, 42 for a prospect, 68 ratio comparisons of, 55 strengths of, 61 diachronic Dutch book argument, 154n8 Diaconis, P. and S. Zabell, 153–54 Dietrich, F., 45–46n1, 203n6 Dietrich, F. and C. List, 26, 26n6, 50n5 dilation, 128n12 diminishing marginal utility, 6n3, 45–46, 76–77, 78, 173, 176 direct inference, 21, 123, 233 dominance, 124 stochastic, 124–25 Douglas, H., 191–92 doxastic attitudes, 24 representation by multiple probability assignments, 24 Dreier, J., 122 efficiency, 200, 206, 207–8 as a goal of rationality, 200 Kaldor-Hicks, 206–7 Pareto, 206 Egan, A., 241–42 Elga, A., 128n12, 150–51, 239–40 Elliott, K. C. and D. J. McKaughan, 191–92 Ellsberg’s paradox, 119, 132–33, 140–41, 141n1 Epstein, L., 123–24
evaluation, 211 of acts, 211 of procedures, 211 events, 25–26 badness of, 19 context-sensitive occurrence of, 209 influenced by a world’s center, 235 physical realization of, 55 evidence, 21 accessibility of, 21 extensiveness of, 190–91 its separation from goals, 192 weakness of, 33–34, 35–36 expected utility, 8–9 “as if ” maximization of, 11–12, 115–16, 117 maximization of, 8–9, 11–12, 114, 117 (see also maximization of utility) permissively extended maximization of, 127–28 representational maximization of, 115–16, 118–19 substantive maximization of, 115–16, 118–19 expected-utility principle, 25, 87, 94–99, 136–37, 193, 249 experience of a goal, 43 expert, 181 assessment, 181 assessment of a risk’s size, 186 information, 181, 198, 202–3 probabilities, 184 testimony, 219 witness, 218–19 exposure to chance, 3, 28–30, 248. See also an act’s risk as a means, 70 intrinsic aversion to, 69–70, 78, 184–85 its equilibrium value, 31–33, 100–1 reduction of, 71 fact and value, 192 separation of, 192 FDA and genetic therapy, 214 financial management, 161 Finkelstein, M. and B. Levin, 219 Fischhoff, B. and J. Kadvany, 5 Fischhoff, B., S. Lichtenstein, P. Slovic, S. Derby, and R. Keeney, 196n1
Fleurbaey, M., 204n7 Foster, D. and S. Hart, 173n4 Friedman, D., R. M. Isaac, D. James, and S. Sunder, 3n1 Gaifman, H. and Y. Liu, 118–19n4 game theory, 201, 242–44 epistemic, 206 Ghirardato, P. and M. Siniscalchi, 145n4 Gibbard, A. and W. Harper, 30, 95n7, 231 Gilboa, I., 65, 118–19n4 Gilboa, I., A. Postlewaite, L. Samuelson, D. Schmeidler, 6n3, 11n5 Gilboa, I., S. Minardi, L. Samuelson, and D. Schmeidler, 121n7 Good, I. J., 12n6, 71 governmental design, 197 government regulations, 196 by a federal agency, 197 evaluation of, 198 by a regulatory agency, 196 Gregory, R., N. Dieckmann, E. Peters, L. Failing, G. Long, and M. Tusler, 197n2 Hacking, I., 122 Hahn, R., 205–6n10 Halevy, Y., 49–50n3 Hampton, J., 10–11n4 Hansson, B., 62n10 Hansson, S. O., 12–13, 33–34n8, 76 Hart, C. and M. Titelbaum, 128n12 Hausman, D., 211n13 hedging, 137, 144–47, 249 contrasted with insuring, 145 and interaction of risks, 146 Heilman, C., 50n6 Hicks, D., 191–92n3 Holt, C. A., and S. K. Laury, 49–50n4 Holton, G., 173n3 hypothesis testing, 191–92 Icard, T., 222n1 ignorance as a means, 71–72 incomparability, 123 indexicals, 233, 234 denotation of, 233–35 temporal, 240
indifference, 42 to the status quo, 55 as a zero point, 169–70 inductive logic, 201, 202 information gathering, 71, 146–47 information pooling, 202 informed attitude to a risk, 187 informed utility assignment, 185 interests of an agent, 20 introspection, 12, 133 Jasanoff, S., 196n1 Jeffrey, R., 13 Jorion, P., 163–64, 175–76 Joyce, J., 25, 95n7, 128n12, 242 Kadane, J., 218–19, 218–19n18 Kadane, J. and G. Woodworth, 219 Kahneman, D., 226 Kahneman, D. and A. Tversky, 5, 79, 80–81, 129–30 Kaplan, S. and B. J. Garrick, 66n3 Kavka, G., 149 Keeney, R. and H. Raiffa, 167n2 Kim, N., 24n5 kinds of risk, 18. See also types of risk normative significance of, 2, 19 Kment, B., 29n7 Kohl, C., G. Frampton, J. Sweet, A. Spok, N. Haddaway, R. Wilhelm, S. Unger, and J. Schiemann, 202–3n5 Krantz, D., R. Luce, P. Suppes, and A. Tversky, 61–62 Leitgeb, H., 224–25 Lewens, T., 1–2 Lewis, D., 21, 123, 234 List, C. and P. Pettit, 204n7 Luce, R. D., 49–50n4 Maccheroni, F., M. Marinacci, and D. Ruffino, 164n1 Machina, M., 49–50n3, 120n6 Machina, M. and M. Rothschild, 173n4 magnitudes, 50–52 Markowitz, H., 162–63 maximin principle, 124 maximization of utility, 117, 241 its generalization, 241
Mayo, D. and R. Hollander, 196n1 McClennen, E., 148n6 Meacham, C. and J. Weisberg, 10–11n4 mean-risk additivity, 101–11 mean-risk evaluation, 62, 161, 181, 189, 194–95, 248–49 of an option’s utility, 168 mean-risk principle, 87 means, 68n4 mean-variance evaluation, 164n1 measurement, 50–51, 57–58 conjoint, 61–62, 63, 102 measures of risks, 36, 74, 164. See also sizes of risks beta, 175 coefficient of variation, 175 severity, 36–37, 76 Sharpe ratio, 175 standard deviation, 38, 175 value-at-risk (VAR), 175–77 variance, 37–38, 174–75, 186 methods, 2 empirical, 2 philosophical, 2 Michaeli, M., 173n4 models, 7–8 causal, 202 generalization of, 8 idealizations of, 221 normative, 7–8, 179–80, 230 refinement of, 249 of regulation, 196 showing a phenomenon’s possibility, 223 Mongin, P., 120n6 monotonicity, 77 morality and rationality, 5 morality’s requirements, 1–2 Muller, G., 247n10 multi-attribute-utility analysis, 205–6 multi-attribute utility theory, 167n2 negotiation of a regulation, 198 as a coalitional game, 204 as a cooperative game, 204 as an idealized game, 201, 204, 216 nonadditivity of risks, 136–37 nonquantitative cases, 110, 124, 127 nonquantitative methods, 188–91
normative strength, 12 of principles, 12, 14–15 nudges, 211 objectivity, 128 of a choice, 128 in a legal sense, 196, 217–18 of a probability assignment, 128 in a social sense, 217 Okasha, S., 38n10, 116n2 Oliveira, L., 105n10 operationalism, 10 options, 114–15 at-will realization of, 142 comparisons of, 124 comprehensive outcomes of, 121 exclusion of risk from outcomes of, 119 hypothetical, 117 as possible decisions, 114–15 preference-rankings of, 249 utilities of, 115, 121n8 OSHA and benzene, 214 outcome of an option, 172–73 as finely individuated, 172–73 Parfit, D., 82 partition invariance, 132–33 Paul, L. A., 128n12 Pearl, J., 202 people’s will and interests, 209 permissivism, 53 concerning choice, 53 concerning doxastic attitude, 53 concerning strength of aversion to an option’s risk, 71 person, 21. See also agents cognitively ideal, 21 perspective in a world, 235–36 Peterson, M., 49–50, 167n2, 212 Peterson, M. and P. Vallentyne, 247n10 Pettigrew, R., 131n15, 247n10 Piccione, M. and A. Rubinstein, 239–40 Posner, E. and E. G. Weyl, 200 possibility, 3, 22 epistemic, 22, 82–83, 223 practical syllogism, 229 Pratt, J., 13, 44–45 Precautionary Principle, 212–13
preferences, 10–11 basic, 52 effects of framing on, 237 formation and explanation of, 13, 41–42, 118 intensities of, 57, 60, 61 among options, 53–54, 114, 132 rationality of, 112 representation of, 10–11 probability, 20–21, 41, 52 conditional, 25 evidential, 20–21, 25, 52 imprecise, 24, 127 inferred, 117–18 physical, 20–21, 52 relative to an understanding of a proposition, 234 probability interpretations, 21 Bayesian, 21 frequentist, 21 mentalist, 116n2 objective Bayesian, 23–24 professional advice, 178 proportionality, 75–77, 140, 164, 175, 184–85 propositions, 25–26 expressions of, 232 as objects of preferences, 26–27 presenting reasons, 26–27 self-locating, 239–41 sentential names for, 26, 75 understandings of, 75, 232 prospect theory, 79, 129–30 cumulative, 129–30 and reference points, 80–81 prospects, 27, 29, 78 public justification, 217–20 quantities, 50–52 Quiggin, J., 130 Rabin, M., 147n5 Rabin, M. and R. Thaler, 147n5 Raiffa, H., 173n5 Randall, A., 212n15 ratification, 244 and self-support, 244 (see also self-support)
rationality, 5–7, 64 of attitudes, 7, 41 collective, 206 conditional, 7 as a mental capacity, 6 principles of, 8 as a standard of evaluation, 6 technically defined, 6 reasons, 64 causal, 246–47 evidential, 246–47 higher order, 65 internal, 64 reflection, 12 unlimited, 12 representation, 58–59 of preferences, 59 of strengths of belief, 52–53 of strengths of desire, 106–7 representation theorems, 31, 58, 116–17 and consequences, 31 and multiple representations, 117 necessary preference axioms of, 116–17, 118 structural preference axioms of, 116–17, 118 requirements of rationality, 10. See also standards of rationality explanation of, 10 multichronic, 150 strength of, 14–15 synchronic, 150, 154 Rescher, N., 1–2 resoluteness, 148n6 responses to risks, 1, 4, 41, 248 according to type of risk, 248 acts, 41 attitudes, 41 public discussion of, 2 return-risk evaluation, 161–62. See also mean-variance evaluation derived from mean-risk evaluation, 165–67 of an investment, 166–67 Richter, R., 242n7 Riedel, F. and T. Hellmann, 173n4 rights of individuals, 197–98
risk, 5, 78 conditional, 25 as a consequence, 8–9, 13, 47–48, 119–23 expression of, 26 imprecise, 24, 77 as an interaction effect, 163–64 as a means, 68 past, 81–83 reduction of, 113 resolution of, 81, 83 risks for nonhumans, 5 risk of an act, 29–30, 36. See also risk of an option risk of an investment, 171 as an equilibrium value, 171 as individuated by size, 172–73 as separable from return, 171 risk management, 2, 193–94 rational, 2, 113 risk of an option, 69 and equilibrium, 186–87 risk of realizing an extrinsic aversion, 67 risk premium, 38, 48, 73, 80 risk types, 3 chance of a bad event, 3, 19, 36–37 epistemic, 22n2 evidential, 22–23, 170–71, 184, 212–17 inductive, 191–94 objective, 20 perceived, 22–23 physical, 22–23, 170 subjective, 20 volatility, 28 risk-weighted expected-utility (REU) theory, 130–34 risk weights, 13, 130 Rothschild, M. and J. Stiglitz, 173n4 Safra, Z. and U. Segal, 133–34n17 Sahlin, N. and J. Persson, 22n2, 208–9n12 Sahlin, N. and P. Weirich, 151 Saint Petersburg paradox, 123n9 Salmón, N., 21–22n1 Samet, D. and D. Schmeidler, 30 Samuelson, P., 147 Satan’s apple, 115 Savage, L. J., 13, 30, 31, 49–50n3, 118–19
Scarantino, A., 191–92 Schmeidler, D., 33–34n9, 132–33n16 scientific proceduralism, 210–11 self-support, 244–47 principle of, 247 separability, 62–63, 90–94, 125–26, 139, 189–91 equivalence, 92, 126 monotonic, 92 preference, 91, 110–11 of risk and other consequence, 185 of risk and return, 168–69 strong, 91 utility, 93 weak, 91 sequences of acts, 137–38. See also sequences of choices expected utility of, 137–38 interactions of exposures to chance of acts within, 139 risks from, 146 standards of rationality for, 142–44 sequences of choices, 139, 146 anticipated changes in information during, 152 changes in goals during, 152–53, 154–55 changes in information during, 152–57 exposures to chance of, 139 standards of rationality for, 148, 157, 249 utilities of, 139 sequences of options, 142–43, 146 domination among, 149–52 pragmatic equivalence of, 142–43n2 standards of rationality for, 147 Sharpe, W. F., 175 Shrader-Frechette, K., 210–11 sizes of risks, 36–39, 74, 169, 173, 185–86 Sleeping Beauty, 239–40 Slovic, P., 22–23n3 Smith, A. E. and W. Gans, 213n17 social institutions, 199–200, 210 sociology, 5 Spirtes, P., C. Glymour, and R. Scheines, 202 stability of evaluations, 199 Stalnaker, R., 231, 239–40n5
standards of legal practice, 218 standards of rationality, 136 multi-chronic, 136 synchronic, 136 Steel, D., 191–92, 212 Stefánsson, H. O. and R. Bradley, 49–50n4, 76–77 strategies, 241n6 mixed, 241n6 pure, 241n6 strengths of belief, 52–53, 115–16 strengths of desire, 115–16 Sugden, R., 192 Sunstein, C., 196n1, 211, 212 sure-thing principle, 120, 120n6 Thaler, R. and C. Sunstein, 211 theory of rational choice, 50 theory of risk, 1 normative, 3 philosophical, 1, 3, 248, 249 systematic, 12–13 Thoma, J. and J. Weisberg, 132–33 tolerance, 53 epistemic, 53 toxin puzzle, 149 transformative experience, 128n12 trustee decisions, 179, 249 and regulation, 201 Tversky, A. and D. Kahneman, 80–81, 129–30 types of risk, 19. See also kinds of risk normative significance of, 248 utility, 19–20, 41, 53–57 conditional, 54–55, 54n7 imprecise, 24, 127 mentalist interpretation of, 116n2 relative to an understanding of a proposition, 234, 235 scale for, 19–20, 53–54 scope of, 54 self-conditional, 245–47 utility function, 146 concavity of, 146 (see also aversion to risk) utility inferred and measured, 57–58, 117–18
utility level, 78 direction of change in, 78 magnitude of change in, 78–79 utility of a risk, 4 extrinsic, 4 intrinsic, 4 from a perspective, 82–83 utility of a world, 95 as a comprehensive utility, 95 as an intrinsic utility, 95 utility transfers, 207, 208 utility types, 54 causal, 54, 103–4, 106, 181–82 comprehensive, 54, 104, 181–82 epistemic, 194, 227 interpersonal, 205, 205n9
intrinsic, 54 marginal, 107–9
Viscusi, W., 198 volatility, 145 as exposure to chance, 164–65 diversification to reduce it, 162–63 of an investment, 162, 163–64 Wakker, P., 79, 123n10, 129–30 Walley, P., 20–21, 127n11 wanting, 66. See also desires White, R., 23–24n4, 128n12 Zynda, L., 10–11n4