Beings of Thought and Action: Epistemic and Practical Rationality (ISBN 9781108992985, 9781108834377)


English · 254 pages · 2021

BEINGS OF THOUGHT AND ACTION

In this book, Andy Mueller examines the ways in which epistemic and practical rationality are intertwined. In Part I, he presents an overview of the contemporary debates about epistemic norms for practical reasoning, and defends the thesis that epistemic irrationality can make one practically irrational. Mueller proposes a contextualist account of epistemic norms for practical reasoning and introduces novel epistemic norms pertaining to ends and hope. In Part II, Mueller considers current approaches to pragmatic encroachment in epistemology, ultimately providing a new principle-based argument for pragmatic encroachment. While the book defends tenets of the knowledge-first program, one of its main conclusions is thoroughly pragmatist: In an important sense, the practical has primacy over the epistemic.

Andy Mueller is a post-doctoral fellow at Goethe University Frankfurt. He works on epistemology and practical reasoning and has published articles in journals including Analysis, Analytic Philosophy, Episteme, and Synthese.

Downloaded from https://www.cambridge.org/core, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985


BEINGS OF THOUGHT AND ACTION
Epistemic and Practical Rationality

ANDY MUELLER
Goethe University Frankfurt


University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108834377
DOI: 10.1017/9781108992985

© Andreas Mueller, 2021

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2021

A catalogue record for this publication is available from the British Library.

ISBN 978-1-108-83437-7 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


Contents

List of Figures   page vi
List of Tables   vii
Acknowledgments   viii
Prologue   x

Part I  Beings of Thought in Action
1  Epistemic Encroachment on Practical Rationality   3
2  Practical Reasoning, Ends, and the End of Hope   31
3  Contexts, Costs, and Benefits   55
4  Knowledge and Seemingly Risky Actions   81

Part II  Beings of Action in Thought
5  Pragmatic Encroachment in Epistemology   109
6  Reasons for Belief and the Primacy of the Practical   140
7  Assessing Potential Explanations of Pragmatic Encroachment   169
8  Social Beings   193

Epilogue   212
Glossary   218
References   220
Index   229


Figures

3.1  COE and CFI: Variations   page 74
5.1  Shifting thresholds view (STV)   136
5.2  Total pragmatic encroachment (TPE)   138
6.1  The primacy of the practical   158

Tables

4.1  Surgeon case in EUT   page 95
5.1  Practical adequacy   127
5.2  Worsnip on stakes   128


Acknowledgments

This book grew from my doctoral dissertation, which I submitted in January 2016. I would like to thank both of my advisors, Marcus Willaschek and André Fuhrmann. Marcus was a superb advisor, untiring in providing feedback and encouragement even though we disagreed on many of the issues we discussed. His help throughout my academic career, prior to the dissertation and since, has been invaluable.

While writing the dissertation, I was able to visit Rutgers University and the University of Southern California (USC). Many thanks to Jason Stanley, who was my advisor during my stay at Rutgers. His input at the beginning of this project also helped to shape its final outcome. Huge thanks go to Jake Ross, my advisor at USC. Jake pushed my thinking in new directions and his encouragement was precious. I met many wonderful people in the philosophy departments at Rutgers and USC. I cannot mention them all by name, but thanks to all of them. Both departments are fantastic and I feel blessed to have been a guest.

The excellence cluster “The Formation of Normative Orders” at Goethe University provided me with a PhD scholarship from 2013 to 2016. I am grateful to everyone involved in the cluster for providing me with the necessary financial security as well as plenty of time to work on the thesis that has now become this book.

I was fortunate to receive many helpful comments while writing the dissertation, and also the manuscript of this book. Many thanks, in no particular order, to Ralph Wedgwood, Mikkel Gerken, Clayton Littlejohn, Adrienne Martin, Alex Worsnip, Daniel Whiting, Stephen Grimm, Davide Fassio, Jie Gao, Matt Benton, Luke Davies, Julien Dutant, Daniel Immerman, Amy Floweree, Roger Clarke, Jonathan Dancy, Wolfgang Barz, David Löwenstein, Eva Schmidt, Susanne Mantel, Alexander Dinges, Claudia Blöser, Thomas Sturm, and Tom Kelly. My apologies to those I may have forgotten.
I would also like to thank the two anonymous reviewers for Cambridge University Press for their insightful comments.



Chapter 1 draws on material from my article “How Does Epistemic Rationality Constrain Practical Rationality?,” previously published in Analytic Philosophy. Thanks to John Wiley & Sons for permitting me to reuse this material.

Finally, this book would not have been possible without the support from the people in my nonacademic life. My parents and my family have made many things possible that I treasure. The fact that they often do not understand what I am doing, and yet never fail to support me, must be a sign of unconditional love. For that, I am deeply grateful. Thanks to my friends for some much-needed balance. Special thanks to my wife Steffi, especially for having my back during the completion of this manuscript. I am very excited to continue this journey with you.


Prologue

Henri Bergson’s advice to “Think like a man of action and act like a man of thought” is simple yet profound. In philosophy, there is a tendency to address the question of how to regulate our thoughts about the world separately from the question of how to regulate our actions in the world. Division of labor is a fine thing but not without its risks. The overarching conviction that motivates this book is that we miss something important if we account for the practical and epistemic dimensions of our lives separately. A lack of concern with epistemological questions leads to an impoverished perspective on practical rationality. Conversely, a lack of concern with practical matters leads to an impoverished perspective on epistemology. Here, I outline what motivates this conviction.

We humans are beings of thought and action. Humans necessarily form thoughts about the world they inhabit. Even withholding belief is a kind of mental attitude toward the world. And we necessarily act in the world. Even a decision not to engage with the world, an omission to act, is a way to act. Moreover, our thinking and our acting are deeply intertwined. When we act, our actions are guided by how we think about the world. If one is starving and believes that a plant in one’s immediate surroundings is edible, one will eat it. But the necessity to act can shape how we think about the world as well. Acting in the world always comes with the risk of failure. Eating the plant is advisable if it is edible; if it is not, the consequences might be severe. This will influence whether and how we make up our minds about the question whether the plant is edible.

These examples intentionally have the ring of an evolutionary story. It is a plausible hypothesis that without this deep intertwinement between thought and action, human beings would not have made it very far, and a plausible conjecture that having this intertwinement is an evolutionary advantage.
It is hard to imagine that a species that acted without taking into account what it thought about the world would have made it very far. Similarly, if a species’ thoughts about



the world were not partly shaped by the risk of failed actions, it probably would not have been successful. If human beings are beings of thought and action, and if their nature as agents and thinkers is deeply intertwined, then it should be no surprise that theories of epistemology and of practical rationality alike must take this double nature into account.

A considerable number of papers and monographs have been published at the intersection of epistemology and practical rationality over the last two decades. While this monograph will critically engage with a considerable amount of this literature, it attempts neither an exhaustive survey nor a conclusive critical commentary. My main aim is to provide a reasonable case for my own views concerning epistemology and practical rationality; I am not going to provide a definitive rebuttal of all rival views. Even this modest aim makes certain omissions necessary. Perhaps the most salient omissions are semantic theories about “knows,” like epistemic contextualism, and findings from experimental philosophy about folk intuitions. I will touch on both of these topics only very briefly. Contextualism will only be discussed in the Epilogue and experimental philosophy in Chapter 5. And even there, I will say considerably less than one might expect if one is familiar with the current literature on shifting knowledge attributions. I do not intend to deny the relevance of these views and approaches. Nevertheless, I believe these omissions to be a defensible choice given the argumentative strategy I pursue to set out my own views.

Part I of the book deals with the debate about epistemic norms for practical reasoning and Part II with the debate about pragmatic encroachment in epistemology. The debate on epistemic norms for practical reasoning has, so far, primarily concerned the question of which epistemic condition makes it rationally permissible to treat a proposition as a reason for action.
The debate on pragmatic encroachment has focused mainly on the question of whether it is true that practical factors, such as the costs of error, can make a difference to whether a true belief amounts to knowledge. While both of these questions will figure prominently in the book, I explore some uncharted territory. I will also consider the questions whether there are epistemic norms for practical reasoning at all, whether there is an epistemic norm about which ends one can rationally pursue, and whether there is pragmatic encroachment on notions other than knowledge. Over the course of the book, I will defend a nonstandard picture of the relation between our epistemic and our practical nature. The epistemic and the practical are deeply intertwined; they encroach upon each other. Before I turn to a short chapter synopsis, I will set the stage for the whole book. My starting point is the topic that is now known under the heading



“epistemic norms for practical reasoning.”1 Its origins can be traced back to Fantl and McGrath (2002), who argued for a particular connection between knowledge and rational action. It is at least implicit in Hawthorne (2004: 29), who argues that one ought only to use what one knows as a premise in practical reasoning. Williamson (2005: 227) claims that knowledge is necessary for one to be entitled to use a proposition as a premise in practical reasoning. And Stanley (2005: 9) holds that “one should act only on what one knows.” The topic became explicit in Hawthorne and Stanley (2008), where the relevant terminology shifted to reasons for action and to the question of what epistemic conditions must be fulfilled so that it is rationally permissible to treat a proposition as a reason for action. Since it can be considered a locus classicus in the debate about epistemic norms for practical reasoning, I will focus on Hawthorne and Stanley (2008) here.

Hawthorne and Stanley’s starting point is the question: What is the relation between knowledge and rational action? They state that in at least one prominent theory of rational action, expected utility theory, knowledge is not relevant to determining what it is rational to do. What matters is one’s degree of confidence, or credence, in a proposition. According to expected utility theory, it can be rational to act on a proposition when one has a high degree of confidence in it, even though one fails to know it. At the end of this Prologue, I will say a few words about how my approach relates to expected utility theory. For now, I will focus on Hawthorne and Stanley’s theses and arguments.

Hawthorne and Stanley’s main argument for what has become known as the knowledge norm for practical reasoning is based on folk appraisals of actions. Their first example is about a lottery with 10,000 tickets and a first prize of $5,000. Consider the following string of practical reasoning of a lottery ticket owner considering selling his ticket for one cent:

(1) I will lose the lottery.
(2) If I keep the ticket I will get nothing.
(3) If I sell the ticket, I will get a cent.
(C) So, I ought to sell the ticket.2

1 In the following, I will sometimes switch between appraisals of reasoning and appraisals of action. That may be objectionable to those who hold that there is no epistemic norm for action, for example Simion (2018). However, everything that I say about actions could be rephrased as an evaluation of the practical reasoning underlying the action. And even Simion (2018: 235) agrees that there are epistemic norms for practical reasoning.
2 From Hawthorne and Stanley (2008: 572); however, the example also appears in Hawthorne (2004).
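As an aside, the expected utility point mentioned above can be made concrete with a back-of-the-envelope calculation (my illustration, not Hawthorne and Stanley’s; it assumes risk neutrality and exactly one winning ticket among the 10,000):

```latex
\mathrm{EU}(\text{keep}) \;=\; \tfrac{1}{10{,}000}\cdot \$5{,}000 \;+\; \tfrac{9{,}999}{10{,}000}\cdot \$0 \;=\; \$0.50,
\qquad
\mathrm{EU}(\text{sell}) \;=\; \$0.01.
```

On these assumptions, expected utility theory straightforwardly recommends keeping the ticket: even with a 0.9999 credence that (1) is true, treating (1) as settled and selling for a cent forfeits an expected fifty cents.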



Hawthorne and Stanley call this piece of practical reasoning absurd. They identify (1) as the culprit. While it is highly likely that one will lose the lottery if one owns a ticket, it is commonly said that one cannot know that one will lose3 – at least as long as that belief is based only on the statistical likelihood of losing. Hawthorne and Stanley think that it is the lack of knowledge of (1) that makes this piece of practical reasoning absurd.

Hawthorne and Stanley’s second example is about conditional orders. Imagine a sous-chef who is ordered to take the cake out of the oven if it is done before the master chef returns from his break. The sous-chef has the following conditional order: If the cake is done, then take it out of the oven. Hawthorne and Stanley think that conditional orders require knowledge of the antecedent before one can discharge the consequent. The sous-chef could be criticized for taking the cake out of the oven if he does not know that the cake is done.

The third batch of examples concerns negligence. For example, if a parent lets a child play close to a potentially dangerous dog, they are blameworthy if they do not know that the dog is harmless. The blame is apt regardless of whether the dog is actually dangerous. If the parent did not know that the dog is harmless, then they should not have let their child play near that dog. Or consider a doctor giving a flu shot to a patient. If the doctor does not know that the needle is clean and safe to use, they will rightly be subject to criticism.

Hawthorne and Stanley then proceed to argue that there are cases in which even a justified belief does not exempt one from blame. This leads to the conclusion that we generally blame for a lack of knowledge. I will omit this particular case involving justified belief, since we will see later that the assumption that we generally blame for a lack of knowledge does not hold up under scrutiny.

Nonetheless, all the cases mentioned support the view that there is an intimate connection between knowledge and our appraisal of action. Hawthorne and Stanley also point to ordinary language:

It is considerably more natural to appraise behavior with the verb ‘know’ than the phrase ‘justified belief’, or even ‘reasonable belief’. Perhaps this is because ‘know’ is a phrase of colloquial English, whereas ‘justified belief’ is a phrase from philosophy classrooms. But this is itself a fact that should be surprising, if the fundamental concept of appraisal were justification rather than knowledge. (Hawthorne and Stanley 2008: 573)

3 See Hawthorne (2004) for a book-length treatment of the issue of knowledge and lottery propositions.



This observation should be uncontroversial. The phrases “justified belief” or “rational belief” are not frequently used colloquially. This is granted by many critics and has proven a difficult fact to account for by those who assign justification a central role in practical reasoning. The fact that we appraise actions in terms of knowledge would be very surprising if knowledge were not somehow relevant to appraising actions.4

One background assumption needed in the case for the knowledge norm of practical reasoning is that there is a single epistemic state that suffices to avoid blame in all the cases. The cases mentioned so far do not rule out that in certain circumstances, lesser epistemic states than knowledge will do to avoid blame. This background assumption is not argued for, and it is a potentially contentious premise for those inclined to say that there is no single general epistemic condition for practical rationality. But I will bracket this issue here, because it seems that Hawthorne and Stanley never intended to give us a deductively valid argument. They can still rightly hold that the folk appraisals discussed here constitute an abductive argument for the following principle:

The Reason–Knowledge Principle (RKP): It is appropriate to treat the proposition that p as a reason for acting iff you know that p. (see Hawthorne and Stanley 2008: 578)5
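Since the two directions of this biconditional are treated separately in what follows, it may help to display them (my shorthand, not the authors’: Kp for “you know that p,” App(p) for “it is appropriate to treat p as a reason for acting”):

```latex
\text{Necessity direction:}\quad \mathrm{App}(p) \rightarrow Kp
\qquad\qquad
\text{Sufficiency direction:}\quad Kp \rightarrow \mathrm{App}(p)
```

Putative counterexamples of the kind discussed below target the necessity direction; whether knowledge always suffices is taken up separately in Chapter 4.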

The Reason–Knowledge Principle (RKP) gives a condition for when it is epistemically appropriate to treat a practically relevant proposition as a practical reason. Given the widely shared assumption that reasons make things rational to do, we can read the RKP as suggesting an epistemic condition for practical rationality. So “appropriate” is to be understood as “rationally permissible.” In the following, my terminology will vary; I will sometimes speak of “permissibility,” sometimes of “rationality.” Both are meant to be shorthand for “rational permissibility.”

I want to emphasize that the RKP is merely an epistemic condition for practical rationality or, as one could say, an epistemic precondition for practical rationality. One can meet the demands of the RKP and still fail to be practically rational because one treats a practically irrelevant proposition as a reason. For example, I know my grandma’s telephone number. However, if I were to treat that proposition as a reason to invest in stocks of a certain

4 In Mueller (2021), I offer an explanation of why that is without endorsing the Reason–Knowledge Principle I am about to introduce.
5 I have dropped the amendment about the p-dependency of the decision, which is part of the original version. I do so because, as I explain later, I will restrict my discussion to practically relevant propositions.



company, this would certainly be practically irrational. My grandma’s phone number is absolutely no indicator of stock market trends, and it thus cannot render any financial decision rational. As irrational as buying stocks based on a phone number may be, such practical reasoning is fine in one respect. Since I know my grandma’s phone number, my reasoning cannot be challenged on the grounds that it relied on a proposition to which I lacked an adequate epistemic relation. In this sense, my reasoning is better than the reasoning of somebody considering selling his lottery ticket because he takes it to be a loser.

In the following, the examples I will use always involve practically relevant propositions. In line with the current debate, I will focus on the question whether knowledge is the necessary and sufficient epistemic condition for it to be appropriate to treat a practically relevant proposition as a reason, that is, whether knowledge is the epistemic norm for practical reasoning.

A few words on the notion of “treating as a reason” are in order. By treating something as a reason, I mean nothing over and above using a proposition in practical deliberation. One takes the proposition to count in favor of or against a certain action and adjusts deliberation accordingly. I will often speak of “relying on p” or simply of “acting on p.” I assume that these various locutions all refer to the same phenomenon – a phenomenon that I might not be able to adequately describe except by using these locutions, but one familiar to all of us.

Many commentators have constructed cases for which we tend to judge that it is appropriate to treat a proposition as a reason for action even though the proposition is not known. These cases undermine the necessity direction of the RKP. There are three versions of this objection, which we can make vivid with simple cases. For all the cases, we assume the following at the outset: Hannah has made a reservation at a restaurant for a dinner date with Sarah.

Case F, false beliefs: Hannah is justified in believing that the restaurant where she is planning to have dinner is on 5th street. She looked it up on the Internet, which is a reliable way of finding out where restaurants are. But the restaurant recently moved to 6th street, and the website had not yet been updated with the new address when Hannah tried to look it up. Hannah lacks knowledge because she believes a falsehood, yet it seems appropriate to rely on the proposition that the restaurant is on 5th street.6

Case G, Gettiered beliefs: The website that Hannah used to look up the address had been hacked the night before, and the hacker changed the

6 Such cases can be found in Fantl and McGrath (2009) and Gerken (2011).



address listed on the website. However, by coincidence, he changed it to 6th street, the new location where the restaurant actually is. So, Hannah ends up having a justified true belief. But since she was in a Gettier case, she lacks knowledge. Yet, just as in the previous case, it seems that it is appropriate for Hannah to rely on her belief that the restaurant is on 6th street even if it does not amount to knowledge.7

Case P, partial beliefs: Hannah is on the way to the restaurant. She looked up the address, but then she forgot it. She thinks she remembers that it is on 5th street, but she is quite unsure about it. Still, she has a partial belief about the location of the restaurant.8 Since it is very urgent to get to the restaurant quickly, as otherwise the reservation will be canceled, Hannah decides to act on her partial belief. It seems appropriate for her to rely on this partial belief, even though it is not knowledge.9

Hawthorne and Stanley are aware of such putative counterexamples and have developed several strategies to deal with them. In regard to Case F and Case G, they hold that Hannah is excused for her failure to live up to the knowledge norm but that she nonetheless broke the norm.10 In regard to Case P, they hold that Hannah has knowledge of probabilities in virtue of which she satisfies the RKP; for example, she knows that it is likely (enough) that the restaurant is on 5th street. But these strategies to deflect the counterexamples against the necessity direction of the RKP have received criticism. The excuse maneuver is rejected in Brown (2008: 173), Fantl and McGrath (2009: 125), Gerken (2011), and Locke (2015). The knowledge-of-probabilities maneuver is rejected in Mueller and Ross (2017). I will not engage with further epicycles in the debate about the RKP. So far, I believe that the balance of reasons speaks in favor of rejecting at least the necessity direction of the RKP.

This leaves us with two questions: Which epistemic condition must be fulfilled in order for it to be permissible to treat p as a reason? Is at least the sufficiency direction of the RKP true? Perhaps surprisingly, the first question has hardly been discussed in debates about practical reasons. In fact, some have denied that there are such epistemic conditions, a view to which I will turn in a moment. For many, there seems to be only a negative relation between reasons and our epistemic standing toward them. For example, Dancy (2000: 57) writes that

7 Such cases can be found in Brown (2008), Neta (2009), and Smithies (2012).
8 Partial beliefs are contrasted with full, or outright, beliefs; they are often also referred to as credences or degrees of belief or confidence. I will use these terms interchangeably.
9 Such cases can be found in Jackson (2012) or Smithies (2012).
10 For a similar strategy, see also Boyd (2015), who uses the distinction between primary and secondary propriety to deal with counterexamples to the RKP.



there is an epistemic filter on reasons that affects the rational status of one’s actions; he uses an example from Anscombe to support this claim. While I sleep in my bed, somebody leaves a baby on my doorstep, and it dies during the night. The baby on my doorstep is a reason to get up, but it is not a reason I am rationally bound to follow because it is not accessible to me. The epistemic filter that determines whether I am bound to follow an existing reason has two components. First, existing reasons are not binding if they are inaccessible to me. Second, they are not binding even if they are accessible, provided one is not at fault for failing to access them.

While Dancy acknowledges the relevance of epistemic considerations for practical rationality, his conception of an epistemic filter falls short of a comprehensive account of the relation between epistemic and practical rationality. This is because Dancy’s filter serves a merely negative function. It tells us when epistemic considerations make it the case that existing reasons do not affect the rational status of our actions. Existing reasons do not have a bearing when they supervene on facts that we could not, or need not, have known about. But this filter is entirely silent on the first question posed above: Which epistemic condition must be fulfilled in order for it to be permissible to treat p as a reason?11

In later passages, it becomes clear that Dancy wants positive restrictions on when one may rely on one’s beliefs in practical reasoning. But he falls short of giving a clear answer as to what they are. Dancy (2000: 60) writes that only the rationally permissible aspects of an agent’s perspective can rationalize actions. While this is a very natural answer, it remains vague. Everything from some positive rational credence to Cartesian certainty is included in the rationally permissible aspects of the agent’s perspective.
Are all these different epistemic states equally permissible bases for practical reasoning in all practical reasoning situations? Moreover, there is a more fundamental worry. What justifies the assumption that practical rationality requires epistemically rational beliefs/credences? While this assumption might be intuitively shared by many, it has been challenged. If practical rationality does not even require

11 Similarly, Raz (2011: 110) talks about an epistemic filter. This epistemic filter can affect whether a fact that is unknown to the agent constitutes a reason for the agent. Raz holds that a reason an agent cannot know about is not a reason the agent has to do some act. But this epistemic filter does not affect whether that reason is a reason. Reasons persist through our ignorance, but our ignorance about them can affect what we have reason to do. This seems to be a merely negative account of how epistemic considerations can affect what we have reason to do. It does not answer the question whether we must have some positive epistemic status toward a proposition before it is permissible to rely on it in practical reasoning.


xviii

Prologue

rational beliefs, then the whole debate about epistemic norms for practical reasoning would seem misguided. It is the challenge, just hinted at, that marks the starting point of my investigation in this book. In Part I, I investigate in which way epistemic rationality bears on practical rationality. In Chapter 1, I argue that there is epistemic encroachment on practical rationality: Whether one is practically rational can be partly influenced by epistemic considerations. I provide an argument for epistemic encroachment based on considerations of instrumental rationality. From there, I argue that practical reasoning is subject to distinctive epistemic norms. In Chapter 2, I broaden the debate about epistemic norms for practical reasoning by considering a norm for which ends it can be practically rational to pursue. I do so by exploring a link between practical rationality and hope and developing an epistemic norm for hoping. Finally, I argue that once we see the wide variety of ends one can rationally have, this has upshots for the debate about potential successors of the RKP. In Chapter 3, I offer my own alternative to the RKP. I argue for a form of contextualism about epistemic norms for practical reasoning. I suggest that whether the degree of justification for a proposition provided by one’s evidence makes it rationally permissible to treat that proposition as a reason for action varies with context. I provide an account of the degree of justification a context calls for in terms of the costs of error and the costs of further inquiry. In Chapter 4, I address whether knowledge of a proposition is always sufficient to treat that proposition as a reason. A number of influential cases seem to suggest that knowledge does not always suffice. I argue that this impression is mistaken. In Part II, I investigate in which way and how practical considerations encroach on epistemology. 
In Chapter 5, I provide a novel argument in favor of pragmatic encroachment on knowledge, the thesis that whether one knows a proposition can partly be determined by practical factors. The argument draws on principles defended in Chapters 3 and 4. In Chapter 6, I develop an explanation of how pragmatic encroachment on epistemology works. First, I offer an informal theory of the strength of epistemic reasons. Then I argue that this theory, together with some assumptions about the function of belief, suggests that there is pragmatic encroachment on the strength of epistemic reasons. In Chapter 7, I assess rival explanations of pragmatic encroachment alongside my own account of total pragmatic encroachment. I argue that the rival explanations face problems while my own often compares favorably.


I then explore one of the more controversial aspects of my account: the instability of rational belief and rational credences. Finally, in Chapter 8, I turn to the fact that we are not only beings of thought and action but also social beings. Practical factors can vary widely from person to person, and this leads to a number of different challenges for any account of pragmatic encroachment. To deal with them, I argue for what I call a social-sensitive account of pragmatic encroachment, but I also acknowledge some commitments of my account that are bound to be controversial. To sum up the whole project in a (perhaps lengthy) slogan: The epistemic and the practical encroach on each other – knowledge is first, but the practical has primacy. In Chapters 2 and 4, I explain the way in which my project promises to make good on some claims that are at least in the spirit of the knowledge-first program. In Chapter 6, I explain the way in which the practical has primacy over the epistemic. Throughout this book, I try to stay neutral on some of the classical debates in epistemology. The two most salient are internalism versus externalism about epistemic justification and fallibilism versus infallibilism about knowledge. Regarding the first, I often point out how my views can be accommodated within an internalist or an externalist framework. Regarding the second, I generally proceed as if fallibilism, understood as the thesis that one can know that p even though one’s evidence does not entail p, is true. While this might not count as strict neutrality, I do point out what an infallibilist might say in response to my views. In many cases, assuming infallibilism would only make my tasks easier. In some instances, infallibilism might be a way to avoid a conclusion, but I point out that this would require a fairly radical reading of infallibilism, one that would rob us of most of what we ordinarily take ourselves to know.
I am generally willing to accept that the ultimate fate of my arguments may depend on other battlegrounds in epistemology, but I can only hope to engage in such further battles elsewhere. I also try to stay neutral on debates about reasons for action. My discussion is meant to be equally compatible with a Humean account of reasons, based on the agent’s desires, as with accounts that see the truths about reasons as grounded in values.12 It is also intended to be neutral on the question whether reasons are things out there in the world, psychological states, or abstract entities such as propositions. It will be harder to remain neutral on the issue of factualism, that is, whether reasons are facts (true propositions), as Case F (here) might already suggest. Where I touch on this issue, I outline various responses for the factualist.

12 Baril (2019: 66) argues that certain cases favor the combination of a thesis involving pragmatic encroachment with an externalist account of reasons for action, according to which some reasons for action are independent of the agent’s motivational set. While I find her cases intuitively compelling, nothing in the following depends on taking a particular side in the internalism/externalism debate about reasons for action.

Similarly, in respect of reasons for belief, unless indicated otherwise the term will always refer to an epistemic reason for believing: a consideration that suggests that some proposition is true, or likely to be true. I stay silent on debates about the nature of epistemic reasons for believing. I often use the term “evidence” to refer to epistemic reasons for belief, following the customary equation of epistemic reasons for belief and evidence. I assume that an epistemic reason for believing stands in sharp contrast with a practical reason for belief, for example the positive benefits one might reap by believing. Much of the following discussion is conducted within a framework of practical rationality that one may call reasons-based. This framework can be contrasted with expected utility theory, according to which practical rationality is a matter of maximizing expected utility. Reasons play no role in that framework, which relies only on credences and utilities to calculate expected utilities. The reasons-based framework can hardly be called a fringe option, so I do not defend this choice of framework. In fact, much like Weisberg (2013), I believe that the two frameworks can both be valuable and serve different purposes, though I am not able to go into details about their relation. I do, however, sometimes explicitly discuss objections framed in terms of expected utility theory to show that the views I hold can be maintained in that framework as well. A final preliminary: I use a large number of abbreviations throughout this book.
I do so because I believe this facilitates discussion and makes for a more concise presentation and easier-to-read sentences. Clearly, the use of abbreviations is only beneficial if readers know what they refer to. Very patient readers of earlier manuscript versions advised me to include a list of abbreviations in the final version, and I was at least wise enough to heed this advice once it was given. There is now a Glossary of all significant abbreviations for principles and cases, along with their definitions; it may be useful where there is a significant gap between an abbreviation’s first occurrence and subsequent discussions in which it features. All abbreviations are also listed in the Index.


Part I

Beings of Thought in Action


Chapter 1

Epistemic Encroachment on Practical Rationality

The quest for an epistemic norm for practical reasoning seeks to answer the question: Which epistemic condition must be satisfied in order for it to be rationally permissible to treat a proposition as a reason for action? That there is such an epistemic condition on practical reasoning seems to be one of the few assumptions that all participants in this debate can agree on. As pointed out in the Prologue, the assumption that there are epistemic norms is usually justified by pointing to the use of epistemic vocabulary in folk appraisals of practical reasoning. But, as we will see next, the validity of the inference from criticism to epistemic norms has recently been questioned. Still, can’t we take it for granted that there are epistemic norms for practical reasoning? Isn’t the assumption that practical reasoning cannot be wholly indifferent to epistemic concerns obviously true? Suppose that I believe, contrary to all the evidence that I have, that I am Napoleon. That belief is certainly irrational. If I were to act on this belief and decide to travel to Paris to reclaim my throne, this action would be just as irrational. So isn’t it obviously true that practical rationality depends on being epistemically rational? Given a tight connection between reasons for action and rational action, isn’t it natural to assume that one must satisfy certain epistemic conditions for it to be rationally permissible to treat a proposition as a reason for action? I do not take epistemic encroachment on practical rationality to be a surprising view. To me, it seemed obviously true, which is why I previously labeled it, facetiously, “the obvious view.”1 Nonetheless, the debate about epistemic norms for practical reasoning deserves to be put on a more solid footing than claims about alleged obviousness. Given that there are actual dissenters, there is a need for an explicit argument in favor of epistemic encroachment on practical rationality.
I will develop my arguments by engaging with two challenges. The first is due to Derek Parfit, who questions whether epistemic failings generally translate into failures of practical rationality. Sections 1.1, 1.2, 1.3, and 1.4 concern this general challenge to epistemic encroachment on practical rationality. In Section 1.5, I argue that we can infer from epistemic encroachment that there is an epistemic norm for practical reasoning. In Sections 1.6 and 1.7, I turn to the second challenge, due to Davide Fassio, who tries to resist a specific form of epistemic encroachment, namely, that there is a genuine epistemic norm for practical reasoning.

1 See Mueller (2017a).

1.1 Parfit’s Challenge to Epistemic Encroachment on Practical Rationality

Here is a boring textbook definition: Epistemic rationality is concerned with what to believe, while practical rationality is concerned with what to do. This boring definition gives rise to an interesting question. As just defined, rationality is a capacity that aims at resolving certain questions. This capacity is exercised in reasoning. The result of reasoning, whether it is a belief or an action, can be evaluated as rational depending on whether one’s rational capacities have been exercised properly. Practical reasoning cannot take place in a vacuum; it presupposes beliefs about the world with which we plan to interact. The interesting question, then, is this: To what extent does practically rational action depend on having rational beliefs? To many, it might seem that there is an obvious answer: One’s practical reasoning or one’s action cannot count as rational if they are based on irrational beliefs. The example about me believing that I am Napoleon suggests as much. Practical rationality depends on being epistemically rational.

Epistemic Encroachment on Practical Rationality (EE)
Whether an action is practically rational can depend on epistemic rationality.

In this colloquial form, EE is an imprecise slogan. The problem is not merely that EE does not tell us how the epistemic encroaches on the practical. The culprit of the imprecision is that “depends” could be indicative of one of several possible dependence relations. I have to ask readers sensing this ambiguity in EE to be patient until Section 1.4, where I will address this concern. The way in which I take the epistemic to encroach on the practical will emerge over the course of this chapter. The “can” in EE is a necessary and worthwhile addition. It allows us to sidestep cases in which one is rewarded for one’s epistemic irrationality. Suppose that an eccentric billionaire offers me a million dollars if I accomplish the following: to believe that it is not raining, which goes against all my evidence, as I see that it is absolutely pouring down, and to act as if it were not raining by taking no umbrella on my walk, which goes against my preference for staying dry. Let us ignore the controversial question whether I can form the relevant belief. To many, it might seem that doing what the billionaire asked for is entirely rational, given that I really want the reward of a million dollars. The “can” in EE allows us to sidestep such cases entirely. It allows that there can be cases in which epistemic irrationality will not make one practically irrational.

While my sample is certainly not representative of the discipline as a whole, everybody I have talked to about this issue seems to hold EE. Surprisingly, though, the view is hardly ever mentioned in writing. It receives at least some consideration in Audi (2004: 38). An explicit endorsement can be found in Whiting (2014a: 4) and in Harman (1999: 13), who writes that irrational beliefs can be the source of irrational decisions. Perhaps because EE seems fairly obvious, few bother to mention it, let alone argue for it. However, there is at least one explicit dissenter to EE. According to Derek Parfit, practical rationality does not depend on epistemic rationality. Parfit argues that what we can rationally do depends on our beliefs. When we have false beliefs, these beliefs can give us apparent reasons, as long as the content of what is believed, if true, would give us a real reason to act.2 This leads to the following definition of the rationality of actions:

(A) Our desires and acts are rational when they causally depend in the right way on beliefs whose truth would give us sufficient reasons to have these desires, and to act in these ways. (Parfit (2011: 112))

(A) seems plausible. However, I will ultimately reject it. I think that we must do so, because (A) entails (B).
I will argue that we have good reason to reject (B), which reads as follows:

(B) In most cases, it is irrelevant whether these beliefs are true or rational. Some of the exceptions involve certain normative beliefs. (Parfit (2011: 112))

(B) is the denial of EE. I shall set complications involving normative beliefs aside for the moment and return to them in Section 1.4.

2 Generally, anybody who holds that the contents of beliefs, no matter whether these beliefs are rational or not, provide one with reasons and also holds that rational action entirely depends on reasons so understood will have to deny EE.

Here is Parfit’s argument for (B). Suppose you believe, against all the evidence available to you, that smoking will improve your health. Parfit thinks that this belief is irrational. But he denies that this makes desiring to smoke or actually smoking irrational. Parfit (2011: 113) writes that “what makes our desires rational or irrational is not the rationality of the beliefs on which these desires causally depend, but the content of these beliefs, or what we believe.” While this quote is about desires, Parfit (2011: 14) holds essentially the same view on the rationality of actions. Parfit concedes that one might stipulate that actions based on irrational beliefs are also irrational, but he thinks that this is not the best view to hold. When it comes to the belief about smoking and health, Parfit thinks that it would be misleading to call the resulting action (i.e. smoking) practically irrational. Practical rationality is constrained only by the content of our beliefs. According to Parfit, epistemic rationality does not constrain practical rationality. The rational failure consists only in an improper reaction to epistemic reasons. Consequently, one merely displays epistemic irrationality in the given case. If this is correct, then EE is mistaken. But is it? In order to defend EE, one has to answer what I shall call Parfit’s challenge: What exactly is the practical shortcoming that actions based on irrational beliefs exhibit? Is there also an improper reaction to practical reasons? Contrary to Parfit, I will argue that for acting rationally, not only does content matter but also how we arrive at our attitudes toward that content. When one’s practical reasoning relies on epistemically improper beliefs, the resulting actions suffer from a practical defect, not merely from an epistemic one. In this sense, practical rationality depends on epistemic rationality. This is not merely a terminological dispute.
Although my focus here will be on EE, there are further implications. If you start smoking, then, given your irrational belief that smoking will improve your health, it would not be unusual to rebuke you for your action of smoking, and not just for your belief. Why would we rebuke your action if it were practically rational? If Parfit were right, then criticism of the action would be uncalled for. Therefore, the dispute does not seem merely terminological. It has substantial implications – for example, for our practice of rebuking the actions of others.

1.2 Ordinary Language and Epistemic Encroachment

I want to begin with an argument for EE based on ordinary language. I do not take this argument to be decisive, but it helps to demonstrate that EE is an intuitively appealing thesis. The failure of this argument to stand up to Parfit’s challenge shows that, while EE might be intuitively appealing, the challenge is genuine and proponents of EE need to address it. But first some preliminaries. For Parfit (2011: 123), to call an action irrational is to say that it is deserving of strong criticism (of the kind often expressed by words like “foolish,” “stupid,” or “crazy”). In line with this definition of practical irrationality, I will demonstrate that this form of criticism also applies to actions based on irrational beliefs – that is, beliefs based on insufficient reasons or on no reasons at all. While Parfit’s thesis is about actions, in this section I am primarily concerned with the mental process of practical reasoning. I assume that actions based on irrational practical reasoning are themselves irrational. I take this to be an innocent assumption and I will not defend it here. Given this assumption, the following considerations suggest that relying on an epistemically irrational belief in practical reasoning makes one’s practical reasoning irrational, and thus also the action based on it. This is just what EE suggests. I rely on two assumptions about practical reasoning. First, reasoning must be in some way informed by one’s beliefs. This should be uncontroversial. Second, part of reasoning is taking a consideration to be a basis from which to reason. I will call this attitude “taking p as a basis for reasoning.” If I take p as a basis for reasoning, I consider p fit to be used in further reasoning.3 If I believe that there is milk at the store, and I deliberate about where to get some milk, I take this belief as a basis for my reasoning about what to do. So practical reasoning is dependent on beliefs about some subject matter p, and in reasoning one chooses whether to treat one’s belief that p as a basis from which to reason. Two qualifications are in order. Bratman (1992) argues that the attitudes that guide practical reasoning go beyond belief.
He argues for a separate attitude of acceptance, which differs from the attitude of belief. To avoid confusion: taking p as a basis in reasoning does not strictly imply believing that p, since p might only be accepted. Similarly, it can plausibly be said that reasoning does not always require full outright beliefs but merely a positive credence, which can still fall short of full outright belief. But for present purposes, allow me to restrict practical reasoning to beliefs. I will return to credences at the end of Section 1.4.4

3 Employing this terminology allows me to remain neutral as to which role p plays in reasoning – whether, for instance, p is a reason or something else (such as an enabler). See Dancy (2004) for the distinction between reasons and enablers.
4 I think that the arguments to come could also be applied to acceptances. While acceptances may differ from belief, both should be subject to epistemic constraints when employed in practical reasoning. However, I will not pursue such an application here.


While these qualifications explain why an utterance of the following proposition can sometimes be true and might not even express any rational inconsistency, such an utterance is, in most ordinary situations, odd:

(1) I take p as a basis for my reasoning, but I do not believe that p.

Such utterances are odd because taking p as a basis for reasoning usually implies believing that p. The oddness is similar to that found in Moore’s paradox – although I am not claiming that either (1) or (2) (see following) is a new variant of Moore’s paradox. An utterance of “p, but I do not believe that p” can be true. But usually, by asserting that p, one implies that one believes that p, which is why these utterances sound paradoxical. The same can be said about the practical case. To take the availability of milk at the store as a basis for one’s reasoning usually implies that one believes that there is milk at the store. By changing the second part of the sentence, we can check for further oddness:

(2) I take p as a basis for my reasoning, but I have no reason to believe that p.
    I take p as a basis for my reasoning, but it is irrational to believe that p.

Both utterances subsumed under (2) seem just as odd as (1). On the basis of this observation, one might argue for EE. One might say that what best explains the oddity of (2) is that EE is true. Taking p as a basis for reasoning does not seem to be compatible with having no reason to believe that p, because practical rationality depends on epistemic rationality. This explanation of the oddity of (2) is not available to Parfit, and (2) seems to pose a further problem for him. Assume that I have the odd set of attitudes described in (2) and that I act on p. According to Parfit, if p were true and a sufficient reason to act, my action would still be rational. But if (2) is the first step of an irrational piece of practical reasoning, then, given the further assumption that irrational reasoning results in irrational actions, we arrive at the opposite verdict. This speaks against Parfit’s proposition (B) (Section 1.1). While I am sympathetic to this line of argument, we should not be swayed by it. In this form, the argument provides no answer to Parfit’s challenge. Parfit could offer an alternative explanation of the oddity of (2) that does not rely on EE. As I spelled out earlier, taking p as a basis for reasoning implies that one believes that p. However, the second conjunct is a concession that one lacks reason to believe that p. Thereby, one concedes to violating the epistemic requirement to form beliefs in accordance with one’s evidence. The oddity of (2) might thus be explained by the fact that the first conjunct implies that one holds a belief, while the second conjunct concedes that one lacks the proper reasons to hold this belief. Yet all that (2) expresses is the violation of an epistemic requirement. Parfit could then defend proposition (B) by arguing that (2) describes a case of epistemic irrationality but not practical irrationality. The problem with (2) is that one violates a principle of epistemic rationality – forming beliefs in accordance with one’s evidence. But it is not clear that this leads to a practical shortcoming or to an action that actually goes against genuine practical reasons. The considerations adduced so far do not demonstrate that; they thus leave Parfit’s challenge unanswered.

1.3 Answering Parfit’s Challenge

Practical reasoning cannot happen in a vacuum. We are dependent on worldly considerations in two ways. We need beliefs about our environment in order to navigate it and, ordinarily, these beliefs must be true if our interaction with the world is to be successful. Any competent practical reasoner is sensitive to this connection. Of course, practical reasoning is not to be judged by its outcomes alone. But part of reasoning is taking a consideration to be the right basis from which to reason in light of our interaction with the world. If one takes as a basis for reasoning a proposition that is not rationally defensible, and therefore unlikely to be true, it seems that one has made a practical mistake. We might say that proposition (2) is an instantiation of this practical mistake, and I will now turn to what this mistake consists in.

The practical mistake concerns instrumental rationality, which is about deciding which means to take to achieve your end. Discussions of instrumental rationality often focus on whether one ought to take means M to achieve end ε when one deems that M is necessary to achieve ε. Controversies arise because of ends that are not worthy of pursuit.5 I do not need to engage in this discussion here, because the following instrumental principle should be less controversial:

Insufficient Means Principle (IMP)
Given that ε is a worthy end, if one intends to achieve ε and one has a prima facie reason to believe that M, on its own or as a necessary part of a sufficient means M* with which one intends to achieve ε, is not suitable to achieve ε, then one has a prima facie reason not to M in order to achieve ε.

5 For an overview of this discussion, see Kolodny and Brunero (2013).


First, to keep things simple, I will only consider cases in which M refers to a single act. I will turn to more complex cases in which M is a necessary part of a sufficient means M* at the end of this section. IMP can be motivated by cases such as the following. Sarah intends to procure Sichuan pepper by going to the store, even though she knows that the store does not carry this pepper. Intuitively, Sarah’s going to the store would be practically irrational because it would be instrumentally irrational. Sarah has reason to believe that going to the store is not suitable for securing Sichuan pepper because she knows that the store does not carry it. IMP captures this. Cases in which an end is very complex or impossible to achieve by oneself might serve as counterexamples to IMP. Assume, for example, that my end is to bring about world peace, and that I select donating to Amnesty International as my means to achieving that end. Given that I know that it will take a lot more than this for my end to be realized, I have reason to believe that M is not suitable to achieve ε. But donating to Amnesty seems at least rationally permissible. So it seems that the fact that M is unsuited to realize ε does not amount to a strong prima facie reason not to M. There are two reasons why such a case does not pose a problem for IMP. First, the case can be easily redescribed such that my end is “doing my fair share to bring about world peace” rather than “bringing about world peace” per se. A person’s actual end need not be what she takes it to be. In the given case, a description of the agent as intending to do her fair share instead of actually trying to bring about world peace seems more charitable. So such cases do not amount to counterexamples, because there is always a more charitable redescription at hand. Second, and I will explain this in more detail later, IMP is about prima facie reasons. IMP indicates the existence of a prima facie reason. 
What exactly this reason is depends on the specific case. By “prima facie reasons” I mean reasons that can still be outweighed by further reasons. Even when there are prima facie reasons not to M, it can still be rational to M, because prima facie reasons can be overridden. So in some circumstances, it can be rational to M in order to ε even when one has reason to believe that M is not suitable to achieve ε.6 Whether M is actually suitable to achieve ε is inconsequential. This claim can be defended with an amended version of the shopping case. Assume now that Sarah has received false testimony from an otherwise reliable friend that the store does not carry Sichuan pepper; so she believes, but does not know, that the store does not carry it. She nonetheless has reason to believe that going to the store is not a suitable means to acquire Sichuan pepper. It would be irrational to go to the store, even though the store actually does carry the pepper and going there actually would be a suitable means.

6 The term “prima facie reason” here is based on Dancy’s notion of a “contributory reason”; see Dancy (2004). It is contrasted with a “pro toto” or “all things considered” reason.

This mirrors our reasoning in epistemic cases. If I have reason to believe my visual perception is not trustworthy – for example, because I have reason to believe that I am in an abnormally lit room – I should not trust what I perceive. This is independent of whether the lighting conditions actually are abnormal. Yet there is an important difference when it comes to the practical case. The consequent of IMP is not about what one should do but rather about prima facie reasons. This is to account for special circumstances, such as when a decision to act is forced upon an agent. Here is an example of such a forced-choice case. Suppose that it is extremely important that you are on time for a meeting, but you cannot remember whether it takes place on the second or the third floor. There is no way to find out the actual location, and you can only make it to one of these places in time. So you must choose between floors. Assume that you choose to go to the second floor, even though you have no reason to go there instead of the third floor. In such a scenario, you have reason to believe that going to the second floor is not suitable for arriving at the meeting on time. Yet the prima facie reason indicated by IMP is overridden by concerns about time.7 It seems entirely rational to go to the second floor, because by doing so you at least stand a chance of making it on time. Nonetheless, in the absence of any other reason, it remains irrational to select means you have reason to believe are unsuitable for bringing about your intended end.
I take it to be a plausible assumption that epistemic irrationality is not among the circumstances in which the prima facie reason indicated by IMP is overridden. That is, one’s epistemic irrationality does not generally provide a strong practical reason that could override the prima facie practical reason indicated by IMP. I will return to what exactly the prima facie practical reason indicated by IMP is in Section 1.4. My argument against Parfit also requires the following principle:

Linking Principle (LP) Given that p speaks in favor of M-ing in order to ε, if it is irrational to believe that p, then one has a prima facie reason to believe that M-ing is not suitable to achieve ε.8

7 In the case of giving to charity mentioned earlier, one might say that the prima facie reason IMP provides can be outweighed if the agent attributes a high symbolic value to making a donation.

8 Even if p is just one among many reasons one has to M in order to ε, the conditional still holds. That is because the consequent is merely about prima facie reasons, which, as I explained before, can be overridden by other reasons. Suppose that p and q are both sufficient reasons for M-ing, and suppose that one irrationally believes p but rationally believes q. Then the former fact provides one with a prima facie reason to believe that M-ing is not suitable to achieve ε. However, the latter fact overrides the former and makes it rational to believe that M-ing is suitable to achieve ε, at least as long as this belief is based on q, not on p.

Let us begin with the various ways in which the antecedent can be true. If it is irrational to believe that p, that may be because you have insufficient reason for believing p, no reason for believing p, or strong reasons for believing not-p. If you have reason to believe not-p, then you have reason to believe that it is unlikely that p is true. Similarly, when you have some but insufficient reason for believing p, since your reasons do not strongly indicate that p, you have reason to believe that p is unlikely to be true. Things are slightly different when you have no reason to believe p. It may then not necessarily be that you have a reason to believe that p is unlikely to be true. For example, you may never have considered whether p, and you may also not have any reason to believe that not-p, so that withholding is the appropriate attitude toward p. But still, if you have no reason to believe p, this fact is a reason to believe that p is not true (although this reason may not be decisive).

The thought with which I opened this section suggests that the consequent of LP obtains whenever the antecedent does. Successful interaction with the world requires true beliefs, and competent practical reasoners are sensitive to this connection. So if their sole reason for M-ing in order to ε is p, and they have at least a reason to believe that p is not true, or even likely to be untrue, they will have a reason to believe that M-ing is unsuitable to achieve ε. This holds even in a case in which one irrationally believes p because the balance of reasons speaks in favor of withholding on p: the fact that it could equally well be that not-p gives one at least a prima facie reason for believing that M-ing is unsuitable to achieve ε, because M-ing will achieve ε only if p.

In light of the caveats about prima facie reasons mentioned earlier, we can still admit that this prima facie reason might be outweighed by the fact that p is possible, so that it is ultimately rational to withhold on whether M-ing is suitable to achieve ε. But this does not undermine the point that, even then, there is a prima facie reason to believe that M-ing is unsuitable to achieve ε.

Given IMP and LP, I am now in a position to show that actions based on irrational beliefs are practically irrational because such actions go against the practical reasons indicated by IMP. Take a case in which there is just one relevant consideration, p, which speaks in favor of doing M in order to achieve ε. For example, the consideration that the store carries Sichuan pepper speaks in favor of going to the store in order to obtain it. Now add Sarah, who irrationally believes that p because her evidence strongly suggests that not-p. By first applying modus ponens to LP and then to IMP, it follows that Sarah has a prima facie reason not to M. Since there are no other reasons involved, Sarah’s M-ing is practically irrational, as it goes against a practical reason indicated by IMP.

The following dialogue scheme illustrates that, in light of this, simple denial of practical irrationality is insufficient to uphold Parfit’s challenge. For the sake of brevity, it contains two dialogues. The first takes place against the backdrop of an utterance of proposition (2) by Derek; the second is about a joint observation of Sarah by me and Derek. Considering both of these variants suggests that the absurdity of Derek’s final reply is not due to the first-personal nature of proposition (2).

me: You concede that it is irrational to believe that p, because you/Sarah have/has evidence suggesting that not-p?
derek: Yes!
me: Do you acknowledge that this gives you/Sarah a reason for believing that actions based on p – for example, M-ing based on p in order to ε – are unsuitable to achieve ε?
derek: Yes!
me: Do you agree that this gives you/Sarah a reason not to M?
derek: Yes!
me: So you do agree that acting on p would be practically irrational?
derek: Absolutely not!
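The reasoning Derek concedes step by step can be set out schematically. The abbreviations below are introduced purely for illustration and are not part of IMP or LP as stated: Irr(Bp) for “it is irrational to believe that p,” R(q) for “one has a prima facie reason to believe that q,” Suit(M, ε) for “M-ing is suitable to achieve ε,” and PF(¬M) for “one has a strong prima facie reason not to M.”

```latex
\begin{align*}
\text{(LP)}\quad  & \mathrm{Irr}(Bp) \rightarrow R\bigl(\neg\mathrm{Suit}(M,\varepsilon)\bigr)
  && \text{given that $p$ speaks in favor of $M$-ing in order to $\varepsilon$}\\
\text{(IMP)}\quad & R\bigl(\neg\mathrm{Suit}(M,\varepsilon)\bigr) \rightarrow \mathrm{PF}(\neg M)\\
\text{(1)}\quad   & \mathrm{Irr}(Bp)
  && \text{Sarah's belief goes against her evidence}\\
\text{(2)}\quad   & R\bigl(\neg\mathrm{Suit}(M,\varepsilon)\bigr)
  && \text{from (LP) and (1), modus ponens}\\
\text{(3)}\quad   & \mathrm{PF}(\neg M)
  && \text{from (IMP) and (2), modus ponens}
\end{align*}
```

Since p is the only relevant consideration in Sarah’s case, the prima facie reason in (3) is unopposed, which is why her M-ing is practically irrational.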

Derek’s final reply seems absurd for the following reason. He concedes that he/Sarah have no reason to believe that p. He then also concedes LP and IMP. Jointly, these concessions indicate that he/Sarah have a reason not to act on p, as we can see by reasoning via modus ponens on LP and then on IMP. Yet he denies that acting on p would be practically irrational. If we reasonably assume that instrumental rationality is an important part of practical rationality, and that IMP therefore draws attention to practical reasons, then Derek’s final reply seems bizarre.

This provides us with an answer to Parfit’s challenge. If one irrationally believes that p and p speaks in favor of taking a certain means to one’s end, then, if one acts on p, one acts against a practical reason indicated by IMP.

In real-life scenarios, means–end relations are often more complicated. Often we must take more than one means to achieve our ends. If I intend to quench my thirst, I must not only get a bottle of water, I must also unscrew the cap and ultimately drink from the bottle. Merely getting a bottle of water is not a sufficient means to quench my thirst. But
it would surely be absurd to think that one thereby has a strong prima facie reason not to get a bottle of water. That is why IMP contains a clause in which M is a necessary part of a sufficient means M*, where M* is a further action or maybe even a set of actions that is sufficient to bring about the intended end. This complication does not undermine my argument. Every action M that is a part of M* aims at achieving some smaller end e that is supposed to ultimately bring about ε. In the example, unscrewing the cap is supposed to realize the end that the water bottle is open. We can still apply IMP to any part of M*. If one takes p to be a reason for M, then, if it is irrational to believe that p, one has a strong prima facie reason not to M, because one has good reason to assume that M-ing will fail to bring about ε. So the main line of the argument is unaffected by more complex actions. Furthermore, if one has good reason to believe that any of the parts of M* will fail to bring about what they are supposed to, one has good reason to assume that M* is not suitable to bring about ε, and thus one should not engage in any of the actions that M* comprises.

To sum up my argument: The claim that epistemically irrational beliefs do not lead to irrational actions conflicts with LP and IMP, the latter being a plausible independent principle about instrumental rationality. When a person violates certain epistemic requirements on belief, she puts herself in the position of having reason to believe that her chosen means are not suitable for the realization of her ends. So, contra Parfit, such actions display not only epistemic but also practical irrationality. That is, they display practical irrationality in just the sense described by Parfit: They are foolish and strongly worthy of criticism. This answers Parfit’s challenge. Since (A) and (B) reduce practical rationality to questions about the content of beliefs, we must reject both.
As proposition (2) suggests, what matters is not only content but also whether one has reason to believe that content. This argument also vindicates EE. It demonstrates that reasoning that is based on epistemically irrational beliefs will also be reasoning that is practically defective and thus not practically rational.
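The extension to complex means can be summarized in purely schematic notation (again, the abbreviations are illustrative only): let M* = ⟨M1, …, Mn⟩ be a means sufficient for ε, where each necessary part Mi aims at a sub-end ei.

```latex
\begin{align*}
& \mathrm{Irr}(Bp_i) \rightarrow \mathrm{PF}(\neg M_i)
  && \text{LP and IMP applied to the part $M_i$, where $p_i$ favors $M_i$-ing}\\
& \exists i\; R\bigl(\neg\mathrm{Suit}(M_i, e_i)\bigr) \rightarrow R\bigl(\neg\mathrm{Suit}(M^{*}, \varepsilon)\bigr)
  && \text{good reason to think a part will fail carries over to $M^{*}$}
\end{align*}
```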

1.4 Epistemic Encroachment Refined

Here is a potential objection against my argument for EE from IMP. Talk of “dependence” contained in EE is ambiguous. EE can be read as a standard conditional claim, according to which epistemic rationality is a necessary condition for practical rationality. But one could also understand “depend” as referring to a metaphysical grounding relation, so that EE is about what makes it the case that an action is practically irrational.9 This gives us the following two versions of EE.

EE-C If (i) one’s doxastic state regarding p is epistemically irrational and (ii) there is no special reward for acting on p, then one’s actions based on p can be practically irrational.10

EE-G Epistemic irrationality can make one practically irrational in acting on p.

EE-G is not merely a claim about a conditional. It is a claim about what makes acting on p irrational: one’s epistemic irrationality in believing that p. EE-G is a stronger claim than EE-C because it is not just a standard conditional of the form “if A, then B.” Such conditionals can be true even when it is not the case that A makes B true. It might be true that if I am crying, then I am sad, but false that my crying makes me sad. It might be that an unfortunate event makes both antecedent and consequent true. Therefore, EE-C could be true while EE-G is false.

Parfit might retreat to the position that IMP is a conditional claim and thus at best supports EE-C, but not EE-G. This would be a retreat because, by accepting EE-C, Parfit would have to revise his verdicts on certain cases. Smoking because one believes that smoking is good for one’s health would be irrational, because according to EE-C, if one’s belief is irrational, then one’s action based on it is as well. However, as can be verified by the quotes in Section 1.1, Parfit often uses the locution of “making something rational.” For example, he holds that what makes our desires rational is the content of our beliefs. So it might be that his denial of EE is really to be understood as a denial of EE-G.

Parfit might deny EE-G by arguing that there is a common source that makes one irrational both in believing and in acting on p. Here is how this objection could be developed, based on what Parfit says about beliefs with normative content. Parfit (2011: 119f.) holds that normative beliefs are an exception to proposition (B). His example concerns agony. The nature of agony gives us both a practical reason to avoid agony and an epistemic reason to believe that one has such a practical reason. Suppose that it is true that if one irrationally believes that one does not have a reason to avoid agony, then one’s desire for agony is irrational as well. Yet this does not establish that one’s epistemic irrationality makes one practically irrational. Parfit might hold that it is the nature of agony, or the facts about the nature of agony, that give us both epistemic reasons and practical reasons to which we are unresponsive. Thus, the nature of agony makes one both epistemically and practically irrational. Consequently, it is not one’s epistemically irrational belief that makes one practically irrational, and hence it would seem that EE-G does not hold for normative beliefs.

But this attempt to avoid EE-G is flawed. First, as I have argued in Mueller (2017a), it is unclear how to properly extend this line of argument to nonnormative beliefs. Second, and more importantly, the suggested argument generally falls short of impugning EE-G. Take Sarah from the previous section: Even if one can find a single fact that makes her belief and action irrational, one has thereby not established that it is not an epistemic failing that makes for a practical failing. The problem is that the argument presupposes that, for EE-G to be true, it must be an irrational belief that makes one practically irrational. Thus, the argument might seem successful when one can show that the action based on p is not made irrational by the irrational belief that p. But this rests on a mistaken reading of EE-G, which holds that what makes acting on p irrational is epistemic irrationality in believing that p – or, in short, epistemic irrationality – not the belief that p, which is epistemically irrational. Therefore, EE-G is not refuted if one can demonstrate that it is not the irrational belief that p that makes an action based on p irrational. We can illustrate this by reconsidering the previous shopping example.

9 Talk about grounding is widespread in contemporary metaphysics. Here is a quick summary from one of its first proponents. Fine (2001: 15) explains grounding as “an explanatory relation: if the truth that P is grounded in other truths, then they account for its truth; P’s being the case holds in virtue of the other truths’ being the case.”

10 The second conjunct of the antecedent is meant to capture the “can” found in EE and is necessary to avoid straightforward falsification by the reward cases mentioned in Section 1.1.
I will use the qualification “apparent” to indicate that my argument also holds for Parfit’s notion of an apparent reason, which I will discuss briefly in Section 1.7. Sarah irrationally believes that the store carries Sichuan pepper, because this belief goes against her evidence E, which consists of testimony from her friends and her prior experiences, both of which suggest that the store does not carry Sichuan pepper. The reason why it is irrational to go to the store is the (apparent) fact that the store does not carry Sichuan pepper, which is something that Sarah has reason to believe because of E. What makes Sarah irrational in going to the store is that there is this reason not to go to the store, to which she is not sensitive, but which is within her cognitive reach because of E. The fact that Sarah does not react properly to E, which is a shortcoming of epistemic rationality, thus makes it the case that she is practically irrational. Sarah’s deficient epistemic performance
makes it the case that she acts contrary to a practical reason that is within cognitive reach, which is why Sarah’s action is irrational. By a practical reason within cognitive reach, I mean that this reason is in principle accessible to Sarah. It is not a reason available only to an omniscient being, but to any nonideal rational being with ordinary rational capacities.11 Because this reason is within the cognitive reach of any ordinary rational person, Sarah is at fault for not responding to it, and this is why Sarah’s action is irrational.12

11 I use the notion of cognitive reach in order to steer clear of concerns that drive the internalism versus externalism debate about epistemic justification. The notion of cognitive reach is a close cousin of the notion of what is accessible, as featured in accessibilism, the view that what one is justified in believing is determined by what is accessible. However, the notion of accessibility can be given either an internalist (see Audi 2001) or an externalist gloss (see Gibbons 2006).

12 The notion of cognitive reach, and what it means to be at fault in failing to respond to what is within cognitive reach, require further precisification. Certain facts may be within the cognitive reach of ordinary rational persons but require extensive training (e.g. medical school) or extensive amounts of time (e.g. a simple but extremely long calculation). I assume that it is controversial whether one is at fault for not responding to such facts. However, such controversial cases would only pose a problem if they were cases in which one irrationally believes something but an action based on the belief is rational. And it is not clear that the controversial cases hinted at – for example, having a belief that goes contrary to what one would have learned at medical school – are cases of epistemic irrationality.

Parfit (2011: 111) points out that for him, “to fail to respond to some reason, we must be aware of the facts that give us this reason.” He might hold that Sarah lacks the relevant awareness of the reason when it is merely within cognitive reach. Thus, we should reject the explanation of why Sarah’s action is irrational that I have just given.

There are two things to be said in reply. First, the phrase “awareness of the facts that give us this reason” is ambiguous. Sarah is aware of E, and E is the fact that gives her the reason not to go to the store. In this sense, Sarah is aware of the facts that give her a reason. However, Sarah might not be aware of the (apparent) fact that the store does not carry Sichuan pepper – the reason she fails to respond to. Parfit’s objection only works given this second reading, which I shall call the awareness constraint. Second, Parfit offers no detailed account of the relevant notion of awareness, but for the above objection to work, the awareness constraint needs to take a strong form, requiring actual awareness of certain (apparent) facts. And there are independent reasons to reject such a strong awareness requirement on failing to respond to reasons.

Consider the following two examples, in which pathologies or strong emotions stand in the way of becoming aware of certain facts. Suppose there are two political opponents who disagree strongly about certain issues but also agree on many others. Their agreement is a fact that is a reason for both to engage in cooperative action to bring about certain positive changes. But suppose that one politician is so obsessed with the disagreements that he fails to become aware of what both sides agree on, and consequently refuses any cooperation. I think this refusal to cooperate is an irrational action, even when one side may not be aware of the reason to cooperate. Here is another example. Suppose there is an extremely arachnophobic gun owner whose fear of spiders is so extreme that he starts shooting at harmless house spiders. I think that shooting at house spiders is still an irrational action, even if the gun owner, due to his phobia, is unaware of the fact that house spiders are harmless, which is a reason to refrain from shooting at them. Given such cases of irrational action, it is best to reject a strong awareness condition on failing to respond to reasons.

The notion of cognitive reach I suggested can handle such cases of irrational action. As these examples suggest, one’s action can fail to be rational because one fails to respond to reasons that are accessible to beings with ordinary rational capacities who do not suffer from the impediments mentioned.

Returning to my defense of EE-G, I want to stress a key point. Strictly speaking, it is not an irrational belief that makes one practically irrational. Rather, epistemic irrationality, which makes a belief irrational, can also make actions irrational. Therefore, it is mistaken to argue against EE-G by trying to show that the result of one’s epistemic irrationality, the irrational belief, does not make one practically irrational.

We can now easily show that EE-G, and my defense of it, also applies to normative beliefs. Let’s grant Parfit the claim that the nature of agony provides one with reasons to believe that one has a practical reason to avoid agony. We also grant that what makes one practically irrational is failing to respond to practical reasons.
But we can also ask what makes one fail to respond to this practical reason. And here we can give the same answer as before: It is one’s epistemic irrationality – unresponsiveness to the evidence provided by the nature of agony – that makes it the case that one fails to respond to a practical reason that is within cognitive reach. In other words, one’s practical failing is brought about by an epistemic failing. This is just what EE-G holds.

Our understanding of EE-G sits nicely with the argument from IMP given in Section 1.3. In Sarah’s case, she decides to M in order to achieve her end ε because of p. However, there are strong reasons to believe not-p. Her insensitivity to these reasons constitutes her epistemic irrationality. And her epistemic irrationality makes it the case that she has reason to believe that M is
not a sufficient means to achieve ε, because she has reason to believe that not-p, and, given not-p, M-ing is an insufficient means to achieve ε. In a nutshell: Epistemic irrationality will make one instrumentally irrational because it will lead one to choose means that one has reason to believe are insufficient for achieving one’s desired end.

In Sarah’s case, epistemic irrationality consists in believing something while having evidence to the contrary. But I have already mentioned other forms of epistemic irrationality – for example, believing something for absolutely no reason or believing things for insufficient reasons. In these other cases of epistemic irrationality, the reason indicated by IMP cannot be the (apparent) fact that the store does not carry Sichuan pepper. This is simply not supported when one has no or insufficient evidence for believing that the store carries Sichuan pepper, unlike in the original case. So what exactly is the reason Sarah is unresponsive to when her belief that the store carries Sichuan pepper is epistemically irrational because it is based on no or insufficient evidence? My suggestion is that Sarah is unresponsive to the practical reason given by the (apparent) fact that she has no or insufficient evidence for believing that the store carries Sichuan pepper, a reason that speaks in favor of not going to the store. This is compatible with the claim that one’s evidence is not a practical reason, as this (apparent) fact is not the evidence that Sarah has, but rather an (apparent) fact supported by her evidence.

Note that it is quite natural to cite the fact that one lacks sufficient evidence as a practical reason. Suppose Hannah and Sarah have the same evidence about the stock market, which gives them no or insufficient evidence for believing that stocks will rise. Hannah, out of the blue, then declares that stocks will rise, so she and Sarah should invest in stocks to make some money.
It would be a natural reaction if Sarah were to say that they should refrain from such an action because they lack sufficient evidence for believing that stocks will rise. So there is nothing unusual about saying that, sometimes, facts about one’s evidence can be practical reasons.13 While other forms of epistemic irrationality require a bit more attention as to how to fill in the details, my suggestion shows that there is a plausible way to do this which also coheres well with the overall picture I have argued for.

13 Some might say that it is more natural for Sarah to say that Hannah simply has no reason to buy stocks. I see the appeal of this suggestion and think that it poses no problem for my general argument, as an action not supported by any reason is still an irrational action. But my suggestion has three advantages. First, it avoids worries about negative reason existentials pressed in Schroeder (2007: 92). Second, it allows a treatment parallel to the other cases. Third, my suggestion does not require a modification of the consequent of IMP to “then one has a strong prima facie reason not to M or no reason to M.”

The considerations of the foregoing paragraph also help to make good on an issue that I have bracketed so far: reasoning that is based on credences. In many cases in which one’s reasoning is based on a positive credence in p that does not suffice for an outright belief in p, the consequent of IMP will be triggered. But since the consequent of IMP only concerns prima facie reasons, which can be outweighed, for example, by time concerns, it can nonetheless be rational to M in order to ε. The case of uncertainty about where the meeting will take place, given in Section 1.3, could be redescribed in such a way.14 However, credences can fail to be epistemically rational. When they do, one’s evidence does not support the credence that one has in p. But if one’s credence goes against the evidence that one has, this gives one an additional reason to believe that M-ing will not be suitable to achieve ε, a reason that goes beyond one’s general uncertainty about p. As I have just pointed out, facts about one’s evidence can be practical reasons. If one’s credence in p is not supported by one’s evidence, then this can be a practical reason not to M in order to achieve ε.

Let us sum up. In this section, we have achieved greater clarity about how to understand EE. We have distinguished between a conditional claim and a grounding claim. While both are true, it seems that the argument from IMP only supports EE-C. Yet in this section we have also developed support for EE-G, clarified how it is to be understood, and shown how these considerations relate back to the argument from IMP. The key lesson is that a grounding version of EE should not hold that one’s irrational belief makes one irrational in acting. Rather, one’s irrational reaction to the evidence, which constitutes epistemic irrationality, makes one practically irrational by making one unresponsive to practical reasons that are within cognitive reach.
Nonetheless, this fully supports the view that epistemic irrationality leads to practical irrationality.

1.5 From Epistemic Encroachment to Epistemic Norms for Practical Reasoning

If we accept EE, then it may seem that we have a good case for a second form in which the epistemic encroaches on the practical: namely, in the form of epistemic norms for practical reasoning that specify epistemic conditions one must meet in order for it to be rationally permissible to treat a proposition as a reason for action, as outlined in the Prologue.

14 If an action is based on a rational credence, then it is not based on an irrational belief. Thus such cases do not undermine EE.

Here is a tempting argument from EE to epistemic norms for practical reasoning. If Sarah’s epistemic irrationality in believing that the store carries Sichuan pepper makes her practically irrational in going to the store, then it cannot have been rationally permissible to treat that proposition as a reason for that action. That must be because of her deficient epistemic standing. Hence, there must be epistemic norms for practical reasoning that set out epistemic conditions for permissibly treating a proposition as a reason.

Unfortunately, this argument ignores the details of Section 1.4, which provide the skeptic about epistemic norms for practical reasoning with grounds for resisting it. The key thought in Section 1.4 was that a lack of epistemic rationality makes one insensitive to practical reasons that are within one’s cognitive reach. If Sarah believes that the store carries Sichuan pepper against good evidence to the contrary, then Sarah has a reason not to go to the store that is within cognitive reach – namely, that the store does not carry Sichuan pepper. If she goes to the store nonetheless, she acts against this reason. However, it does not follow that Sarah must satisfy an epistemic condition in order for it to be permissible to treat the proposition that the store carries Sichuan pepper as a reason to go to the store. We could say that there are no epistemic conditions on permissibly treating a proposition as a reason. Hence, it was rationally permissible for Sarah to treat that proposition as a reason. What made Sarah’s action irrational is the additional practical reason, that the store does not carry Sichuan pepper, to which Sarah is not sensitive.
We could then simply say that Sarah has a reason to go to the store, namely, that it carries Sichuan pepper, but also a stronger reason not to go to the store, namely, that it does not carry Sichuan pepper. Since the latter reason outweighs the former, this is all we need to account for Sarah's irrationality. No appeal to epistemic norms for practical reasoning is necessary to account for Sarah's practical irrationality, and hence the ideas contained in the previous sections cannot establish that there are epistemic norms for practical reasoning of the kind mentioned in the Prologue.

While there is no watertight argument from my case for EE to epistemic norms for practical reasoning, I still believe that this way of resisting the conclusion is itself to be resisted. To see why, it is helpful to put things in terms of two competing storylines. According to the first, which I call "YES" (for yes to epistemic norms for practical reasoning), there are epistemic conditions on what it can be rationally permissible to treat as a reason for action. When one irrationally believes that p, it is not rationally permissible to treat p as a reason to ɸ. To make YES coherent with the central idea of Section 1.4, it is assumed that there is also a practical reason to not-ɸ, namely, not-p. Each of these considerations, that not-p and that it is impermissible to treat p as a reason, on its own suffices to make it irrational to ɸ. According to YES, then, one's practical irrationality in ɸ-ing is overdetermined.

According to the second story, "NO" (for no to epistemic norms for practical reasoning), there are no epistemic conditions on what one can permissibly treat as a reason for action. Even when one irrationally believes that p, it is rationally permissible to treat p as a reason to ɸ. However, there is also a reason to not-ɸ, namely, not-p. The latter reason outweighs the former, and that is why it is practically irrational to ɸ.

Which of the two storylines should we accept? Reasons of parsimony suggest that we ought to prefer NO. If we can account for the relevant cases without assuming that there are epistemic norms for practical reasoning, then we need not bother with such norms. Yet while reasons of parsimony favor NO, we should only prefer NO over YES if it is otherwise plausible. And this is where I have serious reservations. According to NO, the irrationality of ɸ-ing arises from a weighing situation. Weighing reasons against each other is a part of practical reasoning. However, NO forces us to accept some very odd practical reasoning as rationally permissible. According to NO, what one does in this scenario is to treat p as a reason for ɸ-ing and weigh p against not-p as a reason for not-ɸ-ing. Setting aside issues about how the respective weight of each reason is determined, it seems seriously odd to engage in such weighing.
Consider if Sarah were to reason as follows: That the store carries Sichuan pepper is a reason to go to the store. But that the store does not carry Sichuan pepper is a reason not to go to the store. Since the latter reason outweighs the former, it is not rational to go to the store. To me, such reasoning seems highly odd. But according to NO, it is perfectly permissible reasoning. The oddity is not merely first-personal, as the reasoning here does not involve first-person pronouns. If NO were right, this is how we could describe how Sarah ought to reason if she does treat the proposition that the store carries Sichuan pepper as a reason for action despite her evidence to the contrary. According to NO, we cannot ask Sarah to stop treating this proposition as a reason. All that we can ask of her is to weigh it properly against the other reason, that the store does not carry Sichuan pepper. This strikes me as very implausible, and it does not seem like that is how we ever actually reason.

That we do not reason as suggested by NO is supported by considerations about realizing mistakes in one's reasoning and criticizing the reasoning of others. If NO were right, one would expect that there could in principle be weighing mistakes and corresponding reports about such mistakes. If Sarah were to engage in reasoning as suggested by NO, and made a mistake, she could report it in the following way:

Oh, I see my mistake now. I did treat my irrational belief that the store carries Sichuan pepper as a reason to go to the store. That in itself was fine. I only got things wrong when I weighed this reason against the reason that the store does not carry Sichuan pepper, which, as I now see, was a weightier reason.

Given how frequently we make mistakes in reasoning, one would expect that we would encounter such reports in everyday life. I contend that probably most of us have never encountered such an explanation of an irrational action. Additionally, I think this explanation sounds highly odd, as it relies on the general idea that it can be permissible to weigh p and not-p as reasons against each other.

Suppose I criticize Sarah's reasoning by saying, "You do not have evidence for believing p; in fact, you have evidence for believing not-p." According to NO, the first part of my utterance is not a legitimate criticism. Only the second part is a valid criticism, highlighting a practical reason that was within cognitive reach and to which Sarah was not properly sensitive. Again, that does not strike me as particularly plausible. What I want Sarah to do in light of my critique is not just to take into account a further reason in her reasoning. What I really want is for her not to start reasoning by relying on p at all, as it is inappropriate to treat p as a reason. But that description is not available to NO.

At this point we should pause. While NO is more parsimonious, it does have consequences that I take to fall in the categories of the odd and the implausible. YES has none of those consequences. Does the additional assumption that there are epistemic norms for practical reasoning really amount to an excessive assumption that calls for Ockham's razor once these consequences of NO are pointed out? I think not.

I think we can further bolster YES if we remind ourselves that we are beings of thought and action. If NO were right, then our practical reasoning would be entirely separated from our epistemic reasoning. What we ought to believe about the world would not matter at all to how we ought to reason about how to act. It is hard to believe that we are such creatures. The assumption that there are epistemic norms, however, avoids this conclusion. It provides a constraint on which beliefs can permissibly enter into our practical reasoning and guarantees that there is a link between our epistemic and our practical nature. To defend NO, perhaps one could say that one's epistemic situation has a bearing on the weight of the reasons that one considers. This would not entirely cut the ties between our epistemic and practical nature. I have no argument against this possibility; however, I am not aware of an established theory of the weight of reasons that could make good on this suggestion. I only claim that YES seems to be a more natural consequence of our nature as beings of thought and action. If we really ought not to think about the world in a certain way, then this thought ought not to influence our decision-making about how to act in the world. And it does seem that this direct link is not available to a view like NO.

As I said at the outset of this section, my defense of EE does not give us a watertight argument in favor of the assumption that there are epistemic norms for practical reasoning. Yet, given my defense of EE, the only story that allows for the denial of epistemic norms for practical reasoning has a number of implausible consequences. Thus, while we do not have a watertight case, we nonetheless have a good case for the assumption that there are epistemic norms for practical reasoning.

1.6 Fassio's Challenge to Epistemic Encroachment

One might concede EE, concede that YES is more plausible than NO, and yet question whether there are genuine epistemic norms for practical reasoning. Even if epistemic encroachment is true, so the thought goes, the epistemic does not directly encroach on the norms that govern practical reasoning, as YES would have it. Fassio (2017) presents an argument along these lines. According to him, the assumption that there are epistemic norms for practical reasoning is based on a mistake. His arguments particularly aim at the standard argument in favor of epistemic norms for practical reasoning, which relies on observations of our use of epistemic vocabulary to criticize someone's practical reasoning. Fassio argues that this does not give us sufficient reason to hold that there are epistemic norms for practical reasoning. While my previous argument for epistemic norms for practical reasoning is not entirely based on the standard argument, it does partially rely on observations about criticism to discredit NO. Thus it is worth addressing Fassio's challenge.


Fassio's skepticism about epistemic norms for practical reasoning is based on the possibility of distinguishing between norms and the regulation conditions that pertain to a norm (Fassio 2017: 2140). Norms require us to do certain things, for example, to stop at a red traffic light.

RL Drivers ought to stop when traffic lights are red.

However, a norm like RL does not specify what to do in order to satisfy it, especially not which epistemic conditions are necessary to satisfy it. Regulation conditions concern how to do what norms require us to do. Usually, we follow RL by reliance on our visual perception. We see that there is a red light and then we stop. A regulation condition of RL might thus suggest using visual perception in order to follow RL. However, RL itself is entirely silent on how we satisfy it. Fassio uses this distinction between norms and regulation conditions to argue that criticism need not always be indicative of a norm violation. Criticism could also pertain to the violation of a common regulation condition. Suppose I run a red light and a shocked passenger trenchantly asks me, "Didn't you see the red light?" According to Fassio, this criticism is not strictly indicative of a violation of a norm that requires me to see the red light I am supposed to stop at. After all, RL does not require me to see the red light. Granted, it would be ludicrous to drive blindfolded. Yet if I did, and miraculously stopped at a red light, the criticism "You didn't see the red light!" would not amount to a criticism of a violation of RL, but merely of a violation of the common regulation condition of RL. Seeing the red light is not part of the satisfaction condition of RL. It is merely a regulation condition that one must meet in most ordinary circumstances in order to satisfy RL. The "Didn't you see . . .?" criticism pertains only to a violation of a regulation condition of RL. It does not indicate that there is another norm in addition to RL that requires me to see the red light that RL demands me to stop at. Fassio transposes this argument so that it applies to the common epistemic criticism that is appealed to in the debate about epistemic norms for practical reasoning.
He holds that practical reasoning is governed by the following norm:

RN It is appropriate for S to use p as a premise in her practical reasoning about whether to F iff p is a reason for S to F/not-F. (Fassio 2017: 2145)


RN is not an epistemic norm, as it does not feature any epistemic vocabulary and its satisfaction does not depend on epistemic conditions. Whether a proposition is a practical reason does not depend on how one is epistemically related to that proposition. Thus, RN can be satisfied independently of how one is epistemically related to p. Fassio argues that our epistemic criticism only concerns the regulation conditions of RN. Fassio (2017: 2148) names the following three regulation conditions of RN: one must acknowledge RN, one must recognize that p is the case, and one must recognize that p is a reason to F/not-F. Fassio simply applies the same reasoning to RN that was applied to RL to argue that epistemic criticism is not indicative of an epistemic norm for practical reasoning that goes beyond RN.

Suppose Mary, like most of us, prefers not to get wet when it rains. Mary has only weak evidence that it will not rain today, yet this is what she believes. Mary treats this proposition as a reason not to take her umbrella with her today. Assuming that it will nonetheless not rain today, and that the fact that it will not rain today is a reason not to take an umbrella today, Mary satisfies RN. But suppose, in addition, that Mary then meets her friend Tom, who knows that her belief that it will not rain today is based on flimsy evidence. Tom criticizes Mary for not knowing (or not being justified in believing) that it will not rain today. While Fassio deems this criticism to be legitimate, he denies that it indicates that Mary has violated a genuine epistemic norm that requires one to treat as reasons only propositions toward which one has a specific epistemic standing, for example, knowledge or justified belief. According to Fassio, Mary has merely violated a regulation condition of RN.
But given the distinction between norms and their regulation conditions, it does not follow that Mary has violated an epistemic norm for practical reasoning that would require knowledge (or a justified belief) of the proposition she treats as a reason. If that is correct, then at least the epistemic does not encroach on norms that concern practical reasoning. This is Fassio’s challenge: Why should we assume that there are genuine epistemic norms for practical reasoning when the same data could be explained by a combination of a purely practical norm RN and its regulation conditions?

1.7 Answering Fassio's Challenge

While I accept Fassio's challenge, his distinction between norms and regulation conditions is not sufficient to demonstrate that practical reasoning is not subject to epistemic norms. The relevant epistemic assessments do not merely pertain to regulation conditions of RN, as it is not convincing that RN has such regulation conditions. Consequently, it cannot be maintained that epistemic criticism of the form "But you did not know that p!" merely indicates the violation of a regulation condition of RN.

Let us go back to RL, which demands that one stop at red lights. If regulation conditions concern how to do what the norm requires one to do, then it is immensely plausible that the regulation conditions of RL concern one's epistemic standing toward one's physical surroundings, as that is where traffic lights are usually to be found. But now consider RN and its alleged regulation conditions for comparison. One could say that recognizing that it is not raining today is a regulation condition for RN in Mary's case cited earlier. If the cases are parallel, then the regulation conditions of RN, just like those of RL, require one to establish a certain epistemic relation to one's physical surroundings.

But there are reasons to believe that these cases are not parallel. RN states that it is rational to treat p as a reason to F iff p is a reason to F. Plausibly, then, the regulation conditions of RN should concern a recognition not of whether p, but of whether p is a reason to F. And knowing that the fact that it is not raining today is a reason not to take the umbrella seems to be quite different from knowing that there is a red light ahead. The former is what one might call knowledge of the normative realm, knowing what counts as a reason for what. It is not obvious that we need to know that it is not raining in order to know that the proposition that it is not raining today is a reason not to take an umbrella. Suppose someone asks, given one's preference for staying dry, whether the proposition that it is not raining today is a reason to leave the umbrella at home. It does not seem to be necessary to ask in reply what the truth-value of this proposition is, or what one's evidence in favor of this proposition is.
Given one's preferences, one can directly answer that it is a reason not to take the umbrella. Similarly, consider hypotheticals. It seems that we can directly answer the question whether, if it were not raining today, this would be a reason not to take an umbrella, without considering whether it is actually raining or what our evidence in that hypothetical scenario is. If we can answer such questions directly, then it is unclear why RN would have a regulation condition that requires recognition that p on top of recognition that p is a reason to F. But if the regulation conditions of RN do not include the recognition that p, or that one has some positive epistemic standing toward p, then Fassio's challenge does not arise. It is not the case that a criticism of the form "But you did not know that p!" merely pertains to a regulation condition of RN, because it is not clear why RN should have such a regulation condition.


And thus one cannot say that the complaint is merely indicative of a violation of a regulation condition of RN and not of a genuine epistemic norm for practical reasoning. If the criticism pertained to a regulation condition of RN, one would expect it to have a different form, namely, "But you did not know that p is a practical reason to F!" Since the actual criticism "But you did not know that p!" has a quite different form, we should question whether it pertains to a regulation condition of RN.

Here is a potential rejoinder to this strategy for avoiding Fassio's challenge. Fassio (2017: 2145) assumes factualism about reasons for action. This view holds that reasons for action are facts. Fassio remains noncommittal about whether facts are true propositions or a different kind of entity that makes a proposition true. In any case, on this view, p can be a reason only if p is true. If only a true proposition can be a reason, then it can plausibly be said that the regulation conditions of RN cannot merely concern one's epistemic standing toward what counts as a reason for what, but must also concern what is going on in the actual world. If "it is not raining today" can only be a reason if it is in fact not raining today, then it makes sense that the regulation conditions of RN require me to have an eye not only on the normative realm but also on my physical surroundings. Contrary to what I said earlier, then, recognizing that p is indeed a plausible regulation condition of RN.

But this rejoinder can be rebutted. First, it commits one to factualism, which is not an entirely uncontroversial view. But, more importantly, this commitment brings with it further undesirable consequences. RN specifies what is appropriate to be treated as a reason, and it certainly sounds natural that it can only be appropriate to treat as a reason what is indeed a reason.
However, "appropriate," as it is commonly used in the debate about epistemic norms for practical reasoning, is understood as "rational permissibility."15 So let us rephrase RN to make this explicit.

15 See Hawthorne and Stanley (2008: 578) for this qualification of "appropriate" just after introducing their epistemic norm for practical reasoning, the Reason–Knowledge Principle.

RN-R It is rationally permissible for S to use p as a premise in her practical reasoning about whether to F iff p is a reason for S to F/not-F.

According to RN-R and factualism, it is never rationally permissible to treat a false proposition as a reason. But that seems implausible. Suppose you have excellent evidence that it will not rain today. However, a very special weather phenomenon occurs which causes an unexpected downpour. If factualism is true, then the proposition that it will not rain today is disqualified from being a reason not to take an umbrella today, as this is a falsehood. But then, according to RN-R, it is not rationally permissible to treat this proposition as a reason. That seems highly implausible, as is also suggested by Case F in the Prologue.16 Furthermore, if reasons are required for rational action, then it follows that not taking an umbrella is not rational. But that is equally implausible. Thus the combination of RN and factualism leads to undesirable consequences.

To be clear: The problem here is not factualism, but the combination of factualism and RN. The problem of rational action based on false belief is a well-known problem for factualism, and most factualists have strategies to deal with it. For example, Parfit, who is invoked by Fassio as a proponent of factualism, introduces the notion of apparent reasons. An apparent reason is a belief in a proposition that, if true, would constitute a reason. Apparent reasons can rationalize actions: this is inherent in Parfit's principle (A) (see Section 1.1). In light of this, a modification of RN to include apparent reasons might seem appealing.

RN* It is rationally permissible for S to use p as a premise in her practical reasoning about whether to F iff either p is a reason for S to F/not-F or p is an apparent reason for S to F/not-F.

While RN* might be true, it is unsuitable to sustain Fassio's challenge, as RN* faces the same problem I have raised for RN. If an apparent reason can be a false proposition, then it is not clear why RN* would have a regulation condition that requires the recognition of p. It does not seem to be the case that one must have an eye on the world to determine whether p is an apparent reason, for the same considerations given previously also apply to apparent reasons. We might need to know the truth-value of the proposition that it is raining to determine whether it is a reason or an apparent reason to take an umbrella. But to know that it is either, we need not have an eye on the world and whether it is actually raining. Therefore, the recognition that p remains an implausible regulation condition of RN*.

One might be tempted to argue that p can only be an apparent reason if it is rational to believe that p, and thus that the regulation conditions of RN* require an eye toward the world, not merely the normative realm. While that seems plausible, it is not a view that is available to Fassio. If p can only be an apparent reason if it is rational to believe that p, then apparent reasons do depend on one's epistemic standing, and thus the satisfaction conditions of RN* would be partly epistemic. Thus, RN* would not be an entirely nonepistemic, purely practical norm.

To sum up: The distinction between norms and regulation conditions is ultimately not sufficient to deny the existence of genuine epistemic norms for practical reasoning. The problem is that it is not plausible that RN has the regulation conditions that would match up with the "But you did not know that p!" criticism. Thus we can reject Fassio's challenge to the existence of epistemic norms for practical reasoning.

16 In the Prologue, I also mentioned a strategy to soften the counterintuitive ring by relying on the excuse-maneuver, though I also mentioned there that this maneuver is itself controversial.

Conclusion

I have argued that there is epistemic encroachment on practical rationality, and I have delineated two ways in which epistemic considerations have a bearing on practical rationality. First, I have explained that epistemic irrationality makes us practically irrational by masking practical reasons that are within cognitive reach. While this does not get us directly to the view that one must satisfy positive epistemic requirements in order for it to be rationally permissible to treat a proposition as a reason, I have argued that the alternative view that denies such positive requirements is not plausible. Hence, the second way the epistemic encroaches on practical rationality is that permissibly treating a proposition as a reason for action requires that certain epistemic conditions be met. I have not stated which epistemic conditions one must meet in order for it to be rationally permissible to treat propositions as reasons, such that actions based on these reasons are practically rational. That will be the central topic of the remainder of Part I.


Chapter 2

Practical Reasoning, Ends, and the End of Hope

In Chapter 1, I defended the assumption that there is an epistemic norm for practical reasoning. Such a norm concerns the question which epistemic condition must be met for it to be rationally permissible to treat p as a reason for action. Let us call this the classical question, as it has shaped the debate about epistemic norms for practical reasoning as I have outlined it in the Prologue. In this chapter, I intend to broaden the debate about epistemic norms by going beyond the classical question and focusing on ends. However, my findings will also be relevant for answering the classical question.

To claim, as I will, that one can fail to be practically rational because of the end that one pursues is guaranteed to cause a bit of a kerfuffle. Many believe Hume to have denied this claim. I will attempt to defend a variant of the controversial claim, though I will put forward a version that will be acceptable to Humeans. To provide some intuitive motivation, consider the difference between Sarah and Anna. For Sarah, it is entirely rational to go to the store if her end is to procure Sichuan pepper and she displays no epistemic irrationality concerning the proposition that the store carries Sichuan pepper. In contrast, Anna has set herself the goal of resurrecting her beloved pet dog Bella from the dead, and she has decided to join a local cult because she believes that this is a way to succeed in her resurrection plan. Setting aside whether this means–end belief is rational, it is intuitive that Anna's act of joining the cult is irrational, in virtue of the very end that she pursues with this action. The aim of this chapter is to explain the difference between Sarah and Anna and the rational status of their actions by providing an epistemic norm that specifies an epistemic condition on which ends one can rationally pursue.
Perhaps surprisingly, I suggest that we can answer this question by answering a certain version of the old Kantian question: What may I hope for?

Here is the itinerary for this chapter. In Section 2.1, I argue that we can approach the question which ends one can rationally pursue by answering the question what one may hope for. In Section 2.2, I argue that the standard condition on rational hope is too weak to properly constrain what one can rationally hope for. In Section 2.3, I give my own account of what one may epistemically hope for, to which knowledge is central. In Section 2.4, I point out that this suggests a novel angle on the knowledge-first program. In Section 2.5, I relate my account of hope back to pursuing ends. In Section 2.6, I argue that the wide variety of ends one can rationally pursue shows that many of the suggested epistemic norms that concern the classical question are overly demanding.

2.1 Ends, Practical Reasoning, and Hope

The key idea behind my argument for epistemic encroachment was that we generally have good reason not to engage in certain actions if we have good reason to believe that these actions will be unsuccessful in achieving our ends. In Chapter 1, I focused on one kind of instance of this phenomenon: means–end relations. One way to have good reason to believe that one’s action will be unsuccessful in achieving one’s end is to have good reason to believe that the means one has chosen, that is, the specific action, is unsuitable to bring about the intended end. However, this key idea can also be instantiated in another way. Actions can fail to be successful in achieving their ends not only because one has chosen unsuitable means but because one is pursuing an end that cannot be obtained. If the intended end is unobtainable, then the action through which one pursues it is guaranteed to be unsuccessful. If it is not rational to act when one has a good reason to believe that one’s action will fail to achieve its intended end, then it is not rational to act in pursuit of an end one has good reason to believe to be unobtainable. It might be said that action in pursuit of an unobtainable end ultimately exhibits the same shortcoming as relying on means that are unsuitable to bring about one’s end. If an end is unobtainable, then any means one chooses to pursue it will be unsuitable to bring it about. While I can agree with this, I contend that acting in pursuit of an unobtainable end exemplifies a specific shortcoming, even though this shortcoming may give rise to a further shortcoming which I have already covered. Consider Anna’s case. Her end is to resurrect her beloved pet dog Bella from the dead. Surely, every means that she could choose would be unsuitable to bring about her end. But I think the problem is not merely that Anna is bound to engage in unsuccessful instrumental reasoning. The problem is that she is practically irrational because of the end she pursues. 
This is the root of the problem that ensures that her instrumental reasoning will be unsuccessful. There seem to be at least two distinct failings of which Anna is guilty. Anna's means–end belief that she stands a chance of resurrecting her dog by joining the local cult might be irrational. But there also seems to be something wrong with pursuing this very end, quite independent of which means she decides to take to pursue it. Anna should simply know that she cannot resurrect her dog, because that is just not possible, no matter how desirable this prospect might be.

The usage of the word "know" suggests that there is an epistemic norm that concerns which ends one can rationally pursue. As pointed out in the Prologue, the knowledge norm for treating propositions as reasons is largely motivated by folk appraisals. We criticize the doctor for failing to know that the needle is sterile and assume that this makes it inappropriate to treat this proposition as a reason to proceed with my flu shot. Similarly, we can criticize Anna for pursuing an end that she should know is unobtainable. Of course, we should be careful in assuming that the specific vocabulary used is indicative of a knowledge norm. But it suffices to motivate the idea that just as there is an epistemic norm on what one may permissibly treat as a reason, there is an epistemic norm on which ends one can rationally pursue.

I hope that readers can agree that Anna's case is different from the cases that incite debates between Humeans and Kantians. There is nothing morally abhorrent about having the end of resurrecting a beloved pet dog. Perhaps it is also useful to mollify worried Humeans by listing what I am not claiming. I make no claim about the rationality of desires. Anna's desire to resurrect her dog might be entirely rational, or arational if one assumes that desires cannot be assessed for rationality. I also make no claim about the rationality of the end itself. Anna's end of resurrecting her dog might be entirely rational.
All that I am saying is that an action or intention to act to actually pursue this end is irrational. That is what I hope everybody can agree on. If there is an epistemic norm on what ends one can rationally pursue, what exactly does this norm demand of us? My strategy to answer this question might raise eyebrows at first: It is to look for conditions of rationally permissible hope. I believe this indirect strategy to be useful because of the following principle. Hope–Action Link (HAL) S’s action ɸ in pursuit of p is rational only if S satisfies the epistemic condition for rationally permissibly hoping that p.
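Put schematically, in notation that is mine rather than the book’s, HAL makes meeting the epistemic condition on permissible hope a merely necessary condition on rational pursuit:

```latex
% Notation (mine): Rat(phi_S, p) = S's action phi in pursuit of p is
% rational; E(S, p) = S satisfies the epistemic condition for rationally
% permissibly hoping that p.
\mathrm{(HAL)}\qquad \mathrm{Rat}(\phi_S,\, p) \;\rightarrow\; E(S,\, p)
```

The conditional runs only left to right: satisfying E(S, p) does not by itself make the pursuit rational.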

Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.002

HAL is based on the underexplored idea that there is a link between rational action and rational hope. But why should one even assume that there is such a link? Because when one acts in pursuit of p, it will in many cases be fitting to say that one hopes to achieve p by ɸ-ing. Therefore, it is not all that surprising that there is a link between conditions of rational action in pursuit of an end, p, and a condition on rationally hoping that p.

However, HAL is a weak link between action and hope. This can best be demonstrated by a comparison with Bobier (2017: 495), who argues that practical reasoning[1] is not possible without hoping. Like me, Bobier holds that hope and practical reasoning are linked via ends. Hope plays a role in setting the ends we then reason about how to obtain. I will provide a summary of Bobier’s argument and then turn to a number of concerns, to show how HAL differs from Bobier’s claim that hope is necessary for practical reasoning.

Bobier’s starting point is what he calls the lowest common denominator account of hope. For S to hope that p: (i) S believes the attainment of p to be possible; (ii) S desires that p; (iii) p is future to S. Bobier assumes that practical reasoning is limited to what an agent believes to be possible for her to attain, what an agent desires, and what is still an open possibility in the future. He argues that none of these conditions for practical reasoning is met when one fails to hope. Accordingly, in any case in which practical reasoning about how to obtain p is possible, one hopes that p. I have voiced my objections to this argument elsewhere,[2] but I shall set them aside here. In any case, Bobier is putting forward a descriptive claim. Allegedly, practical reasoning about how to obtain an end p is only possible if one also hopes that p. I believe this to be false. However, HAL does not entail Bobier’s claim.
HAL is not making the descriptive claim that in any case in which one rationally pursues an end p, one also hopes that p. HAL makes a normative claim. It does not claim that in any case in which one rationally pursues an end p, one must also hope that p, nor does it claim that one rationally hopes that p. HAL merely assumes that if one rationally pursues an end p, then one also meets the condition under which one can rationally hope that p. To rationally hope that p, one must satisfy certain epistemic conditions. For actions in pursuit of certain ends to be rational, one must likewise satisfy certain epistemic conditions. HAL holds that the former is a necessary condition for the latter.

[1] Bobier (2017) uses the term “practical deliberation” instead of “practical reasoning.” I have taken the liberty of changing the terminology for the sake of consistency of usage in this book.
[2] See Mueller (2019).


For now, I have to ask the reader to be patient and to indulge me in my excursus on rational hope over the next two sections. I will then return to Anna’s case and epistemic encroachment.

I want to close this section with some preliminaries about rational hope. I assume that to hope that p can be a rational attitude, by which I mean that hoping that p is rationally permissible (although not required). Those skeptical about whether hope itself can be a rational attitude, in the sense in which belief can be a rational attitude, may want to consider the following option. It could be that hope is permissible in the sense that it leads to no conflicts among one’s other attitudes that are evaluable for rationality. I will write as if hope itself can be a rational attitude, but everything I say can be made compatible with the alternative view just mentioned.

Furthermore, I assume that rational hope requires meeting certain epistemic conditions, so as to distinguish it from mere wishful thinking. Both Martin (2014: 8) and McCormick (2017: 132) share the assumption that rational hope comes with certain epistemic requirements. Unlike them, I shall set aside the question of how these epistemic requirements relate to the practical nature of hope. My topic, then, is what we may epistemically hope for, in the same sense in which my topic is what it is epistemically permissible to treat as a reason for action. Thereby, I follow the standard approach in the debate about epistemic norms. For example, an assertion might be epistemically impeccable yet not rational, as it is imprudent to make it. The same can be said of the epistemic norm for practical reasoning that concerns the classical question, as I explained in the Prologue. One might meet the epistemic condition for treating a proposition as a reason, yet it could be irrational to treat p as a reason to ɸ because p is entirely unrelated to ɸ-ing. Likewise, it may be that I meet the epistemic demands for rationally hoping, but I might still fall short of rationally hoping because I violate other demands on rational hope. Therefore, the epistemic condition on rational hope I will propose is merely a necessary condition on rational hope.

My proposal involves the notion of knowledge. It suggests that knowledge plays an important and ineliminable role in the regulation of our mental lives, thereby vindicating an aspect of the knowledge-first program, as I will explain.

2.2 Problems for the Standard Account of Rational Hope

The contemporary debate about hope focuses heavily on providing a descriptive account of hope. According to the standard account of hope, hoping that p consists in (i) a desire that p and (ii) a belief in the probability of p. These are just the first two conditions of what Bobier calls the lowest common denominator account. The standard account has been criticized by Bovens (1999) and Meirav (2009) as too weak to provide sufficient conditions for hoping that p. This is not my concern. The standard account does not address the normative question under which condition it is rational to hope that p. But a simple modification to the standard account suggests itself for an epistemic condition on rational hope. Take (ii) as a starting point, but assume that it requires rational beliefs about probabilities for the outcome hoped for. I shall call this the standard account of rational hope.

At least on one reading, Day (1969: 93) is a proponent of this account.[3] Day holds that for A to reasonably hope that p, A needs to have a reasonable belief about the probability of p, where this probability is above 0 but below 1. In a similar vein, Martin (2014: 37) holds that the “probability assigned to the hoped-for outcome is governed exclusively by theoretical norms, or considerations of truth approximation.” Setting aside practical constraints on rationally permissible hope, this gives us the following epistemic constraint on rational hope.

Standard Account of Rational Hope (SARH) It is rationally permissible to hope that p only if one can rationally believe that the probability of p is below 1 and above 0.
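Rendered in probabilistic notation (again mine, not Day’s or Martin’s), SARH reads:

```latex
% Notation (mine): Perm(Hope_S p) = it is rationally permissible for S
% to hope that p; Cr = a probability assignment that S can rationally
% believe to be correct.
\mathrm{(SARH)}\qquad \mathrm{Perm}(\mathrm{Hope}_S\, p) \;\rightarrow\; 0 < \mathrm{Cr}(p) < 1
```

Because any rational assignment strictly between 0 and 1 satisfies the right-hand side, SARH licenses lottery hopes however small Cr(p) is.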

In the following, I will raise an issue for SARH and propose a variation of it. I will also consider strategies to save these accounts. I will not rule out that such rescue attempts can be successful; however, I will point out that they require settling other fairly controversial issues in epistemology.

One datum that any account of rational hope should respect is that it can be rationally permissible to hope for p even when the probability assigned to p is very low. Both Day and Martin explicitly agree on this, and SARH allows for it, as any rational probability assignment above 0 will suffice for rational hope. An example is what I call lottery hopes. Given that I hold a ticket for a lottery, it is rationally permissible to hope that my ticket is the winner, even when the probability of having the winning ticket is low.[4]

[3] I say “on one reading” as Day (1969: 100) holds that A has a reasonable hope that p only if “the degree of A’s probability estimate that p corresponds to the degree of probability that p.” This is not equivalent to a demand for rational belief about a probability. I set this complication aside as my argument against the standard account also holds for this condition.
[4] McCormick (2017: 134) denies that it is rational to hope that one’s lottery ticket is the winning ticket. However, her denial is based on the practical nature of hope. Thus I shall set it aside.


Nonetheless, there seem to be limits to what one can rationally hope for, even if the prospect hoped for has a probability above 0. Assume that you hope to have the winning lottery ticket but that you have just read the result of the draw in the newspaper. According to the newspaper, your ticket is not the winning ticket. After reading the newspaper, you know that your ticket is not the winning ticket. In light of this, it no longer seems rational to hope. You would be hoping for something that you know not to be the case. For the moment, I shall set aside why it is irrational to hope in this case; I will return to an explanation later. For now, I simply hope that the reader agrees that it is irrational to hope in this case.

This case is sufficient to raise trouble for the standard account, given the further assumption that our ordinary understanding of knowledge is fallibilist. One can know that p even though one’s evidence does not entail that p. If one’s evidence for p suffices for knowing, but does not entail that p, then one can know that p even when there remain error possibilities. This creates a problem for SARH. If one fallibly knows that one’s ticket is not the winner, it remains possible that one has the winning ticket, as sometimes newspapers contain misprints. If every genuine possibility has some positive probability, then there is still a probability above 0 that one has the winning ticket. If one has a rational belief about this probability, then, according to the standard account, it could still be rational to hope that one has the winning ticket. But this seems absurd. If you know that your ticket is not the winner, then, even if you know only fallibly, it is not rational to hope that your ticket is the winner. I assume that this intuition is quite robust, as there is a general tendency to assume that knowing that not-p and hoping that p cannot be conjoined.
For example, Benton (2018) starts with the datum that assertions such as the following sound bad:

(1) I know that my ticket is not the winning ticket, but I hope that it is.

Given how the case is set up, assuming that one knows that one’s ticket has lost after reading the paper, the problem seems to be with the second conjunct. And the most plausible account of what is wrong with this second conjunct is that it is not rationally permissible to hope in this case. Since this is not necessarily what SARH holds – one can still rationally hold that there is a probability above 0 that one’s ticket did not lose – we have a counterexample to SARH.

However, there are a number of replies that proponents of SARH might want to give. They could endorse the concept of epistemic probability, expounded in Williamson (2000), which is neither subjective nor objective probability, but probability given what one knows. According to Williamson’s proposal, what one knows has epistemic probability 1. Opting for this view would get around the problem, as one could no longer reasonably assign a probability other than 0 to the proposition that my ticket is the winning ticket after reading the paper. I shall set this proposal aside. In Section 2.3, I present a case in which Gettiered beliefs make it impermissible to hope, which suggests that the probability relevant for determining rational hope is not probability given what one knows.

Proponents of SARH could reject fallibilism about knowledge and endorse infallibilism: If one knows that p, then one’s evidence entails that p. Additionally, they will have to hold that if one’s evidence entails that p, one has to assign probability 1 to p, and thus 0 to not-p. Given this combination of views, SARH plus infallibilism, one can avoid the previous case. If one knows that one’s ticket is not the winner, then one has to assign probability 0 to the proposition that one’s ticket is the winner, and thus it would be irrational to hope that one’s ticket is the winner, even on the standard account. In fact, Benton (2018) takes the badness of assertions such as (1) to pose a serious challenge to fallibilism. He argues that infallibilism has no trouble explaining the badness of (1), with which I agree.

While I grant that the combination of SARH plus infallibilism does get around the issue of the impermissible lottery hope, this combination of views cannot account for all cases. There are cases in which one lacks knowledge that not-p, in which it is nonetheless not rationally permissible to hope that p. And there are also cases in which one knows that not-p, and yet it is rationally permissible to hope that p. If all cases of knowledge that not-p require one to believe that not-p has probability 1, then these latter cases, at least, still pose a problem for SARH, even assuming infallibilism.
I will present such cases in Section 2.3. Meanwhile, one should keep in mind that infallibilism is a hefty commitment and is often taken to have skeptical implications. I understand infallibilism to be the thesis that knowing that p requires one to have evidence that entails that p. Perhaps there are some cases of knowledge that p in which our evidence does entail that p. However, that hardly seems to be the case for most of our knowledge, and especially not for the case at hand. One’s evidence consists of the simple newspaper report about the lottery draw. But this evidence certainly does not entail that one’s ticket is not the winner, as it does not entail that there is no misprint. Without passing judgment on infallibilism, one should carefully consider whether saving SARH is worth endorsing infallibilism.


The concession that there can be cases of rational hope that p despite knowing that not-p might motivate the following fallibilist pushback against the intuition the aforementioned lottery case relies on.[5] While one might agree that (1) sounds bad, one might hold that if one adds to it, then the badness disappears.

(2) Yeah, I know that my ticket is not the winning ticket, but you know, my evidence is not conclusive, so I still hope that my ticket is the winning ticket.

While to me (2) does not sound as bad as (1), it still sounds at least a bit odd. However, I gather that to some, this seems to be a perfectly natural thing to say. Thus, one might say that there is really nothing rationally impermissible that occurs in the lottery case, and therefore that the case does not really amount to a counterexample to SARH, even if one holds a fallibilist account of knowledge.

But there is still a way to argue that hoping is rationally impermissible that goes beyond appeal to intuitions and what sounds odd or bad to say. So I will now make explicit why it is rationally impermissible to hope that one’s ticket is the winner after having read the paper and coming to know that one’s ticket is not the winner. The problem is that these two attitudes rationally commit one to a further combination of mental states that is incoherent. Hoping that p rationally commits one to holding that it might be that p, as by hoping that p, one assigns a probability above 0 to p. However, by holding that one knows that not-p, due to the factivity of knowledge, one is also rationally committed to holding that not-p. Thus the combination of hoping that p and taking oneself to know rationally commits one to holding “not-p, but it might be that p.” But such thoughts, for example, “it’s not raining, but it might be raining,” always express a rational tension or are, simply put, incoherent.[6] I assume that two individually rational states must be rationally cotenable, that is, they, and what they rationally commit one to, must be coherent. Assuming that taking oneself to know is in this case entirely rationally permissible, it must be that hoping that p is not rationally permissible.

Strictly speaking, hoping against what one knows need not always be impermissible; at least, that is not what the example suggests, and, as said previously, I will turn to cases in which hoping that p is rationally compatible with knowing that not-p. What is rationally impermissible is to hold that one knows that not-p, and simultaneously to hope that p. But this still puts pressure on SARH, as holding that one knows that not-p, assuming fallibilism, is compatible with assigning p a positive probability.

[5] Thanks to Andrew Chignell for pushing me to address this objection.
[6] Reed (2013: 57) holds that thoughts in accordance with this schema “reflect a defective state of mind.” See also Yalcin (2007: 987), who writes that “an invitation to suppose such a conjunction strikes us as unintelligible.”

Finally, the proponent of the standard account might reasonably dispute that every genuine possibility has a positive probability. While this would get around the lottery case as I have presented it, it highlights that SARH runs into problems with infinite lotteries. Suppose that there is a lottery that lets you put one natural number of the infinite set of natural numbers on your ticket. A machine will then randomly generate a natural number. If the number generated matches the number on your ticket, you win a million dollars. For such an infinite lottery, the probability of having a winning ticket is 0. If one has a rational belief that the probability is 0, then, according to the standard account, it is not rational to hope that one’s ticket is the winning ticket even prior to the draw. But that also seems absurd. While the probability of having the winning ticket is 0, there is nonetheless a genuine possibility that one has the winning ticket. If one has a rational belief that there is such a possibility, then it seems entirely rational to hope that one’s ticket is the winning ticket, even though the probability of having the winning ticket is 0. This suggests that SARH, which demands that the probability of the hoped-for prospect is above 0, is on the wrong track.

In light of the infinite lottery case, one might abandon probability in favor of possibility. For example, Chignell (2013) endorses the following epistemic constraint on rationally permissible hope.

Chignell’s Account (ChiA) S’s hope that p is rational only if S is not in a position to be certain that p is really impossible.[7]
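The infinite-lottery point can be made precise with a standard observation about countably additive probability; this gloss is mine, not part of the text. No uniform distribution over the natural numbers can assign each ticket a positive probability:

```latex
% Suppose each ticket n in the natural numbers gets the same
% probability eps >= 0. Countable additivity requires the singleton
% probabilities to sum to 1:
\sum_{n \in \mathbb{N}} \Pr(\{n\}) \;=\; \sum_{n \in \mathbb{N}} \varepsilon \;=\; 1.
% If eps > 0, the sum diverges; if eps = 0, it equals 0. So a uniform
% assignment forces Pr({n}) = 0 for every n, even though each outcome
% remains a genuine possibility.
```

This is why the probability of holding the winning ticket is 0 while winning remains genuinely possible, which is exactly the combination that embarrasses SARH.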

The notion of “really possible/impossible” seems to be identified with the notion of metaphysical possibility. Chignell (2013: 206) writes that “If we are certain . . . that p is metaphysically impossible, then we can’t reasonably hope that p.” But if impossibility is understood as metaphysical impossibility, then, given the standard understanding of metaphysical impossibility, ChiA runs into the same problem as SARH concerning lottery hopes. As assumed earlier, it is rational to hope that one’s ticket is the winning ticket prior to the draw, but not after coming to know that the result of the draw is that one’s ticket is not the winning ticket. But surely, even after coming to know this, it is metaphysically possible that my ticket is the winning ticket. According to the ordinary understanding of metaphysical possibility, p is metaphysically possible when p holds in at least one possible world. But surely, there is a possible world in which my ticket won the lottery, even though this world is not the actual world. If it is metaphysically possible that my ticket is the winning ticket, then, if I am aware of the relevant metaphysics, I am definitely not certain that p is impossible, as I in fact hold that p is possible. But nonetheless, it remains irrational to hope that my ticket is the winning ticket, as this goes against what I know. So Chignell’s account is too weak. Simply not being in a position to be certain that p is impossible does not suffice for rationally hoping if one knows that, in the actual world, the possibility that p does not obtain.

In light of this, one could change the interpretation of impossibility from metaphysical impossibility to epistemic impossibility. While I believe this could get around the problem I have raised, I think that anybody drawn to ChiA should not feel too comfortable. My main worry is that the term “epistemic possibility” is not sufficiently clear, in the sense that there is a lively ongoing debate about how to interpret it.[8] To see whether the principle so interpreted works would require determining what the best account of “epistemic possibility” is, and checking whether this account provides sensible results when we substitute it for the relevant notion of possibility in ChiA. This is certainly beyond the scope of this work.

Let us sum up where we stand. I have raised a problem for SARH, which one might simply call hoping against what one knows. It seems that SARH must grant the permissibility of such hopes, given that the outcome hoped for can be believed to have a probability above 0 of occurring.

[7] See Chignell (2013: 206).
The defender of SARH might resort to assuming infallibilism about knowledge to resist my case. While this defense works, infallibilism is quite a hefty commitment. Additionally, as I will point out in Section 2.3, there are further troublesome cases for this combination of views. The infinite lottery case I have given might suggest that we should shy away from trying to capture rationally permissible hope in terms of probability, and should rather depict it in terms of possibility, where our best chance of doing so seems to be in terms of epistemic possibilities. However, there is a lively ongoing debate about epistemic modals. While we might be able to save SARH, or a close relative of it, ChiA, doing so would require settling controversial debates about fallibilism versus infallibilism or settling on the correct account of epistemic possibilities.

[8] See Egan and Weatherson (2011), to mention just one entry point to this still ongoing debate.


Those who are interested in an account of the epistemic condition of rationally permissible hope might be happy to hear that I have an alternative account to offer, which does not require solving any of the controversies mentioned here. I am quite open to the possibility that modifications of the accounts mentioned might, in the end, be extensionally equivalent to my own. We might all be climbing the same mountain, just from different sides. Meanwhile, however, I think we should take my alternative account seriously, even if other accounts may be vindicated in the future.

Before I turn to my own account, I want to point out that the problem I raised applies not only to lotteries. Similar to the lottery case, it is no longer rational to hope that I will get the job I applied for after receiving a rejection letter, even though it remains possible that there was a mix-up and I was actually the successful candidate. In both cases, the lottery and the job application, while it is possible that subjects still hope, their hopes are not instances of rational hope. Hence, the problem I raised applies to a wider range of cases.

2.3 The Knowledge–Hope Account

My general idea is to capture the epistemic condition on rational hope by contrasting it with the epistemic position toward the negation of the prospect hoped for, taking the relevant epistemic position to be knowledge. The details will emerge as I develop my account in three steps. I take this piecemeal approach because it seems to be the best way to show that we really do need the complicated final account I will give. I accept that my final account is quite verbose at first sight, but I think that developing it step by step can convince the reader that this is necessary to deal with the full range of cases.

Let us return to the lottery example. After reading the newspaper, one knows the proposition that one’s ticket is not the winning ticket, which I shall abbreviate as “not-w.” Such knowledge is compatible with the possibility of misprints and a remaining probability above 0 for the proposition that one’s ticket is the winning ticket, which I shall abbreviate as “w.” But before reading the newspaper, it was rational to hope that w, because one did not know that not-w, as it is commonly held that we cannot know such lottery propositions. The case suggests the following epistemic condition on rational hope:

First try (FT) One satisfies the epistemic condition for rationally permissibly hoping that p iff one does not know that not-p.


FT captures that it is rational to hope that w before reading the newspaper, as one does not yet know that not-w, but also that it is no longer rational to hope after reading the newspaper, as then one knows that not-w. However, there are cases that suggest that FT is too weak to distinguish rational hope from irrational wishful thinking. The problem is that it is possible to fail to know that not-w in circumstances in which it is nonetheless not rationally permissible to hope that w.

One way to fail to know that not-w is by lacking a belief that not-w, perhaps because one is psychologically unable to believe that not-w. Suppose that I am suffering from a psychological disorder that makes it impossible for me to believe that not-w, no matter how much evidence I might receive suggesting that not-w is the case. For example, after reading the paper, I declare, “Fake news again! They’re claiming that my ticket did not win!” In this case, I would fail to know that not-w, because I do not believe that not-w. Yet it would still be irrational to hope that w after reading the newspaper that reports that not-w.

Another way to fail to know that not-w is by ignoring evidence that is readily available, which can happen both voluntarily and involuntarily. Suppose that I am aware that the newspaper with the results of the lottery draw is on my kitchen table, but I ignore this evidence and thus fail to know that not-w. I do so because I cannot bear knowing that not-w. Whether I avoid the relevant evidence voluntarily or involuntarily, it seems irrational in both cases to hope that w, because there is evidence available that would allow me to know that not-w. To remedy these issues, we could replace FT with the following:

Second try (ST) One satisfies the epistemic condition for rationally permissibly hoping that p iff one is not in a position to know that not-p.

I suggest that we understand the notion of “being in a position to know that not-p” as sensitive only to epistemic factors, not to psychological ones. Thus understood, the notion does not require that one satisfies the psychological demands of knowing, that is, having the relevant belief. At a first pass, we can say that whether one is in a position to know depends on the evidence available. For current purposes, I will rely on an intuitive understanding of the notion of availability. When I am at my house and the newspaper is on the kitchen table, this is available evidence. If I was kidnapped last night, the paper now on my kitchen table is not available evidence.[9]

[9] “Available” should also be read as having a temporal component. I must have had some time to process the evidence, as the mere presence of the newspaper on the kitchen table should not make my hope that w irrational immediately after I wake up and before I have had a chance to read the newspaper.


ST gets the cases right that were troublesome for FT. Even if a psychological disorder prevents me from forming a belief that not-w, it is still true that I am in a position to know not-w, as being in a position to know does not require having this belief. Therefore, I do not rationally hope that w. ST can also handle cases of ignored evidence. If the newspaper containing the results is on the kitchen table, even if I ignore it, then I am still in a position to know that not-w. Therefore, it is not rational to hope that w. But ST has to face two further problems. The first one is Gettier cases. Assume that I have read the paper with the results of the lottery draw and come to believe that not-w, as the number of the winning ticket mentioned in the paper does not match the number of my ticket. However, the editor responsible for the lottery results at the paper made a mistake, posting the winning number from last week, not this week. However, by coincidence, the winning number of last week’s lottery ticket and this week’s are identical. While I end up having a justified true belief that not-w, I am not in a position to know that not-w, since I am in a Gettier case. But it would seem irrational to continue to hope that w after reading the paper, although I satisfy the condition suggested by ST. The problem is that it should seem to me that I know that not-w. My situation in the Gettier scenario is indistinguishable from my perspective from the original case in which nothing unusual happened with the newspaper and in which I know that my ticket is not the winning ticket. If I am irrational in hoping that w in the original case after reading the paper, then in the Gettier case, which is for me indistinguishable from the original case, it should also be irrational to hope w after reading the paper.10 This mirrors our reasoning about Case G pressed against the RKP in the Prologue. There is a similar problem for ST due to falsehoods. 
Clearly, one is not in a position to know that not-p when p is true. Assume that there actually is a misprint in the paper and that this is my luckiest day yet: My ticket is the winning ticket. Since w is true, I am not in a position to know that not-w. But this luck does not make it rational to hope that w, for the same reason expounded in the Gettier case previously. The problem is that even in this case, it should seem to me that I know that not-w after reading the paper. My situation in this case is still for me indistinguishable from the original case in which there was no misprint, in which I was not extremely lucky, and in which I know that not-w. If I am irrational in hoping that w in the original case after reading the paper, then in the extremely lucky case, which is for me indistinguishable from the original case, it should also be irrational to hope that w after reading the paper. Based on the observation that it matters what one's impression of one's epistemic standing is, the following is my final suggestion for a necessary condition on rationally hoping:

Knowledge–Hope Account (KHA) S satisfies the epistemic condition for rationally permissibly hoping that p iff S is not justified in believing that S is in a position to know that not-p.

10 Externalists about rationality might want to deny that one must be equally rational or irrational in indistinguishable scenarios. Such externalists will feel no pressure to abandon ST from the Gettier case or the other case I will turn to next. An externalist can hold on to ST and endorse it instead of my final knowledge–hope account (KHA). However, the essential conclusions I derive from KHA in Section 2.6 can also be derived from ST and modified premises based on KHA.

The notion of justification to believe that KHA employs is propositional justification; thus it poses no actual psychological demands on a subject to have higher-order beliefs. I assume that one can gain propositional justification for beliefs about one's epistemic states by reflection. In the Gettier case given previously, one can arrive at a justified belief that one is in a position to know that not-w by reflection on one's evidence, even though one errs. Thus, in the Gettier example, one is propositionally justified in believing that one is in a position to know that not-w, even when one does not form the corresponding higher-order belief. Therefore, KHA captures that it is irrational to hope that w in the given Gettier case. The same holds for the extremely lucky case. Even here, one can arrive at a justified belief that one is in a position to know that not-w. Therefore, KHA captures that it is irrational to hope in the extremely lucky case.

KHA also allows that it is rational to hope that p when it fails to be luminous that one knows not-p, given the understanding of luminosity presented in Williamson (2000). In such a case, one knows, but is not in a position to know that one knows, as it seems that one could just as well not know not-p. In such a case, one cannot come to be justified in believing that one is in a position to know not-p by reflection. Thus it remains rationally permissible to hope that p.

Cases of luminosity failure are the reason why one should doubt that one can save SARH by combining it with infallibilism about knowledge. As I pointed out in Section 2.2, one might hold that if one infallibly knows that p, then one's evidence entails that p, and hence it is not rational to assign any probability other than 0 to not-p. This line of reasoning might be successful in the lottery case I have presented, but cases of luminosity failure show that this combination of views cannot capture all the relevant cases.
If it is a general feature of knowledge that p that one cannot rationally assign a probability other than 0 to not-p, then one cannot account for the rationality of hoping that p when one's knowledge that not-p is nonluminous.11

KHA also captures all the cases we have considered before. In the original case, after reading the newspaper I am justified in believing that I am in a position to know that not-w, as I simply know that not-w, and, if I were to reflect on this, I could justifiably form the corresponding belief. Thus it is not rational to hope that w. If I ignore the evidence available to me provided by the paper, this evidence still puts me in a position to know and to be justified in believing that I am in a position to know, even though I form neither a first-order nor a higher-order belief. Thus it is irrational to hope that w. The same can be said about cases in which I may be psychologically unable to react properly to the evidence.

KHA also captures that it can be rational to hope that w prior to reading the paper. Before I read the paper, I am not justified in believing that I am in a position to know that not-w. I am not justified because we generally do not hold that one can know not-w based merely on statistical considerations. In the standard case, quite independently of one's possible reflection, one's evidence could not justify one in believing that one is in a position to know that not-w. Thus it is rational to hope that w. For this reason, KHA also captures the intuition that it can be rational to hope that p even when p has a very low probability. Even if I have no evidence for p, or evidence that suggests that p is highly improbable, that does not mean that I have evidence for not-p. For example, it is very improbable that one will draw a red ball from a jar that contains 999,999 black balls and just one red ball. But still, this is not evidence that the ball to be drawn will not be red, and thus I do not have justification to believe that I am in a position to know that the red ball will not be drawn. Consequently, it is rationally permissible to hope that the red ball will be drawn even if this is very unlikely.

11 Cases of nonluminous knowledge pose a general problem for arguments for infallibilism found in Benton (2018), as his main argument is that there is a tension between knowing that not-p and hoping that p. Nonluminous cases suggest that this observation is not fully general. In a case of nonluminous knowledge, there seems to be no tension between knowing that not-p and hoping that p. Therefore, the case for infallibilism via hope might not be sustainable. But since I am not concerned with settling the fallibilism versus infallibilism debate, I will not further press this point. Generally, though, fallibilists can avail themselves of the diagnosis I offered in Section 2.2 to explain the oddity of utterances such as (1): "I know that my ticket is not the winning ticket, but I hope that it is." The two conjuncts of (1) are not rationally cotenable.


Practical Reasoning, Ends, and the End of Hope


Since KHA can account for a wide variety of cases of rationally permissible or impermissible hope, KHA should be taken seriously as an answer to the question of what one may epistemically hope for.12

2.4 How KHA Puts Knowledge First

I now turn to my second goal: arguing that KHA hints at a new approach to the knowledge-first program. Part of the current knowledge-first program is to argue that knowledge plays an important normative role. For example, knowledge is said to be the norm of assertion (Williamson (2000)). The idea is that what one actually knows determines what it is permissible to assert. Following Jackson (2012), I will call this the determination thesis. KHA suggests that knowledge does play an important normative role, but one that differs from the determination thesis. To make my case, I want to draw attention to the fact that we cannot just replace "knowledge" with "justified belief" in KHA, as debates about the determination thesis have often turned into "knowledge versus justification" debates. Assume that those skeptical about the knowledge-first program offer a competitor to KHA that centers around justification:

Justification–Hope Account (JHA) One satisfies the epistemic condition for rationally permissibly hoping that p iff one is not justified in believing that one is in a position to justifiably believe that not-p.

JHA cannot account for the fact that it is rationally permissible to hope that w prior to reading the paper. Even prior to the draw, one is in a position to have a justified belief that not-w, at least if one accepts the standard assumption that one is justified in believing that one's ticket is not the winner.13 Yet, even if one is justified in believing not-w, to hope that w seems rational. I shall leave it as an open challenge to opponents to find a better alternative to JHA that eliminates the notion of knowledge. The challenge to be addressed is that it is permissible to hope that p even when one is not justified in believing that p, but also when one might be justified in believing that not-p, as in the given lottery case. KHA finds the right balance here – it is not clear that a justification-based account can do so too. If one cannot replace "knowledge" with "justified belief" without getting the simplest case wrong – that is, hoping that one's lottery ticket is the winning ticket prior to reading the paper – the recourse to the notion of knowledge in KHA seems uneliminable.

In which way does that put knowledge first? KHA is not a determination thesis. It does not say that what one actually knows determines what it is rationally permissible to hope for. As KHA was modeled to accommodate Gettier cases, it holds that it is not only what one actually knows that determines what it is rationally permissible to hope for. While KHA is not a classical determination thesis, we can still say that it assigns to knowledge or, more specifically, the concept of knowledge, a central role in determining what it is rationally permissible to hope for. It is not only what we actually know but also what we appear to know or not to know that determines what we can rationally hope for. This suggests that the concept of knowledge plays an important and uneliminable role in our thinking about what we can rationally hope for and thus in regulating an important part of our mental lives. This is how KHA puts knowledge "first."

12 The possibility of the sensitivity of knowledge to practical factors makes KHA even more flexible. This could be used to deal with the following concern that I have encountered several times. Suppose that a loved one has gone missing for several years and there is absolutely no sign that they could still be alive. Is it nonetheless not rationally permissible to hope that they will return, even when one might be justified in believing that one knows they will not return? If the sensitivity of knowledge to practical factors makes it harder to know, then it could be harder to know that the loved one will not return, and it could still be rationally permissible to hope for the return, according to KHA. An alternative response to this worry is simply to deny that it is rational to hope in this case. This tough response could be backed up with the thought that sometimes one must face harsh realities. But facing the truth is nonetheless more rational than clinging to false hope.

13 This is a congenial assumption for critics of the knowledge-first movement such as McGlynn (2013), who appeals to justified beliefs in such lottery propositions to argue against some tenets of the knowledge-first movement. The denial of this assumption is usually associated with knowledge-first views of justification. But on these views, it will automatically be true that there is a link between knowledge and hope, because they assume a link between justification and knowledge. Hence, we can safely set aside this option for our present concerns.

2.5 The Rational Pursuit of Ends and the End of Hope

Given our fallibilist leanings, knowledge and justified beliefs about what we know are compatible with the possibility of error. But these remaining possibilities of error are insufficient for rational hope. I do not think this is easy to swallow. But I do think this is something that we ought to accept. If we know that we will not get a job or that we simply did not win the lottery, there remains the possibility that we are wrong about this. But we should not cling to these possibilities, as this would be to engage in wishful thinking that can be potentially harmful. Sometimes, hope ought to come to an end. Given the motivational force often associated with hope (see Martin (2011)), we might very well waste valuable time and effort


chasing the unobtainable, while we are missing out on other opportunities that are just as desirable as the objects of our original hopes.14

This final pointer on the danger of hoping for the unobtainable brings us back to the question of which epistemic conditions must be in place such that one can rationally pursue an end. The observation that motivated the question was that acting in pursuit of unobtainable ends, at least when they are known or should be known to be unobtainable, is not rational. As I have just pointed out, it is also not rationally permissible to hope that p when one has excellent reason to believe that one knows that not-p. This supports the principle HAL that I introduced in Section 2.1. And if we add KHA, this gives us a new principle that answers the question I was seeking to answer. To be perspicuous, here are the two principles HAL and KHA again.

Hope–Action Link (HAL) S's action ɸ in pursuit of p is rational only if S satisfies the epistemic condition for rationally permissibly hoping that p.

Knowledge–Hope Account (KHA) S satisfies the epistemic condition for rationally permissibly hoping that p iff S is not justified in believing that S is in a position to know that not-p.

These two principles entail a third principle.

Rational End Pursuit (REP) S's action ɸ in pursuit of p is rational only if S is not justified in believing that S is in a position to know that not-p.
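Schematically, the entailment is a single chaining step: HAL says rational pursuit requires the epistemic condition for hope, and KHA identifies that condition with the absence of the relevant justification. As an illustration only (the formalization and the predicate names `Rat`, `Epi`, and `JPK` are mine, not the author's), the derivation can be checked mechanically in Lean:

```lean
-- A minimal propositional sketch of the HAL + KHA ⊢ REP entailment.
-- Rat φ p : S's action φ in pursuit of p is rational
-- Epi p   : S satisfies the epistemic condition for rationally hoping that p
-- JPK p   : S is justified in believing S is in a position to know that not-p
theorem rep {Act Goal : Type}
    (Rat : Act → Goal → Prop)
    (Epi : Goal → Prop)
    (JPK : Goal → Prop)
    (HAL : ∀ φ p, Rat φ p → Epi p)    -- Hope–Action Link
    (KHA : ∀ p, Epi p ↔ ¬ JPK p) :    -- Knowledge–Hope Account
    ∀ φ p, Rat φ p → ¬ JPK p :=       -- Rational End Pursuit
  fun φ p h => (KHA p).mp (HAL φ p h)
```

The proof body is just composition: a witness of rational pursuit yields the epistemic condition via HAL, and the left-to-right direction of KHA converts this into the absence of justification to believe one is in a position to know that not-p.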

REP can explain why Anna's action of joining the local cult in pursuit of resurrecting her dog Bella is not rational. Anna, like all of us, has excellent reasons to believe that she will not resurrect her dog, as there is no actual phenomenon of resurrection. At least that is what the evidence that we all share, including Anna, suggests. This evidence puts her in a position to know that she will not resurrect Bella. And she is also propositionally justified in believing that she is in a position to know this, as this is what reflecting properly on her evidence would suggest. Anna does not satisfy the condition of rational action imposed by REP; thus her action in pursuit of the end of resurrecting Bella is not rational.

REP suggests that epistemic encroachment on practical rationality goes further than suggested in Chapter 1. It is not merely possible that relying on irrational beliefs will lead to actions that go against practical reasons. It is not merely true that we must satisfy certain epistemic requirements in order for it to be permissible to treat a proposition as a reason for action. Epistemic considerations have a bearing on which ends we can rationally pursue, as the condition REP imposes is epistemic.

However, it should be noted that the condition REP imposes on us is modest. While REP rules out the rational pursuit of some ends, it actually rules out very little. It does rule out planning to resurrect one's dead dog. But it does not rule out building a spaceship to fly to Mars or looking for a cure for cancer. We currently do not have evidence that puts us in a position to know that we cannot fly to Mars or that cancer is incurable. In fact, for all we know, these things are genuinely possible. And that is why we can rationally pursue these ends even if our chances of success may be slim.

Finally, a few words on the relation of hope and practical reasoning are in order to avoid confusion. While my argument started with hope, this starting point is not meant to indicate any priority relation. I am not claiming that rational hope is a precondition for rational action. My turn to rational hope was motivated by the assumption that this might be a fruitful heuristic in finding out which ends one can rationally pursue. It seemed that if one cannot rationally hope for p, then one's action in pursuit of p can also not be rational. But this heuristic does not indicate that rational hope is a requirement for the rational pursuit of ends. All that my argument suggests is that both rational hope and which ends one can rationally pursue are governed by the very same epistemic condition, which is found in both KHA and REP.

Since REP is an epistemic norm, just as KHA is an epistemic norm, one can extend the argument in favor of the knowledge-first account given in Section 2.4. While REP is not a classical determination thesis, we can still say that it assigns to knowledge or, more specifically, the concept of knowledge, a central role in determining which ends one can rationally pursue. It is not only what we actually know but also what we appear to know or not to know that determines which ends we can rationally pursue. This suggests that the concept of knowledge plays an important and uneliminable role in our thinking about what ends we can rationally pursue. It plays an important part in regulating our mental lives, as it influences whether we engage in further practical reasoning that aims at finding suitable means to achieve those ends. This is how REP puts knowledge "first."

14 This holds independently of whether hoping comes with some extra-motivational force that is not captured by the desire component of hope, or whether the motivational force of hope is exhausted by the desire component.


2.6 Returning to the Classical Question

I shall now explain how REP might have a bearing on the classical question the epistemic norms debate aims to address. In the Prologue, I introduced the knowledge norm for practical reasoning, which claims that knowledge is both necessary and sufficient for it to be permissible to treat a proposition as a reason for action. I have introduced several counterexamples against the necessity direction of the knowledge norm: cases involving false beliefs, Gettiered beliefs, or partial beliefs. As I said then, I will not engage in a debate about the knowledge norm and potential rescue maneuvers. Instead, I want to address some alternative principles that are reactions to the shortcomings of the knowledge norm. I will argue that all these principles suffer from a common problem: mundane action. Fantl and McGrath (2009: 125) derive the following principle, intended to replace Hawthorne and Stanley's (2008) RKP, by what they call a subtraction argument.

RJ It is appropriate to treat the proposition that p as a reason for acting iff you are justified in believing that p.15

The subtraction argument in favor of RJ takes the knowledge norm as a starting point. Fantl and McGrath (2009: 96) then take the counterexamples that involve false and Gettiered beliefs to give us conditions that can be subtracted from the knowledge norm. Cases in which false justified beliefs suffice to treat the proposition believed as a reason for action suggest that truth is not a necessary condition for treating propositions as reasons for action. Likewise for Gettier conditions. Cases in which a Gettiered belief suffices to treat the proposition believed as a reason for action suggest that the absence of Gettier conditions is not a necessary condition for treating propositions as reasons for action. Therefore, we can subtract both truth and the absence of Gettier conditions from knowledge, which leaves us with justified belief as the norm for practical reasoning.16 However, for other purposes, Fantl and McGrath (2009: 98) argue for the following claim:

Equivalence Thesis (ET) P is knowledge-level justified for you iff you are justified in believing that p.

15 Fantl and McGrath (2009: 99) argue in favor of a slightly different principle, JJ. However, the differences between RJ and JJ are not significant for my purposes. I choose to mention RJ because it allows for a clearer comparison to other principles.

16 Fantl and McGrath also argue that one can drop belief, so the relevant notion of justification is propositional justification, which one can possess for a belief even though one does not form the corresponding belief.

Since the term will become important in later chapters as well, I would like to elaborate on what knowledge-level justification is. It is the degree of justification that is necessary in order for a belief to potentially amount to knowledge, such that shortcomings in strength of justification do not stand in the way of knowing. Having knowledge-level justification does not entail having knowledge; it is compatible with having a false or Gettiered belief. For current purposes, I will accept ET. The conjunction of ET and RJ entails that knowledge-level justification is the epistemic condition for it to be appropriate to treat p as a reason. Before turning to my critique of Fantl and McGrath's proposal, I want to introduce another suggested replacement for the knowledge norm. Neta (2009: 686) and Smithies (2012: 270) have argued for variants of the following principle:

JBK-Reasons Principle (JBK) It is rationally permissible for S to treat p as a reason for action iff S is justified in believing that S knows that p.

Neta and Smithies differ as to what the relevant notion of justification is. While Neta holds that it is doxastic justification to believe that one knows (Neta 2009: 696), which thereby requires one to have higher-order beliefs, Smithies opts for propositional justification, which poses no such requirements on agents (Smithies 2012: 271). While I have my doubts about the doxastic reading, I will set them aside.17 The problem is not merely that such a reading of JBK is doxastically too demanding. Both readings are epistemically too demanding. And the same holds for Fantl and McGrath's principle, RJ. Both RJ, in combination with ET, and JBK run into trouble with what I call mundane cases. The following case, taken from Gerken (2017: 145), illustrates this.

Kickoff S believes that the game has started. But the only basis for her belief is that she vaguely remembers a stranger telling her the time of the kickoff in the bar the night before. But both S and the testifier were tipsy, and the fellow didn't seem all that reliable anyhow. S treats the proposition that the game has started as a reason to turn on the TV, as she is just looking for some distraction from her hangover.

17 In a nutshell, my worry is that we rarely ever have higher-order beliefs about which epistemic position we have toward propositions, and especially not for all propositions that we treat as reasons. Therefore, a doxastic reading of JBK seems at least descriptively inadequate.

S has at best a weakly justified belief, but certainly not a knowledge-level justified belief. So S does not satisfy the demand proposed by RJ. The same holds for the demands of JBK. S's evidence is weak; she is not in a position to know, and it should not even seem to S that she is. Thus she is certainly not justified in believing that she knows. So S does not satisfy the demands proposed by JBK. Yet the belief that the game has started seems sufficiently justified to be treated as a reason for S to turn on the TV, assuming that absolutely nothing hangs on this. So both principles, RJ and JBK, face intuitive counterexamples. Just like the knowledge norm, their demands seem too high to serve as a necessary condition on what one can permissibly treat as a reason for action.18

I think Kickoff alerts us to a significant fact that the debate about epistemic norms for practical reasoning has overlooked. Going back to the Prologue, Hawthorne and Stanley argue for the knowledge norm predominantly with examples that concern raised stakes for action – the doctor with the needle, the sous-chef under pressure to take the cake out, the parent at the playground. In such situations with elevated stakes, it seems very natural to say that the subjects should not act unless they know, and would be acting in rationally impermissible ways if they did otherwise. However, there are plenty of mundane situations in our lives just like Kickoff, where knowledge just seems overly demanding; and similarly for alternative proposals like RJ or JBK. This problem seems to be quite unsurprising if we go back to REP. In Section 2.5, I pointed out that REP is compatible with pursuing ends that are quite unlikely to be achieved. This suggests that REP really is a fairly minimal restriction on ends, which seems fitting. We pursue a wide variety of ends in our daily lives.
Some of them are momentous – for example, looking for a cure for cancer – but many others are quite mundane, like having some distraction while recovering from a hangover. Obviously, these ends vary in importance. But then it should be equally obvious that the epistemic demands for treating propositions as reasons for action in pursuit of those ends can vary. An epistemic norm for practical reasoning that aspires to be a norm for all practical reasoning must be flexible enough to capture all the practical situations in which we take others and ourselves to act rationally. However, the suggested norms, whether the knowledge norm or alternatives like RJ or JBK, allow for absolutely no flexibility. They seem to be on the right track when we consider cases in which the stakes are elevated, in which the ends pursued are very important and thus it is fitting that the epistemic demands are high. But these proposals seem unsuitable if we look at mundane cases like Kickoff. Given that none of the suggestions considered so far allows for much flexibility, we must look for more flexible proposals to answer the classical question. In the next chapter, I turn to such proposals: those that hold that the epistemic demands one must meet vary with context.

18 Another proposal for which Kickoff is troublesome can be found in Littlejohn (2009). Littlejohn argues that justification to believe p is necessary for it to be appropriate to treat p as a reason. However, the notion of justification Littlejohn employs is factive: One is justified in believing p only if p is true. Littlejohn (2012, 2014) sets out a case for factive justification based on mistaken normative beliefs, highlighting many important connections between justification in epistemology and in ethics. I cannot do justice to these complex arguments here. But, at least prima facie, it seems that in Kickoff, one's justification suffices to treat the relevant proposition as a reason quite independently of whether it is true or not.


Chapter 3

Contexts, Costs, and Benefits

For a great number of questions, the answer "that depends on the context" will be a disappointing one. It is an answer that is usually not illuminating and bound to lead to further questions, even if true. I expect things are no different when it comes to epistemic norms for practical reasoning. If one inquires about which epistemic conditions must be fulfilled in order for it to be rationally permissible to treat a proposition as a reason, the answer "that depends on the context" will not be very satisfying. It prompts further questions, such as what matters in a context, why it matters, and how the context-dependent answer to the original question is ultimately derived. Chapter 2 in some sense laid the foundation, and might have prepared (or alarmed) the reader for the expectation that I will give the "that depends on the context" answer to the classical question about epistemic norms for practical reasoning. Hopefully, by the end of this chapter, this answer will not leave the reader disappointed after all, and all further questions will have been answered satisfactorily.

In Chapter 2, Section 2.6, I pointed out that given the wide variety of ends we can rationally pursue, which is matched by an equally wide range in their importance, it is only fitting that rationality does not demand that we pursue all these ends with equal stringency and rigor. I have already demonstrated that many classical proposals seem overly demanding when we consider how mundane some of our ends are. In this chapter, I will argue that contextualist proposals are better equipped to account for a large variety of practical reasoning situations. While I will not provide a deductive argument in favor of a context-sensitive proposal, the reason why we should endorse a contextualist proposal is extensional correctness: It can reach plausible verdicts in a wider range of cases than noncontextualist proposals.
The contextualist account I will propose owes an intellectual debt to current contextualist accounts of epistemic norms for practical reasoning found in Brown (2008), Gerken (2011), and Locke (2015). In Section 3.1, I explain how these proposals are motivated and put forward three questions that will shape the following discussion. In Sections 3.2, 3.3, and 3.4, I discuss the proposals of Brown, Gerken, and Locke in turn. The most pressing issue for current contextualist accounts is what I call the incompleteness problem: the question of how context determines the degree of justification a context calls for. In Section 3.5, I develop a solution to the incompleteness problem that helps to answer all the follow-up questions that contextualism usually gives rise to, and, finally, I point to a context-invariant principle that will become significant in Chapter 5.

Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.003

3.1 Introducing Context-Sensitive Epistemic Norms for Practical Reasoning

It will be helpful to introduce some terminology to capture the difference between two opposing camps in the debate about epistemic norms for practical reasoning. I will reuse two labels that are already established in epistemology – invariantism and contextualism. These labels refer to two opposing theories about the meaning of "knows." Contextualists claim that the meaning of "knows" varies from context to context, while invariantists hold that "knows" has a single, context-invariant meaning. To avoid confusion, I want to be explicit that I apply these labels to theses about epistemic norms for practical reasoning, not to theses about "knows." It is important to keep these topics separate. Contextualism about "knows" does not imply contextualism about an epistemic norm for practical reasoning, nor does contextualism about epistemic norms imply contextualism about "knows."

We can distinguish between invariantist and contextualist proposals for an epistemic norm for practical reasoning. A proposal is invariantist if it claims that there is a single epistemic status that, relative to all possible contexts, makes it rationally permissible to treat p as a reason. One example of an invariantist norm is the knowledge norm argued for in Hawthorne and Stanley (2008) and considered in the Prologue. But other proposals also fit the bill, as we saw in Chapter 2, Section 2.6. A proposal is context-sensitive if it claims that the epistemic status that makes it rationally permissible to treat p as a reason for action varies with context.

Currently, there are at least three authors who defend context-sensitive accounts of the epistemic norm for practical reasoning.1 Brown (2008: 171) writes that "the standard for practical reasoning varies with context," while Gerken (2011: 53) states that "the degree . . . of warrant required for practical rationality may vary with practical context." Locke (2015) sees his proposal as a refinement of a justification norm for practical reasoning. However, it is fitting to list him as a contextualist, because Locke (2015: 76) holds that whether it is appropriate to treat p as a reason depends on "the nature of the particular decision problem one faces." This amounts to holding that there is no single standard of justification that suffices to make it appropriate to treat p as a reason for all decision problems, or in all contexts. Since Locke holds that the relevant standard varies, it is fitting to describe his proposal as a context-sensitive one.

1 Other accounts that have context-sensitive leanings can be found in Lackey (2010 and 2016). Lackey holds that sometimes not only does being in a certain epistemic state matter but also having a certain kind of epistemic basis. Williamson (2005), on one reading, counts as a contextualist. Williamson holds that some contexts require not only knowledge for permissible practical reasoning, but higher-order knowledge. A discussion of Lackey's and Williamson's accounts is unfortunately beyond the scope of this chapter.

Contextualism can be motivated by two different strategies. First, much like the classical invariantist proposal, the knowledge norm, contextualism can be motivated by ordinary appraisals of practical reasoning. Second, contextualism can be motivated by looking at a range of cases and our judgments about them. I will quickly outline the first strategy, but will mainly focus on the second.

Brown (2008) agrees that "knows" figures prominently in our appraisal of actions. However, she argues that the case from these appraisals to the knowledge norm is not conclusive. She points out that we often defend and criticize actions by referring to other, stronger or weaker, epistemic statuses than knowledge. In defense of coming home late, one might refer to one's evidence for one's belief: the timetable said the train would leave at 12.20 p.m. A father might criticize his son for staying late at a party because he did not know with absolute certainty that there would be a bus back home after 12.00 a.m.

Taking such utterances at face value, the varying epistemic vocabulary we use to defend and criticize actions could suggest that different contexts make different epistemic demands on treating something as a reason. Whether this strategy is successful depends on whether the epistemic vocabulary we explicitly use is indicative of a norm for practical reasoning. Some remarks by other contextualists put pressure on this approach. Gerken (2011: 532) reminds us that one should be very cautious when deriving general principles from vernacular language. His criticism aims at the knowledge norm and is based on the claim that vernacular uses of "knows" are often imprecise. But one may wonder whether vernacular use of other epistemic vocabulary, such as "evidence" or "certainty," is any more precise and whether utterances
involving such vocabulary can serve as evidence for a context-sensitive epistemic norm for practical reasoning. If vernacular language use is generally imprecise, then the varying vocabulary need not indicate that defenses and criticisms of actions are based on varying epistemic statuses. In light of such issues, and given the availability of a second strategy, it is advisable to focus on the latter.

The second strategy can be illustrated by two cases we have already encountered. In the doctor example in the Prologue, it seems that the doctor needs to know, or at least have a degree of justification that would suffice for knowing, that the needle is sterile before treating it as a reason to continue with my influenza vaccination. Any lesser epistemic state seems insufficient for permissibly treating that proposition as a reason. While this case might suggest that there is a knowledge norm for practical reasoning, we saw in Chapter 2, Section 2.6, that there are many mundane cases like Kickoff that suggest otherwise. S in Kickoff has at best a weakly justified belief; it certainly does not amount to knowledge, and her degree of justification falls short of knowledge-level justification. Yet the belief that the game has started seems sufficiently justified to be treated as a reason for S to turn on the TV, assuming that absolutely nothing hangs on this.

Invariantism can only be maintained if there is a common epistemic status between the doctor and subject S in Kickoff that suffices to render each of their actions rational and that makes it permissible to treat the respective proposition as a reason. The difference between the doctor and S seems so fundamental that it is hard to believe there is a common epistemic status that could suffice for making it permissible to treat the relevant proposition as a reason in both cases. This is a challenge for the invariantist, but I will not argue that this challenge cannot be met. However, unless the challenge is met, invariantism faces the threat of being extensionally incorrect: It does not seem to cover all practical reasoning situations in which we rationally treat propositions as reasons and rationally act on those reasons.

Contextualists can easily avoid this challenge by taking these two cases at face value. They can say that the doctor's belief and S's belief differ radically in epistemic status, yet for each of them in their respective context, it is rationally permissible to treat the content of their belief as a reason for action. They can say this because their key claim is that the epistemic status that makes it rationally permissible to treat p as a reason for action can vary with context. Even just these two cases suggest that contextualist proposals have an advantage over invariantism when it comes to extensional correctness, and this is all one needs to motivate contextualism.


One might be tempted to argue that a comparison of various cases can also directly show that invariantism is false, rather than merely confronting it with a challenge. All one would need are two cases in which the consequences of error vary. Assume I have the reasonable belief that the train will leave the station at 12.20 p.m. because the timetable says so. Then I will not be at fault for coming late to a routine dentist appointment if the train cannot leave the station by 12.20 p.m. because of technical difficulties that I could not have known about. But if I have a hugely important appointment, for example an interview for my dream job, relying merely on a timetable, perhaps without knowing whether it is current or while knowing that it cannot predict technical difficulties, is inappropriate. Presumably, the belief that the train will leave at 12.20 p.m. has the same epistemic status in both cases, as in both it is based on what the timetable says. But while it is appropriate in the dentist appointment case to rely on this belief in practical reasoning, it is not in the job interview case. Therefore, invariantism must be false.

This might look like the strongest argument for contextualism, as it directly shows that invariantism must be false. However, it crucially depends on the assumption that there really is no difference in epistemic status between the two cases. This is exactly what some invariantists, such as Hawthorne and Stanley and Fantl and McGrath, deny. Setting aside the differences between the epistemic norms they uphold, both pairs of authors are known proponents of pragmatic encroachment in epistemology. According to this thesis, practical factors can influence the epistemic status of a belief, that is, whether the belief amounts to knowledge or whether it is justified. Since practical factors differ between the two cases, it could be said that there is, after all, a difference in epistemic status as well.
And if there is such a difference, then invariantism can explain why it is appropriate in one case, but not the other, to rely on one's belief in practical reasoning. So while the argument outlined in favor of contextualism is appealing, given that pragmatic encroachment is a view adopted by invariantists, it fails to establish its conclusion. Since I will argue in Part II of this book that pragmatic encroachment is true, I cannot deny my invariantist opponent this view in order to help myself to a stronger argument in favor of contextualism. Nonetheless, the issue of extensional correctness that the doctor case and Kickoff have highlighted still suffices to motivate contextualism, even if it does not conclusively establish it.2

2 Perhaps pragmatic encroachers will be tempted to argue that practical factors lower the bar for knowledge or knowledge-level justification so far down that one meets it even in Kickoff. While I admit this theoretical possibility, and that it would undercut my argument for contextualism, I think this requires a rather dramatic form of pragmatic encroachment. For my part, I find it implausible that the flimsy evidence one has in Kickoff could ever amount to knowledge-level justification.

Perhaps there is even a third strategy available in favor of contextualism. One might say that given our nature as beings of thought and action who face a wide variety of practical situations, it is simply to be expected that contextualism is true. An epistemic norm for practical reasoning that aspires to be a norm for all practical reasoning must be flexible enough to capture all the practical situations in which we take others and ourselves to act rationally. And, prima facie, contextualism allows for just that flexibility, while invariantism does not. An analogy with biological features helps to illustrate why contextualism would not be surprising given the kind of beings that we are. Mueller and Ross (2017: 295) bring this out in terms of a well-functioning practical reasoning system. Just like the heart, which must adjust its rate to varying situations in order to function well, the practical reasoning system must adjust to different circumstances. In some situations, a very strong epistemic position is needed for permissible practical reasoning; in others, the standard is more relaxed. It does not seem that an invariantist norm can do justice to these various needs and changing situations. Since contextualism seems to have the resources to capture this need for variability, it is prima facie an attractive theory.

In the following, I will mainly bolster contextualism by pursuing the second strategy: I will demonstrate that contextualism can cover a wider variety of cases than invariantism. I do not want to overstate my case for contextualism. It is in principle possible that future invariantists will be able to demonstrate that they can account for all the relevant cases I discuss. But at least given the current state of the debate, I will provide sufficient reasons to prefer contextualism over invariantism.

Up to this point, I have only introduced the main idea of contextualism in the broadest possible sense. I will spell out contextualism in the following by discussing the views of current contextualists. While I think that contextualism is on the right track, I have a number of misgivings about the accounts currently on offer. However, I will argue that all these problems can be fixed. I will use three questions to shape the following discussion.

1. The features question: Which features of the context are relevant for determining which epistemic status one must have, and how do these features determine this epistemic standard?


2. The subjective–objective question: Is the required epistemic status determined by the actual features of a context, or by the subject's beliefs about the context?
3. The factivity question: Are there contexts in which only factive attitudes can make it permissible to treat a proposition as a reason?

3.2 Jessica Brown's Account and the Factivity Question

Although Jessica Brown was the first to mention in print the idea that the epistemic status for rationally permissible practical reasoning varies with context, she does not fully develop it. In fact, Brown (2008: 171) says little more than I have already quoted: "the standard for practical reasoning varies with context: sometimes the standard is knowledge, sometimes it is less than knowledge, and sometimes it is more than knowledge."3 Given Brown's terse remark, one can determine on her behalf an answer neither to the features question nor to the subjective–objective question. I take it that an adequate development of a contextualist theory must answer both questions, but I will set this worry aside for now.

Brown's remark does, though, seem to suggest a commitment regarding the factivity question. She states that in some contexts the relevant standard is knowledge, and in some even more than knowledge. Since knowledge is a factive attitude, Brown seems to hold that there are contexts in which only a factive attitude can satisfy the contextually determined epistemic standard. Brown gives the following case to argue against the sufficiency direction of the knowledge norm, but it also brings out her commitment to factivity.

Surgeon Case
A surgeon is about to operate. She saw the patient in the morning when it was decided that they had to remove the left kidney. Yet she checks the patient's records before she begins the procedure. A student notices this and the following dialogue takes place:

student: I don't understand. Why is she looking at the patient's records? She was in the clinic with the patient this morning. Doesn't she even know which kidney it is?
nurse: Of course she knows which kidney it is. But imagine what it would be like if she removed the wrong kidney. She shouldn't operate before checking the patient's records. (See Brown 2008: 176.)

3 A more recent endorsement can be found in Brown (2012); however, this article does not elaborate on the theory.


The nurse's comment suggests that knowledge of a proposition is not always sufficient for it to be appropriate to treat that proposition as a reason; some contexts call for even more than knowledge. In any case, since knowledge is factive, cases in which even more than knowledge is required also demand factivity.

I am skeptical that some contexts call for a factive attitude. Understood as an existential thesis, it is a weak thesis that cannot be rejected via a single counterexample. Yet I think that it is an additional burden, and I fail to see what contextualism stands to gain by accepting it. A lack of a factive attitude is compatible with having the best possible evidence and being an entirely rational epistemic agent. Whether one's attitude is factive will, in some sense, always be a matter of luck. It is implausible that having bad luck, in the form of lacking a factive attitude while having the best possible evidence, could ever make one's practical reasoning impermissible, as it does not seem to affect one's epistemic rationality either. Hence, I believe that a contextualist account of an epistemic norm for practical reasoning is best advised to steer clear of a demand for factive attitudes in some contexts, whether these are understood as factive justification, knowledge, or some epistemic state even more demanding than knowledge. At least the contextualism I adopt does not require factive attitudes in any context. I shall leave it to my critics to argue that I thereby miss something significant.

My negative answer to the factivity question is entirely compatible with the view that the surgeon is required to have a degree of justification that is higher than knowledge-level justification. We could say that the nurse's comment does not indicate that one needs more than knowledge. Instead, it merely suggests that an unusually high degree of justification is required, while this degree of justification is still compatible with ending up with a false belief.
I ultimately argue against this description of the case in Chapter 4. But my negative answer to the factivity question is in principle compatible with accepting that some contexts require a degree of justification that is higher than knowledge-level justification. This would still capture Brown’s judgments about rational permissibility in the surgeon case.

3.3 Mikkel Gerken's Account and the Incompleteness Problem

Gerken (2011) frames his contextualist account in terms of warrant, a nonfactive notion of epistemic rationality found in Burge (2003). Gerken argues that the deliberative context determines the degree of warrant one must have toward a proposition in order to satisfy the epistemic demands for permissible practical reasoning. Since Gerken (2011: 530) points out that he sees warrant as a variant of epistemic justification, I will transpose his warrant account (WA) into terms of epistemic justification to facilitate further discussion.

Contextualist Justification Norm for Practical Reasoning (CJN)
In the deliberative context (DC), it is rationally permissible for S to treat the proposition that p as a reason for action iff S's degree of justification for believing that p is adequate relative to DC.4

Like me, Gerken holds that there are no contexts in which factive attitudes are required. While I have changed the terminology to "justification," this leaves Gerken's view on the factivity question unaltered. On most ordinary accounts, epistemic justification is a nonfactive notion, as having a justified belief is thought to be compatible with having a false belief.

Let's turn to the subjective–objective question. Are the features of the context that are relevant for determining the adequate degree of justification to be thought of objectively or subjectively? Here, Gerken gives what one might consider a mixed answer. The notion of DC is to be identified neither with the de facto circumstances nor with the agent's conception of the circumstances. Instead, Gerken (2011: 530, footnote 2) suggests that the DC consists of what the agent rationally presupposed to be her practical context. While I am highly sympathetic to this answer, I would like to refine it a little, for reasons I will explain. I suggest that the DC consists of all the features a reasonable person would presuppose to obtain in a given context. This refinement is not intended to add anything to Gerken's answer. It is merely meant to introduce a placeholder notion, to which I will return (see Chapter 5, Section 5.4, and Chapter 6, Sections 6.2 and 6.3). The reasonable person is a placeholder notion in the sense that I will not define it;5 this would require a full-blown theory of rationality/reasonableness, which I will not offer in this book. Nonetheless, I assume that the reader has a sufficiently clear grasp of what a reasonable person would be like.

The purpose of the notion of a reasonable person, or the reasonable person standard, is to deal with certain questionable objections: questionable, because they are based on assumptions that a reasonable person would not make. Any theory will deliver certain verdicts on cases, and these depend on the details of the case. Assume that the details of the case are the inputs, and the verdicts of the theory the outputs. Any theory will, given questionable inputs, deliver questionable outputs. The notion of a reasonable person is meant to shield against questionable inputs. For example, if the features of a DC partly consisted of irrational presuppositions about the context, then it would hardly be surprising if a principle like CJN came to equally questionable verdicts about what may permissibly be treated as a reason. The notion of a reasonable person is meant to rule out such implausible verdicts from the start and thus helps to avoid questionable objections in the first place.

4 Based on Gerken's principle, WA; see Gerken (2011: 530).
5 While she eschews a definition, Lawlor (2020) does offer some characteristics of the reasonable person with which I can agree, although I disagree with one of the main conclusions she argues for via the reasonable person.

While I will not further define the notion of a reasonable person, I would like to mention that it is intended to be neutral between an internalist and an externalist reading. On the internalist reading, S's DC consists of all the features that it is rational for S to presuppose given S's (accessible) mental states.6 On the externalist reading, S's DC consists of all the features that it is rational for S to presuppose, which could be independent of S's mental states. Hence the notion of a reasonable person standard satisfies my desire for neutrality in the internalism/externalism debate.

This brings us to the features question, which I take to be the crucial issue. To see why, I will return to the worry with which I opened this chapter. When we want to know which epistemic status makes it permissible to treat a proposition as a reason for action, the answer "that depends on the context" seems disappointing and surely gives rise to further questions. CJN faces this issue. According to CJN, one's degree of justification needs to be adequate relative to the DC.
6 The term "accessible" in parentheses indicates that we can further distinguish between two versions of internalism – mentalism and accessibilism.

But unless we are given an account of what counts as adequate in a DC, CJN is unsatisfying. Contextualists must provide an account of what counts as adequate in a DC, and of which features of the context matter in determining adequacy. Gerken is quite aware of this issue and at least attempts to provide a partial account of which features of the context matter for determining adequacy. He suggests five parameters of the DC that jointly determine which degree of justification is adequate:

1. available alternative courses of action
2. availability of further evidence
3. urgency of the decision
4. what is at stake if one acted on that proposition
5. social roles and conventions associated with the action.7

Gerken says that this is not a definitive but an open-ended list, and that more needs to be said about the interaction of these parameters in the determination process. Gerken (2011: 534) admits this gap in his account. The following case, which features all the parameters on Gerken's list, is intended to illustrate why this gap seems troubling.

Bank Hours
On Friday morning, Hannah is at work and wants to plan the rest of her day. She considers whether to rely on the proposition that the bank is open this Saturday in her decision about when to deposit her paycheck. Making a decision right now is not extremely urgent, but Hannah would like to settle the matter, and how she decides will make a difference to how she organizes the rest of the day. If the proposition is true and she relies on it, she can avoid the queues on Friday afternoon and will then have time to meet a friend. If the proposition is false and she relies on it, Hannah will not be able to deposit her paycheck. The consequences of this would be bad, but not disastrous: she might overdraw her account, which would mean paying a small fine, and she would like to avoid this. Additionally, her partner Sarah has recently asked Hannah to be more responsible when it comes to financial decisions, and Hannah promised to do so. Hannah remembers that she was at the bank last Saturday, but she could also ask a coworker to be sure she has the hours right.

Admittedly, Bank Hours is a complex case. However, it is not overly intricate; I think it mirrors the complexities we often face in everyday life quite well. Given all the parameters involved in this case, what is the degree of justification that Hannah must have to make it permissible to rely on the proposition that the bank is open on Saturday? How are these different parameters aggregated to determine the adequate degree of justification? Unless contextualists can provide answers to these questions, I suspect that contextualism will remain a disappointing option in the debate about epistemic norms for practical reasoning. Let us call these open questions the features questions; together they give rise to the incompleteness problem. I grant that Bank Hours is a hyperbolic case designed to make the incompleteness problem explicit. But the general incompleteness problem persists even in cases with fewer features. For example, how should we balance high stakes against the availability of further evidence? What we really want is a principled explanation of how the various features of the context determine the required degree of justification.

7 Parameters 1–4 are listed in Gerken (2011: 53); parameter 5 is added in Gerken (2012a: 376).

The incompleteness problem for CJN, which is a general problem for contextualism, is a drawback. But it calls for further work, not abandonment of the idea of a context-sensitive epistemic norm for practical reasoning. In the remainder of this chapter, I will look at one proposed solution to the incompleteness problem and offer my own.

3.4 Dustin Locke's Account and the Incompleteness Problem

Locke (2015) argues in favor of a version of contextualism, and he also offers a solution to the incompleteness problem (although he uses different terminology). Locke introduces his proposal as a variant of a justification-based epistemic norm for practical reasoning: It is rationally permissible to treat p as a reason iff p is epistemically certain enough, where what counts as enough depends on the decision problem one faces. This latter feature, in my view, makes him a contextualist. To put it in the terminology used throughout this chapter: Locke holds that it is permissible to treat p as a reason iff one's degree of justification for p is high enough, where what counts as high enough can vary with context.

If we had an account of what counts as high enough in a given context, then we would have at least part of the solution to the incompleteness problem. We would not necessarily have an answer to the question of which features of the context matter, but we would have an answer as to when one's justification counts as adequate. Locke suggests that one's degree of justification for p is high enough iff p is practically certain. The notion of practical certainty is spelled out as follows:

Practical Certainty
p is practically certain for S relative to her decision D if and only if the actual degree of justification of p for S is such that the act which S has most reason to do is the act that S would have most reason to do were p's degree of justification maximal.8
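Using some shorthand of my own (not Locke's notation), Practical Certainty can be restated compactly. Let $j(p)$ be S's actual degree of justification for $p$, let $j_{\max}$ be the maximal degree, and let $a^{*}(x)$ be the act S has most reason to do when her degree of justification for $p$ is $x$:

```latex
\[
  p \text{ is practically certain for } S \text{ relative to } D
  \iff
  a^{*}\bigl(j(p)\bigr) = a^{*}\bigl(j_{\max}\bigr)
\]
```

Stated this way, applying the definition requires evaluating $a^{*}(j(p))$, that is, determining what S has most reason to do given her actual degree of justification.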

I will argue that this definition of practical certainty suffers from a serious flaw that makes it unsuitable as a solution to the incompleteness problem.

8 See Locke (2015: 88). I have switched from his use of "epistemically certain" to "maximal degree of justification" so as to use terminology consistently throughout this chapter. Practical Certainty is at least reminiscent of what is known in the literature as a practical adequacy condition on knowledge, which I will discuss in Chapter 5, Section 5.4. I will not offer a comparison of these two conditions.


Locke's suggestion is that we check whether p is practically certain by checking whether the act p supports is the same as the act we would have most reason to do if p's degree of justification were maximal. We first check what we have most reason to do conditional on having maximal justification for p, and then check what we have most reason to do given our actual degree of justification for p. If the two acts are the same, then p is practically certain.

But it is unclear whether this comparison procedure is suitable for determining whether the degree of justification one actually has is adequate, or high enough, in one's current decision context. Suppose you have to decide whether to buy a plane ticket to New Zealand for January. The only relevant consideration is whether it is summer in the southern hemisphere in January. Let us assume that you have a moderate degree of justification for believing that it is indeed summer. This belief, if one had a maximal degree of justification for it, would support buying the plane ticket, and thus the act one would have most reason to do. Now, to do the comparison, we must know whether one's actual degree of justification is high enough to license buying the plane ticket. But that was the original question we tried to answer by using Practical Certainty: How high must one's actual degree of justification be to license the decision to buy the plane ticket? The comparison procedure employed in Practical Certainty thus presupposes an answer to the very question that Practical Certainty is supposed to provide. Hence Locke's account cannot solve the incompleteness problem.

Locke's further explanations also illustrate the problem. Using the Surgeon Case as an example, Locke thinks that:

[…] given how easy it is to double-check the patient's records, and given just how bad it would be to remove the wrong kidney, the surgeon in fact has most reason to double-check the patient's records. But if it were epistemically certain [maximally justified] for the surgeon that it was the left kidney that was diseased, then given the (small) cost of double-checking the patient's record, the surgeon would have most reason to proceed with the surgery without double-checking. (Locke 2015: 88)

While this seems plausible, it still shows that the approach violates the proper order of explanation. Locke assumes that we already know what we have most reason to do. But what one has most reason to do is an all-things-considered judgment, which may be influenced by a number of reasons, not just a single one, and also by how one is epistemically related to these reasons. And it is illegitimate to help oneself to such an all-things-considered judgment when one wants to determine whether one’s degree of justification regarding a single proposition suffices for it to be treated as a reason.

In most ordinary situations, we do not know beforehand what we have most reason to do. In deliberation, it makes sense to answer this question, in part, in light of the epistemic relations in which we stand to various propositions that have practical relevance. So in order to know what one has most reason to do, one must know what degree of justification our context requires for using a proposition in determining what one has most reason to do. However, the question of what degree of justification is required was the one we were trying to address in the first place. This purported further explanation thus suggests that Locke’s account violates the proper order of explanation. Locke presupposes that we have an answer to the question of what we have most reason to do prior to knowing whether our actual degree of justification suffices for treating a relevant proposition as a reason. This gives us good reason to reject Locke’s solution to the incompleteness problem.

However, I have encountered the following suggestion as to why there is no circularity in Locke’s account. One might say that the determination of the act one has most reason to do is not a matter of one’s reasons, but rather involves the machinery of expected utility theory. The act one has most reason to do is the act that maximizes expected utility, which is determined in light of one’s credences and the utilities of the outcomes of various actions. Reasons never enter into the picture. This avoids any circularity. But it also implies that what may permissibly be treated as a reason never enters as an input for resolving the question of what it is rational to do, which is resolved entirely by means of expected utility theory. This makes what one may permissibly treat as a reason for action an epiphenomenon of practical rationality, which is accounted for entirely in terms of expected utility theory.
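The expected utility machinery invoked by this suggestion can be sketched briefly. The following is a generic illustration only, not a reconstruction of Locke’s own formalism; the states, actions, credences, and utility figures are invented for the purpose, loosely echoing the Surgeon Case:

```python
def expected_utility(action, credences, utilities):
    """EU(a) = sum over states s of credence(s) * utility(a, s)."""
    return sum(credences[s] * utilities[(action, s)] for s in credences)

# Hypothetical decision: proceed with surgery vs. double-check the records,
# given a credence that the patient's records are correct.
credences = {"records_correct": 0.95, "records_wrong": 0.05}
utilities = {
    ("proceed", "records_correct"): 10, ("proceed", "records_wrong"): -1000,
    ("double_check", "records_correct"): 9, ("double_check", "records_wrong"): 9,
}

# The act one "has most reason to do" is simply the EU-maximizing act;
# reasons, as the text notes, never enter the computation.
best = max(["proceed", "double_check"],
           key=lambda a: expected_utility(a, credences, utilities))
print(best)  # → double_check
```

On these stipulated numbers, even a small chance of removing the wrong kidney swamps the minor cost of double-checking, so the maximizing act is to double-check, matching Locke’s verdict quoted above.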
This suggestion does avoid my charge of circularity, and it is admirably clear about its implications. However, I believe it robs Practical Certainty of significance and hence would be an odd partner in a defense of Practical Certainty. If reasons for action really do not contribute anything to the determination of practical rationality, it seems quite odd that we would care about reasons, and about when it is rationally permissible to treat a proposition as one, as Practical Certainty supposedly does. Arguing that reasons may be significant for purposes other than determining what it is rational to do does not avoid this issue. We started with the idea that practical rationality depends on epistemic rationality. We then considered when it is rationally permissible to treat a proposition as a reason, where such rational permissibility was meant to contribute to the practical rationality of the subsequent action. Even if reasons play some other role, if they are superfluous to the determination of practical rationality, then Practical Certainty, on this strategy, cannot add anything of interest to our question; indeed, on this strategy, the question we asked was misguided in the first place.

As I explained in the Prologue, I follow what one may call a reasons-based approach to practical rationality, which is assumed in the debate about epistemic norms for practical reasoning, to which Practical Certainty is meant to be a contribution. The aforementioned strategy breaks with this approach. That is fine in itself, as I concede that the reasons-based approach is not without its critics. But the break is rather odd as a defense of Practical Certainty. While the suggestion outlined avoids my circularity objection, it comes at the cost of robbing Practical Certainty of the significance that it was meant to have. Therefore, I believe that this suggestion, while perhaps capable of solving the incompleteness problem, clashes with the larger project we are engaged in and is thus best avoided.

3.5 Solving the Incompleteness Problem

Here, in a nutshell, is my main idea for solving the incompleteness problem: We can capture the various contextual factors, their interplay, and the resulting demands they place on the agent, in a form akin to a cost–benefit analysis. Two parameters play a key role: the costs of further inquiry (CFI) and the costs of error (COE). These two costs determine whether one’s degree of warrant, given by one’s current evidence for p, is adequate to treat p as a reason.

A minor detour is helpful to set up my proposal. William James famously held that there are two maxims governing our epistemic conduct: believe truths and shun error (James 1897: 18). But how should one balance these two maxims? One can easily satisfy the first by just believing anything, thereby believing all truths, while at the same time believing all falsehoods and doing badly with regard to the second. Or one can withhold forming beliefs at all, thereby doing well by the second maxim, but badly by the first. While both maxims are plausible, the reasonable epistemic agent must strike some sort of balance between them.

A similar problem arises for us as beings of thought and action that must act in the world. There are two plausible but opposing desiderata for one’s practical reasoning. On the one hand, one wants one’s epistemic position to be maximally strong, because one thereby reduces the risk of unsuccessful actions. On the other, one cannot postpone actions indefinitely. So while one wants to have a maximal degree of justification prior to acting, trying to acquire this degree will get in the way of acting at all. I see context-sensitive norms as an attempt to balance these two desiderata based on various contextual factors. The incompleteness problem can be seen as the failure to give an account of how to properly balance these two desiderata, an account that my solution will provide.

One generally needs evidence, assuming a broadly construed notion of evidence,[9] to be justified in believing to any degree at all. Even if one has some evidence for believing p, this evidence might not provide a degree of justification to actually fully believe that p. But it still provides some degree of justification, which may justify a certain positive credence in p. The challenge for contextualism is to provide an account of how much evidence for p is needed in a given context so that one’s resulting degree of justification makes it permissible to treat p as a reason.

I assume that there is a maximal degree of justification. One’s degree of justification for believing p is maximal iff one’s evidence entails that p, or, as I often say, if one’s evidence rules out all not-p possibilities. Depending on whether one has fallibilist or infallibilist leanings about knowledge, one will either reject or accept that a maximal degree of justification is needed for knowledge-level justification. I will take no stance on this here. For many ordinary propositions that we are justified in believing, we will not have a maximal degree of justification, and it will, in principle, be possible to gather further evidence. But acquiring further evidence comes with a cost, as inquiry takes up time and, at least, mental energy; this is what I call the costs of further inquiry (CFI). I suppose that we have an intuitive grasp of the notion of CFI. I also assume that this notion is basic and cannot be reduced.
At least, I will not attempt a further reduction. I will spell out which contextual factors can influence CFI, drawing on Gerken’s list of contextual parameters introduced in Section 3.3. However, the list only serves as an illustration of how contextual factors can influence CFI; it is not an attempt to reduce CFI to the list.

By acquiring further evidence, I mean acquiring evidence for p that either rules out one or several previously uneliminated not-p possibilities, or at least reduces the probability of a not-p possibility. Accordingly, just obsessively double-checking a timetable for a train will not necessarily count as acquiring further evidence for a specific departure time of the train. Double-checking once might be a way to rule out or to reduce the probability of a not-p possibility, because one rules out the possibility that one misread the timetable the first time around. But, at some point, further checks will not count as acquiring further evidence that would rule out or reduce the probability of a not-p possibility. Consequently, obsessive double-checking is not a way to increase one’s degree of justification.

CFI depends not only on the availability of further evidence but also on the evidence that one already has. If one already has excellent evidence for p, it will be hard to acquire evidence for p that could rule out further not-p possibilities. For example, if my perceptual experience provides me with evidence that there is a bottle of water on the table, this is strong evidence that there is a bottle of water on the table. Consequently, I already have a high degree of justification for believing this and, barring skeptical proclivities, I will have a degree of justification that suffices for knowing. But, of course, there are still not-p possibilities that my evidence fails to rule out. I might be deceived by an evil demon, and there might be no bottle of water on the table. However, acquiring evidence that rules out such skeptical scenarios is hard, if not impossible. In general, if one already has a high degree of justification, then one’s evidence will already rule out many, or at least the most pertinent, not-p possibilities. Therefore, acquiring further evidence that rules out further not-p possibilities will be hard. Consequently, when one already has a high degree of justification, or when one already has excellent evidence, acquiring further evidence will often, even if not always, be costly. CFI can therefore depend partly on what evidence one already has. CFI also include practical costs.

[9] I mean to stay neutral on the debate about what evidence is, whether evidence is identified with facts, mental states, propositions, or what one knows.
Some cases call for immediate action, and taking too much time to inquire about p dramatically affects the intended outcomes. For example, a surgeon could inquire further about whether the needle he is about to use is sterile but, at some point, further inquiry lessens the survival chances of the patient lying on the operating table. So, under the label of CFI, we can aggregate the factors of availability of further evidence, urgency of the decision, and the evidence that one already possesses. How such aggregation is possible, given the incommensurate values of factors such as availability of evidence and urgency of decision, is an interesting question. I assume that such aggregation is possible, as it seems to be something that we do on a daily basis. For example, consider a choice between two different restaurants. We assign a value to each restaurant by adding up various incommensurate considerations, such as the price of a meal, location, service, and so on. While it is not clear how we aggregate these incommensurate considerations, we do so quite often in our everyday life, and often quite well. All I need for my proposal is that such aggregation of the incommensurate values of the parameters comprising CFI is possible as well.

CFI is just one factor that determines whether one’s degree of justification is adequate. The other is the costs of error (COE) about p. Among the COE are the consequences one incurs if the action that one took p to be a reason for fails to achieve one’s intended end because p turns out to be false. For the moment, I shall adopt the notion of stakes; I will offer a refined account of it in Chapter 5. The parameter stakes associated with the action from Gerken’s list clearly falls under COE. For example, suppose you believe that the bank will be open on Saturday and you treat this as a reason to deposit your paycheck on Saturday. However, if the bank is not open on Saturday, then you will fail to deposit your paycheck. And if you fail to deposit your paycheck by Saturday, then you will be late with your mortgage payment and lose your house to foreclosure. Here, the costs of foreclosure are the COE about p. As a rule of thumb, COE rise when the stakes associated with the action rise. COE can also vary with the social roles and conventions associated with the action, the final parameter on Gerken’s list. For example, the cost of making a wrong call about a penalty shot is higher for the referee of the game than for somebody watching the game on TV. Just as with CFI, the various factors that affect COE are incommensurable, at least in the sense that there is no obvious arithmetical procedure that would allow us to add up costs such as monetary losses and costs such as breaking with social roles and conventions. But, as before, I assume that aggregation of such incommensurate considerations into a final value of COE is possible.
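One crude way to picture the kind of aggregation being assumed is a weighted sum that maps each factor onto a common scale. This is purely illustrative: the text deliberately leaves the actual aggregation procedure open, and the factor names, scores, and weights below are invented. The sketch presupposes commensuration rather than explaining it; it only shows the shape of the operation:

```python
def aggregate(factor_scores, weights):
    """Toy aggregation: each factor gets a numeric score on a shared
    scale, and the total cost is their weighted sum."""
    return sum(weights[f] * score for f, score in factor_scores.items())

# Hypothetical CFI factors for a bank-style case (scores and weights invented):
cfi_factors = {
    "availability_of_evidence": 2.0,   # further evidence is easy to get
    "urgency": 1.0,                    # the decision is not urgent
    "evidence_already_held": 3.0,      # one already has decent evidence
}
weights = {
    "availability_of_evidence": 1.0,
    "urgency": 1.5,
    "evidence_already_held": 0.5,
}

total_cfi = aggregate(cfi_factors, weights)
print(total_cfi)  # → 5.0
```

The same operation would be applied, with different hypothetical factors, to obtain a total COE; nothing in the sketch answers the philosophical question of where the scores and weights come from.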
And I believe that we often do aggregate such diverse considerations, and do so quite reliably.

So far, we have covered all the parameters on Gerken’s list other than the availability of alternative actions. This parameter can affect the overall value of both CFI and COE, depending on the context. Imagine a context in which alternative actions are available and would have equally good outcomes given the agent’s preferences. In addition, the agent has reasons for these actions that are well supported by the evidence that she already has. In such a context, further inquiry could be seen as a cost, and CFI should rise accordingly. Suppose that I face a choice between two restaurants that are equally good. I have excellent evidence that one is on 5th Street, while I have little evidence that the other is on 8th Street. In that situation, the available alternative of going to the restaurant on 5th Street raises CFI as to whether the other restaurant is on 8th Street. Why bother with further inquiry when the restaurants are equally good and my end of having a tasty meal can be achieved by going to either one? Here, additional inquiry would be an additional cost; hence the availability of an action that is sufficiently justified can be seen as contributing to CFI.

In a different context, the availability of alternative actions might raise COE. It might not be immediately necessary to act on p, and one can postpone acting on p by pursuing some alternative course of action. In such a context, the availability of alternative courses of action raises COE. Suppose that it is not immediately necessary for a doctor to prescribe medication. The patient’s condition is not so severe as to require urgent treatment. Instead, the doctor could run a further test to ensure that the patient’s condition actually needs medical treatment. In such a situation, the availability of an alternative to prescribing medication should count as raising COE about p, as there is no decisive reason to act on p right now.

Gerken claims that his list of parameters is open-ended. So the question might arise whether one should be satisfied with simply regrouping his list into factors that affect CFI or COE. If there are further factors that are not accounted for, how could my suggestion be a solution to the incompleteness problem? My answer to this concern is that I merely used Gerken’s list as an illustration of what CFI and COE are, and of how they can vary from context to context. I take CFI and COE to be fundamental in the sense that whatever contextual parameters one might discover in the future, these will be parameters that affect CFI and/or COE. There will be no further parameters that make a contribution that goes beyond CFI or COE.
I take this to be a reasonable assumption, as there is no clear need to consider anything besides CFI or COE when one is deciding whether one’s current evidence provides a degree of justification good enough to act on. While further parameters affecting CFI or COE might exist, their existence does not lead to a critical form of incompleteness in my proposal. This is also my answer to the features question: the only relevant features are CFI and COE.

Having introduced the two factors to be balanced, CFI and COE, I will now show how the balancing works. It will be helpful to give a more graphical presentation. I suggest that we represent CFI and COE as vectors, each starting from the same point, which represents the deliberative context of an agent, C. This deliberative context includes the agent’s current evidence for p; for the sake of simplicity, I will assume that the agent has some positive evidence for p, but this is not a necessary condition of my account.[10] The length of these vectors is determined by the respective total costs. Whenever these vectors have the same length, CFI and COE are balanced. Consider the following scenarios (1–4) depicted in Figure 3.1, where the vector pointing left is associated with COE and the vector pointing right is associated with CFI.

[Figure 3.1 COE and CFI: Variations. Four scenarios (1)–(4), each showing a COE vector and a CFI vector originating from the deliberative context C, with varying relative lengths.]

My proposal for solving the balancing issue, and thus the incompleteness problem, is as follows:

Adequate Justification (AJ)
One’s degree of justification is adequate in DC iff, given this degree of justification, CFI and COE are either balanced or the former exceeds the latter.
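The comparison AJ describes can be rendered as a small toy model. This is an illustrative sketch only: the `Context` structure and the numeric cost magnitudes are my hypothetical additions, and the sketch presupposes that total CFI and COE have already been aggregated into single values:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A deliberative context with aggregated total costs (hypothetical units)."""
    cfi: float  # costs of further inquiry
    coe: float  # costs of error

def adequate_justification(ctx: Context) -> bool:
    """AJ: one's degree of justification is adequate iff CFI and COE
    are balanced or CFI exceeds COE."""
    return ctx.cfi >= ctx.coe

# The four scenarios of Figure 3.1, with invented magnitudes:
scenarios = {
    1: Context(cfi=1.0, coe=5.0),  # COE far exceeds CFI
    2: Context(cfi=4.0, coe=1.0),  # CFI exceeds COE
    3: Context(cfi=1.0, coe=1.0),  # balanced, both low
    4: Context(cfi=5.0, coe=5.0),  # balanced, both high
}

for n, ctx in scenarios.items():
    print(n, adequate_justification(ctx))
```

Only scenario (1) yields an inadequate degree of justification; in (2), (3), and (4), it is permissible to treat p as a reason, matching the verdicts discussed in the text.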

I will examine scenarios (1–4) in turn to explain the details.

In (1), a scenario in which COE far exceed CFI, one’s degree of justification is not adequate to make it permissible to treat p as a reason. Any of the high-stakes cases introduced in this chapter works as an example, whether it is the interview for the dream job or the important paycheck deposit. In the high-stakes bank case, one considers treating “the bank is open on Saturday” as a reason to postpone going to the bank on Friday, since the lines are long. One must make an important deposit in time to avoid foreclosure on one’s house. But now add that one’s actual degree of justification for the bank being open on Saturday is relatively low, as it is merely based on memory of a previous visit. Is it nonetheless adequate to treat “the bank is open on Saturday” as a reason? The costs of error here are losing one’s house, while the costs of further inquiry are just standing in line to find out the hours the bank is open. Given one’s current evidence, COE exceed CFI; the current degree of justification provided by one’s evidence is not adequate to treat “the bank is open on Saturday” as a reason.

[10] Accordingly, my account leaves room for the possibility of scenarios in which one has no evidence for p, and thus no positive degree of justification for p, while it is still permissible to treat p as a reason for action. This is compatible with my thesis in Chapter 1, as I have merely argued that epistemic irrationality can make one practically irrational, and I have argued for the existence of epistemic norms for practical reasoning. However, a context-sensitive epistemic norm may deliver the result that, in some situations, even a nonpositive degree of justification may be sufficient to act on p.

In (2), a scenario in which CFI exceed COE, one’s degree of warrant is adequate to make it permissible to act on p. Kickoff in Section 3.1 is an example of such a scenario; there, one relies on testimony by a drunken stranger from the pub as evidence that the soccer match will be on TV. The costs of error are merely being wrong about whether there is a soccer match on TV, but nothing hangs on this. The costs of further inquiry are not massively high either, but it seems that even the simple effort involved makes them higher than the costs of error. Since CFI exceed COE, the degree of justification provided by one’s current evidence is adequate for treating “there is a soccer match on TV” as a reason to turn on the TV.

In (3), COE and CFI are balanced and low. Because the costs are balanced, it is permissible to act on p. A real-life case might be a situation in which one relies on one’s watch to catch a particular train, where nothing depends on catching that particular train. One could easily improve one’s epistemic status by asking people the time, or by checking whether one’s watch is still accurate. But since the costs of missing the train are minimal, such further inquiry seems excessive. So the evidence one already has makes one’s degree of justification adequate to make it permissible to treat p as a reason for action. That does not mean, as I will explain in more detail in Chapter 4, that it is rationally impermissible to check further. My claim is merely that checking further is not required.

In (4), COE and CFI are also balanced, but both costs are high. Nonetheless, I take it that the evidence one already possesses is adequate to make it permissible to rely on p. This might seem controversial, but we should remember what can drive CFI up.
If one already has very strong evidence for p, then it seems permissible to rely on p even if COE are high. CFI might also rise if one’s situation demands a quick decision. Take the situation of a surgeon about to perform an operation as an example. The costs of using a nonsterile needle might be high. But if further inquiry about whether the needle is sterile would take precious time that the patient does not have, it seems nonetheless permissible to rely on the needle being sterile. So even if CFI and COE are high, as long as they are balanced, it is permissible to rely on p.

Finally, my proposal can handle even such complex cases as Bank Hours. Bank Hours is a complex case because we added so many parameters from the list. The case becomes considerably less complex if we think about it in terms of CFI and COE. In fact, I believe we have a clear intuitive grasp of how CFI relate to COE here. CFI are very low, since Hannah can easily increase her degree of justification for believing that the bank is open on Saturday by calling the bank or asking colleagues. COE are intuitively higher: if Hannah is wrong, she will incur a small fine and, more importantly, she will break a promise to her partner, Sarah. Since COE outrun CFI, the degree of justification Hannah has is not adequate to make it permissible to rely on the bank being open on Saturday in further practical reasoning. To me, at least, this seems to be the intuitively correct verdict.

Before summing up, two concerns should be addressed. The first is whether the costs are a subjective or an objective matter. This is a variant of the subjective–objective question. It should not be that the laziness of an agent can raise CFI. And it should not be that the agent’s indifference to the consequences of her action can diminish COE. But, at the same time, the agent’s preferences should not be entirely barred from making a difference to COE. If the agent simply does not care about being at a certain place at a certain time, the costs of error about where a train goes should be low. I think that this kind of subjective–objective question can be answered along the lines of my earlier suggestion invoking the reasonable person. The relevant notion of costs for you is what a reasonable person can accept as your cost assignment. This means that your laziness cannot drive up CFI, or can do so only to the extent that laziness can reasonably be accepted as a cost-determining factor. On the other hand, this leaves enough leeway for personal preferences in determining COE. If a reasonable person can accept that, for you, the costs of being somewhere at a certain time are not high, then we cannot assume that they are very high.
However, it does rule out the possibility of lowering COE by simply not caring, in a manner that cannot reasonably be accepted.

The second concern is whether supplementing the suggested contextualist norm with AJ is enough to solve the incompleteness problem. I assume that the various factors that affect COE and CFI can be aggregated in a reasonable manner to determine the total COE and CFI. But doesn’t this assumption point to a further incompleteness problem? I concede that this is a valid misgiving, as I agree that we ultimately want a precise aggregation procedure. However, I believe that it does not lead to a critical incompleteness problem. The incompleteness problem I raised for contextualism was that we need to know when a degree of justification can count as adequate in a deliberative context, and which features play a determining role in adequacy. I have offered answers to both questions. Along the way, I have helped myself to two assumptions that one could legitimately ask to be clarified further. The first was the use of the notion of a reasonable person as a placeholder. The second was the assumption that it is possible to aggregate the incommensurate values of the various factors that affect CFI and COE. One can rightly say that these assumptions point to an incompleteness in my account, and I am sympathetic to the demand for a resolution of this incompleteness in future work. But this is not the incompleteness problem that I raised for contextualism and intended to answer. AJ can do justice to all the factors mentioned on Gerken’s list, but it is superior to an open-ended list: it gives us a clear answer as to which contextual factors matter, namely, anything that affects CFI and/or COE, and it tells us how they determine whether one’s degree of justification is adequate. This is the incompleteness problem I intended to solve.

3.6 Summary and Outlook

Throughout this chapter, I have made a number of suggestions with the aim of improving current contextualist epistemic norms for practical reasoning. To summarize, I want to present my full account, give another reason to prefer contextualism over invariantism about epistemic norms for practical reasoning, and offer a further outlook. The basis of my account is Gerken’s warrant account, which I have reformulated as CJN:

Contextualist Justification Norm for Practical Reasoning (CJN)
In the deliberative context (DC), it is rationally permissible for S to treat the proposition that p as a reason for action iff S’s degree of justification for believing that p is adequate relative to DC.

I have answered the factivity question negatively: no context requires factive attitudes. My answer to the subjective–objective question appealed to what a reasonable person would presuppose about which features of a context obtain. And, finally, I argued that CJN must be supplemented with an account of what counts as an adequate degree of justification. My suggestion for this supplement is AJ:

Adequate Justification (AJ)
One’s degree of justification is adequate in DC iff, given this degree of justification, CFI and COE are either balanced or the former exceeds the latter.


My answer to the features question is contained in AJ: anything that contributes to CFI and COE is a relevant feature. While the resulting account is no doubt more complex than any simple invariantist norm, I believe that it has the advantage of capturing more cases correctly. I have already pointed out that current invariantist norms have trouble with mundane cases like Kickoff. In closing, I would like to show how AJ gives us a recipe for creating further troublesome cases for classical invariantist proposals: cases in which CFI exceed COE because considerations of urgency raise CFI. According to AJ, as long as CFI exceed COE, it will still be rationally permissible to treat the relevant proposition as a reason. But one can construct cases in which this holds, and yet one does not meet the demands of the invariantist proposals (e.g. knowing, having knowledge-level justification, or being justified in believing that one knows). Here is an example, inspired by Anderson (2015):

Wounded Hiker
You are hiking in a remote area with a friend, who, all of a sudden, stumbles and suffers a deep cut to his upper leg, causing serious bleeding. Luckily, you are a doctor and you have a needle in your bag to stitch him up. However, as you remove the needle from the package in your bag, you realize that the packaging is already slightly torn. So you realize that the needle might not be sterile and using it for stitches might lead to a serious infection. However, considering the prospect of your friend bleeding to death, you decide that your degree of justification for believing that the needle is sterile is good enough to treat it as a reason to stitch your friend’s wound.
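The contrast can be made explicit in a hedged sketch of how Wounded Hiker comes out under AJ as opposed to an invariantist knowledge norm. The cost magnitudes and the `knows_needle_sterile` flag are stipulations for illustration, not part of the case as stated:

```python
def aj_permissible(cfi: float, coe: float) -> bool:
    """AJ: permissible to treat p as a reason iff CFI and COE are
    balanced or CFI exceeds COE."""
    return cfi >= coe

def knowledge_norm_permissible(knows_p: bool) -> bool:
    """A classical invariantist norm: permissible iff one knows that p."""
    return knows_p

# Wounded Hiker, with invented magnitudes: further inquiry risks the
# friend bleeding to death (very high CFI), while error risks a serious
# but treatable infection (high, yet lower, COE).
cfi, coe = 9.0, 6.0
knows_needle_sterile = False  # the torn packaging defeats knowledge

print(aj_permissible(cfi, coe))                          # contextualist verdict
print(knowledge_norm_permissible(knows_needle_sterile))  # invariantist verdict
```

On these stipulations, AJ licenses treating “the needle is sterile” as a reason, while the knowledge norm must find a rational fault, mirroring the diagnosis of the case in the surrounding text.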

In Wounded Hiker, given that you notice the tear in the packaging, you do not know that the needle is sterile; you are neither justified in believing that you know, nor do you have knowledge-level justification. Nonetheless, it seems rationally permissible to treat the proposition as a reason to stitch your friend's wound. Therefore, the case is another counterexample to the classical invariantist proposals I have already discussed.

Let me make clearer the way in which a case like Wounded Hiker (or Kickoff in Chapter 2) is a counterexample to the invariantist proposals. Proponents can claim that the reason in such cases is what one may call a probabilistic reason. The relevant reason is not that the needle is sterile, but rather that the needle is sufficiently likely to be sterile. And that is something that you know, or have knowledge-level justification to believe, or are justified in believing that you know. I concur that this is a strategy to account for similar cases, but it does not account for Wounded Hiker and misses the point I want to make with this case. Wounded Hiker specifically features a reason that is not a probabilistic reason. For this proposition, one does not satisfy any of the proposed invariantist conditions. Nonetheless, there is intuitively no rational shortcoming in treating it as a reason. This is what the last sentence of the case is meant to make explicit. The problem for invariantism is that it cannot endorse this verdict. Instead, it must find a rational fault where intuitively there is none. In this sense, invariantism seems extensionally incorrect.

My contextualist account can handle Wounded Hiker. Even with the tear in the packaging, you still have some degree of justification for believing that the needle is sterile, perhaps because the tear could only have happened very recently or is so minimal that it is not overly likely that the needle has been contaminated. It also seems that CFI exceed COE about whether the needle is sterile. Gathering further evidence is extremely costly given the dire situation that your friend is in. Granted, the risk of giving him a serious infection is not desirable, but it is still obviously better than letting him bleed to death. Thus, in this situation, CFI exceed COE. Since AJ is fulfilled, one's degree of justification is adequate, and you can permissibly treat the relevant proposition as a reason. This supports my claim that contextualism does better than classical invariantism when it comes to extensional correctness. While simplicity might be a theoretical virtue, we should never put it above extensional correctness. Thus, in the absence of a simpler invariantist norm that can capture just as many cases, we should prefer the contextualist account given here. I hope I can also make good on the worry I mentioned at the beginning of this chapter.
While I have not given a straight answer to the question of which epistemic status makes it permissible to treat a proposition as a reason, I hope that my contextualist account provides enough detail to overcome the potential disappointment a contextualist answer might initially cause. Finally, while my account implies that there is no single epistemic status that makes it rationally permissible to treat a proposition as a reason, we can nonetheless derive an interesting noncontext-sensitive principle from AJ. AJ implies the following necessary condition on which propositions one can permissibly treat as reasons:

Rational Permissibility (RP) If it is rationally permissible to treat p as a reason for action in a deliberative context (DC), then, in DC, the costs of error regarding p do not exceed the costs of further inquiry into whether p.


RP follows from AJ because one’s degree of justification cannot be adequate if COE exceed CFI, and rational permissibility requires an adequate degree of justification. RP is an important consequence of my contextualist account and will play a significant role in my case for pragmatic encroachment in epistemology in Chapter 5, where I also return to some objections to RP and elaborate on the notions of CFI and COE.


Chapter 4

Knowledge and Seemingly Risky Actions

We started with the idea that, at least in many ordinary cases, practical rationality does not come for free epistemically. However, this leaves room for the possibility that at some point one has paid one's epistemic dues, regardless of context. The epistemic price for practical rationality may differ with context, but it cannot go infinitely high. Speaking less metaphorically: It is compatible with contextualism that there is a single epistemic state that suffices for rational permissibility in all contexts.

That there is a single state that must suffice in all contexts might seem obvious. One might think that one's degree of justification cannot go infinitely high. Therefore, whatever the highest degree of justification is, this is an epistemic stopping point. If there is such an epistemic stopping point, then it must suffice for rational practical reasoning, as it is simply not possible to go beyond the stopping point. But this line of thinking is less persuasive if one considers that there could be doubt about whether one has reached the stopping point. Suppose that knowledge is the epistemic stopping point. Then we can easily imagine that there are situations in which one knows, but one could ask oneself whether one knows that one knows. And, in principle, it will always be possible to go for a higher iteration of knowledge. Once the possibility of endless iterations is on the table, it is no longer obvious that there really are epistemic stopping points. And thus it is no longer obvious that there must be a single epistemic state that suffices for permissible practical reasoning in all contexts.

Even if there is no epistemic stopping point, we nonetheless have reason to assume that a single epistemic state suffices for rational permissibility in all contexts. I will ultimately argue that this state is having knowledge-level justification.
I will arrive at this conclusion indirectly, by first defending the sufficiency direction of the knowledge norm for practical reasoning (KRS) in Sections 4.1, 4.2, 4.3, and 4.4 against popular counterexamples. In Section 4.5, I consider why our intuitions about the counterexamples are misleading. In Section 4.6, by running the subtraction argument presented in Chapter 2, I argue that knowledge-level justification for believing p suffices in all contexts for rational permissibility, and I point out how this view still vindicates part of the knowledge-first project.

4.1 Is Knowledge Sufficient for Treating a Proposition As a Reason for Action?

To set the stage, let us review a few ideas. I mentioned that several authors endorse a principle along the following lines:

Knowledge–Reasons Sufficiency (KRS) Knowing that p is sufficient for it to be rationally permissible to treat p as a reason for an action A, provided p is relevant to A-ing.1

KRS specifies which epistemic standing is sufficient for it to be rationally permissible to treat p as a reason. Despite knowing that p, it can be unfitting to treat p as a reason. I know that penguins waddle. But it is inappropriate to treat this proposition as a reason to buy stocks for my portfolio, as this proposition is practically irrelevant. Thus the proviso that p is relevant to A-ing. To keep things simple, all the cases discussed feature only practically relevant propositions so that the only concern is with the agent's epistemic position toward these propositions. Furthermore, to stay clear of a problem discussed in Ichikawa (2012), I will only consider cases in which just a single proposition is relevant to the assessment of the rationality of an action.2

KRS seems to be an intuitively plausible principle. Its proponents are sometimes content with pointing out that it is unclear why knowledge would not suffice to treat something as a reason – why ask for more? It also sounds odd to say "I know that p, but I must inquire further into whether p before I may rely on p in my practical reasoning." Following an "innocent until proven guilty" approach, it seems fine to adopt KRS.

1. See Hawthorne and Stanley (2008: 578) and Fantl and McGrath (2009: 66) for proposals that are, for current purposes, identical to KRS since they hold that knowledge of a proposition is sufficient for permissibly treating it as a reason for action.
2. Ichikawa (2012) argues that some cases of irrational actions could plausibly be said to involve several propositions, which undermines their claim of being clear counterexamples to KRS. Even when it is rationally permissible to treat p as a reason, p could be outweighed by other reasons. But that does not indicate that one's epistemic standing toward p was not sufficient to treat p as a reason. Cases in Lackey (2010) fall prey to this issue, which is why I have not included them in my discussion. For the same reason, I omit the Survey Case discussed in Roeber (2018) and Beddor (2020), about which Beddor writes that "you have an additional reason," suggesting that the case involves more than just one reason.


While things are not as simple as that, I will largely follow the "innocent until proven guilty" approach in defending KRS – in the following sense. I defend KRS against a number of counterexamples from the literature; none of them amounts to a compelling case against KRS. I will not provide a principled argument in favor of KRS, as Fantl and McGrath (2009) attempt. Brown (2012) challenges one of the principles Fantl and McGrath use by citing the very same case that I will reintroduce next. Since the case is also used against KRS directly, my objection to this case might also be of relevance for Fantl and McGrath's approach in arguing for KRS via other principles.

The popular surgeon case from Brown (2008)3 appeared in Chapter 3, but since my focus here is a different one, a repetition might be useful.

Surgeon Case A surgeon is about to operate. She saw the patient in the morning when it was decided that they had to remove the left kidney. Yet she checks the patient's records before she begins the procedure. A student notices this and the following dialogue takes place:

student: I don't understand. Why is she looking at the patient's records? She was in the clinic with the patient this morning. Doesn't she even know which kidney it is?
nurse: Of course she knows which kidney it is. But imagine what it would be like if she removed the wrong kidney. She shouldn't operate before checking the patient's records. (See Brown (2008: 176)).

The nurse's comment suggests that knowledge of a proposition is not always sufficient for it to be permissible to treat it as a reason, hence KRS must be false. The case relies on two intuitive judgments. The first is that the surgeon indeed knows which kidney is to be removed. Let us call this the knowledge intuition. There might be room for doubt about it. Pragmatic encroachment holds that what is at stake can influence whether one knows.4 A proponent of pragmatic encroachment might possibly deny that the surgeon knows that it is the left kidney, because of the abnormally elevated stakes. Pragmatic encroachment is a controversial thesis, but as I said in Chapter 3, I will ultimately argue for it. However, at this point I want to be clear about the dialectic. I will simply grant Brown the knowledge intuition and I will not appeal to pragmatic encroachment in order to defend KRS. Hence KRS, and any principle derived from it, can serve as a neutral premise in an argument for pragmatic encroachment.5

3. The case also appears in Brown (2012: 45). Locke (2014) defends the case and he relies on it in Locke (2015) in an argument for an independent proposal of an epistemic norm for practical reasoning. Fassio (2017) and Gerken (2015) seem to take it as a valid challenge. Similar cases that I will not discuss here are put forward in Neta (2009: 688) and Anderson (2015: 7). Schroeder (2012: 269) at least entertains the possibility that KRS is false. Skepticism about KRS seems to be widespread. I cannot discuss all the individual cases, but my defense of KRS is intended to cover all of them.
4. The main proponents of KRS endorse pragmatic encroachment on knowledge; see Stanley (2005) and Fantl and McGrath (2009). Pragmatic encroachment is at least entertained in Hawthorne (2004).

The second intuitive judgment is that starting right away with the procedure is not a rationally permissible action and that it is obligatory to check the patient's records. Let us call this the impermissibility intuition. It matters that readers not only think that double-checking is acceptable or somehow intelligible but that it is rationally required. Otherwise the case poses no threat to KRS. Only if it is rationally impermissible to start the procedure right away can it be that the surgeon's knowledge of the relevant proposition is not sufficient for it to be rationally permissible to treat it as a reason. As Brown (2008: 177) puts it: "although the relevant evaluations explicitly concern action, it seems that they reflect claims about the underlying reasoning. For instance, the relevant intuition in SURGEON is that the surgeon should not rely on the premise that it is the left kidney which is affected in practical reasoning." I agree with Brown that we can shift freely here between evaluations of actions and what one may treat as a reason. This is because of the restriction to cases in which only a single consideration is relevant to the rationality of the action. Thus, if an action is rational, then it was rationally permissible to treat the consideration acted upon as a reason. I will sometimes shift between these evaluations as well, as this will not undermine the arguments.6 While I grant Brown the knowledge intuition, I will scrutinize the impermissibility intuition.
To back it up, one should be able to articulate why the surgeon cannot rationally act on her knowledge. The best candidate for an explanation of the impropriety seems to involve the consequences of error and the fallibilist nature of knowledge.7 Fallibilism holds that one can fallibly know that p, even when one's evidence does not entail that p and hence there is a chance that not-p. This is not an epistemically significant chance of not-p that would undermine having knowledge. Yet this chance could become significant in certain contexts. This is what happens in extraordinary cases like the Surgeon Case. Surely, the surgeon only fallibly knows that the left kidney needs to be removed – she might have mixed up her patients. Here, a small chance that not-p that does not undermine one's knowledge that p must be taken into account in order for it to be rationally permissible to treat p as a reason.

While this looks like a very plausible explanation, I will argue against it in Sections 4.2 and 4.3. It needs two sections because there are two equally plausible readings of Brown's case. To demonstrate this, I suggest that we switch from the third-person perspective of the nurse to how the surgeon herself conceives of her situation. This switch changes nothing essential, as the case exploits no difference in the information available to the nurse and to the surgeon. One could assume that the surgeon has a thought akin to the nurse's utterance.

Surgeon-Belief (SB) Of course, I know that the left kidney needs to be removed, I just saw the patient this morning. But it would be disastrous to remove the wrong kidney. I shouldn't operate before checking the patient's records again.

5. Another way to doubt the knowledge intuition is to question whether the surgeon has the relevant belief, perhaps because it would be natural to suspend judgment in this situation. Thus, the surgeon fails to know. Thanks to Sean Neagle for bringing this possibility to my attention, though I shall set it aside and grant Brown the knowledge intuition.
6. It is important to be clear on this, as the exchange on this point between Ichikawa (2012) and Locke (2014) demonstrates. Concerning this point, I clearly side with Locke (2014), who also points out that the relevant intuition in the Surgeon Case is the impermissibility intuition.
7. I believe that an infallibilist view of knowledge makes it even harder to maintain the impermissibility intuition as, given infallibilism, there is no chance of error.

But I gather that some envision the case quite differently, where the surgeon may have doubts about whether she knows.

Surgeon-Nonbelief (SNB) Of course, I just saw the patient this morning. I believe that the left kidney needs to be removed, but do I really know that? It would be disastrous to remove the wrong kidney. I shouldn't operate before checking the patient's records again.

While SB and SNB differ in whether the surgeon believes that she knows, the surgeon does know in both scenarios. Readers who endorse the KK principle, which holds that knowing that p entails knowing that one knows that p, might want to reject SNB. I will assume that it is possible to know that p without knowing that one knows that p. This assumption is congenial to Brown and only poses an additional challenge for my defense of KRS. In Sections 4.2 and 4.3, I will argue that neither way of envisioning the scenario provides a compelling reason to reject KRS.


4.2 The Factivity of Knowledge and Epistemic Possibilities

In this section, I argue that under the assumption of SB, the impermissibility intuition cannot be maintained. I first outline a tempting argument that is ultimately unsuccessful. However, there is another argument in the vicinity that delivers the result that the impermissibility intuition cannot be maintained.

Here is a tempting argument based on the felicity of certain utterances and the semantics of epistemic modals such as "might" and "possible." In SB, the surgeon believes she knows, yet recoils from starting the procedure after thinking of the potentially disastrous consequences of an error. But these consequences will only obtain if the surgeon is wrong about which kidney needs removing. The surgeon can only reasonably worry about them obtaining if it is reasonable to believe that it is possible that it is not the left kidney that should be removed. Here then is the problem. For the surgeon to reasonably worry, it must be possible for her to know that the left kidney is diseased and, simultaneously, to deem it possible that it is not the left kidney that is diseased. Put schematically, it would mean that the following could be true of the surgeon: S knows that p, but it is possible that not-p. Such concessive knowledge attributions (CKAs)8 sound very odd. Even worse than this oddness, on a prominent semantic theory about epistemic uses of "possible," CKAs are false. According to this semantic account, an utterance of "it is possible that not-p" by S is true iff S does not know that p (see DeRose 1991). If this account is right, then whenever the knowledge claim in the first conjunct is true, the claim that it is possible that not-p is thereby automatically falsified. Likewise, if the second conjunct is actually true, then the knowledge claim is false. Consequently, we should not draw any conclusions from the Surgeon Case because it rests on a false CKA.

While this argument is enticing, we should not put much weight on it. The literature on epistemic modals and CKAs does offer alternatives that vindicate SB and work in Brown's favor. Worsnip (2015b) argues that CKAs are merely unassertable in the sense that they become false when asserted. Dougherty and Rysiew (2009) argue that CKAs can be true and that they merely convey, pragmatically, incompatible propositions. Brown could adopt either position. Adopting Worsnip's proposal, she might say that the relevant CKA is unassertable, but that the relevant claim in the background is true. Adopting Dougherty and Rysiew's proposal, she could say that the relevant CKA merely implies, pragmatically, something false, while it is nonetheless true. Either proposal can avoid the criticism that the Surgeon Case, understood along the lines of SB, relies on a false CKA.

8. The term "concessive knowledge attribution" is due to Rysiew (2001).

Yet I think another segment of the debate about epistemic modals that I touched upon in Chapter 2 can be employed against the SB reading of the Surgeon Case. The problem concerns the factivity of knowledge. Factivity, and even the mere assumption of factivity, clashes with certain epistemic modals that the Surgeon Case relies on. Yalcin (2007) points out that an utterance of "it is raining, but it might not be raining" sounds odd and contradictory, given a reading of "might" as an epistemic modal.9 The oddity is not merely pragmatic and thus unlike Moore's paradoxical sentences of the form, "It is raining, but I do not believe that it is raining." The oddity of Moore's paradoxical sentences disappears in certain embeddings. For example, "Suppose it is raining and I do not believe that it is raining" sounds fine. However, "Suppose that it is raining and that it might not be raining" still sounds odd and contradictory. The problem is that one simply cannot coherently entertain the thought that it is raining while at the same time entertaining the thought that it might not be raining. Therefore, none of the defense moves pertaining to CKAs mentioned earlier apply either. The impropriety concerns not merely an utterance of a thought, but the underlying thought.10 We can take this as a cue that something about the state of mind of the surgeon in SB is actually incoherent. The incoherence arises due to the factivity of knowledge. We can illustrate this with the following dialogue.
Brown's surgeon and an imaginary interlocutor, let's call her Alice, discuss whether the surgeon may rationally treat the proposition that the left kidney is diseased (LKD) as a reason for action and thus start right away with the procedure.

surgeon: I know LKD. But it might be that not-LKD. So it is not permissible to act on LKD.
alice: You know that LKD.
surgeon: Yes.
alice: So it is true that LKD.
surgeon: Yes.
alice: Then what is the problem?
surgeon: Even though it is true that LKD, it might be that not-LKD. If it is the case that not-LKD, then acting on LKD would have disastrous consequences!
alice: But you said that it is true that LKD.
surgeon: Yes.
alice: Then I don't see the problem.

9. In section x of Philosophical Investigations II, Wittgenstein draws attention to the "misbegotten sentence 'It may be raining, but it isn't'."
10. See also Reed (2013: 57) for the shared assessment that such thoughts "reflect a defective state of mind."

The dialogue brings out the oddity of believing that one knows while worrying about the consequences of being wrong about that which one claims to know. The problem is that if one believes to know that p, then despite all the chances that not-p, one must take p to be the case. However, if one takes p to be the case, it is incoherent to entertain the possibility that not-p and to worry about what would happen if not-p. This would commit one to an incoherence of the sort "p, but it might be that not-p."11 The previous dialogue suggests that the Surgeon Case, if envisioned as an instance of SB, exhibits such incoherence.

A problem also arises from the reader's perspective if the Surgeon Case is supposed to work along the lines of SB. If one judges that the surgeon cannot act right away, one must hold that the surgeon must consider that it is possible that not-LKD. But one thereby demands that the surgeon entertain an incoherent thought of the form "LKD, but it might be that not-LKD," as the surgeon believes that she knows LKD. Assuming that one cannot be rationally required to entertain incoherent thoughts, this suggests that the underlying intuition that led to this demand is not to be trusted. It also undermines the purported explanation of the impermissibility intuition for cases like SB.

Perhaps other cases fare better? Reed (2010) offers a kind of betting case that could be used against KRS. Similar betting cases can also be found in Brown (2008: 176).

Asymmetric Reward/Punishment The researcher of a psychological study asks you questions about Roman history – a subject with which you are well acquainted. Every correct answer is rewarded with a jelly bean; every incorrect answer is punished with an extremely painful electric shock. There is neither reward nor punishment for failing to give an answer. The first question is: When was Julius Caesar born? You are confident, though not absolutely certain, that the answer is 100 BC. You also know that, given that Caesar was born in 100 BC, the best thing to do is to provide this answer, as then you will be one jelly bean richer!12

11. I am not saying that it is impossible to imagine that not-p, or to suppose that not-p, for example, to engage in a reductio argument, when one believes that one knows that p. I am merely claiming that it is incoherent to have a state of mind that commits one to p, while another state of mind contradicts this commitment. No incoherence arises merely because one imagines that not-p or argues for p by reductio.

If one knows that Caesar was born in 100 BC, then, according to KRS, it is appropriate to treat this as a reason to take the bet. Since there are no other relevant considerations, it is rationally permissible to take the bet. But this is the wrong result, Reed believes. Given the meager reward and the potential punishment, it is not rational to take the bet. Therefore KRS is false.

I think it is natural to interpret this case as an instance of SB. Reed writes that one remains confident in Caesar being born in 100 BC, which I take to signal that one believes that one knows. Further details corroborate this. Reed backs up his case by pointing to the utterance "I do know that Caesar was born in 100 BC, but it is not worth risking a shock." It is a natural assumption that anyone honestly uttering it believes that they know. This also affirms my warning about the tempting argument. It is undeniable that we sometimes say such things. But it is not clear that such utterances are infelicitous or express false CKAs.

However, this is no reason to believe that these cases fare any better. One could easily show that such utterances express incoherent attitudes by constructing a dialogue parallel to the one between Alice and the surgeon, and thus raise the same issues. I will leave it to the reader to do this, should the need arise. What I want to do instead is consider whether one can reasonably be accused of any rational shortcoming if one takes the bet. If the provision of a justification of this accusation proves difficult, then the impermissibility intuition is on shaky ground. The following dialogue between Ian, who actually took the bet and now faces criticism from Tim, suggests as much:

timid tim: Wow, you took such a risky bet for one jelly bean! That's insane!
intrepid ian: There was no risk.
timid tim: Sure, you could have gotten the shock.
intrepid ian: I could not have. I know that Caesar was born in 100 BC. That means that Caesar was born in 100 BC, and thus it was perfectly safe to take the bet.
timid tim: OK, you do know, but you could have gotten the shock. That was irrationally risky.
intrepid ian: You keep saying that, but I do not see why that would be.

12. Case based on Reed (2010: 228).


I think Tim’s reaction is fairly natural. Most of us would probably shy away from the bet and be bewildered by anyone taking it. But we should not leap to conclusions about rational permissibility. If there is a person like Ian, who takes the bet precisely because they know and they take themselves to know, then it seems that we lack a justification for accusing them of any rational failure. The argument based on fallibilism about knowledge cannot succeed, as even fallibilists must maintain that it is incoherent to entertain thoughts of the form “p, but it might be that not-p,” and hence Ian cannot be rationally required to have such a thought. At least, it is not obvious what Ian’s rational failure is if he takes the bet. Ian’s action might be unusual, but that does not make it irrational. In fact, Ian might exhibit a character trait that we usually value. Consider the idiom “put your money where your mouth is.” We criticize people when they fail to do so in ordinary situations. But Ian is not like this. He is willing to put his money where his mouth is, even in a situation where most of us would rather not.13 Perhaps one can come up with a good reason for the charge of irrationality that Tim could bring forward, but it is not obvious what that reason could be. In regard to risk, there is one more consideration to support the rational permissibility of Ian’s action. Knowledge is commonly thought to include a safety condition, understood as safety from error in a very similar situation.14 Thus, by conceding that Ian’s belief amounts to knowledge, one concedes not only the truth of his belief but also that it could not easily have been false. This makes the assumption that Ian exposes himself to a serious risk in taking the bet even more puzzling. To be clear: I am not saying that risk averseness is irrational or that I endorse taking bets like that here. 
My point is simply that if one concedes to others that they know that p, it seems there is no obvious reason to charge them with irrationality if they treat p as a reason for action.

Let’s take stock. I argued that the crucial impermissibility intuition cannot be maintained for the SB version of the Surgeon Case. The main problem is that to explain the impermissibility intuition as suggested, one must hold that one is rationally required to hold an incoherent thought of the form “p, but it might be that not-p.” Since it is implausible that there is such a requirement to be incoherent, the impermissibility intuition does not pass scrutiny and should be discarded. If these are the only cases that speak against KRS, then they do not provide strong evidence against KRS.

13 Those not swayed by my dull dialogues might want to consider the short story “Taste” by Roald Dahl. The moral of this story is that if one knows and believes so, even if one came to know in fraudulent ways, one can quite reasonably take bets even when those bets come with potentially huge losses. Thanks to Luke Davies for bringing this story to my attention.
14 See, for example, Williamson (2000: 147).

4.3 Fallibilism and Nonluminosity

One might hold that real trouble for KRS arises when the Surgeon Case is modeled along the lines of SNB, which is one example of a broader pattern. There are at least three specific instances of this pattern. Knowing that the left kidney needs to be removed can be conjoined with any of the following three attitudes to create what I call, with a nod to Williamson (2000), a nonluminous case of knowledge, a case in which one fails to know that one knows.15

N-KK    I do not know that I know that the left kidney needs to be removed.
N-JBK   I am not justified in believing that I know that the left kidney needs to be removed.
N-VPK   Given my evidence, it is very improbable that I know that the left kidney needs to be removed.

In such nonluminous cases, for all that one knows, it is genuinely possible that one fails to know. One might say this is the relevant kind of fallibility that makes it inappropriate to treat a proposition as a reason even if one does know it. This is not an explicit feature of Brown’s original case, but it is easy enough to come up with such variations, as the introduction of SNB shows. Clearly, the argument of Section 4.2 cannot be employed here. Since one is not committed to knowing that p, the factivity of knowledge does not commit one to p and thus no incoherence arises if one were to assume that possibly not-p. Hence, I will pursue two different strategies. I will argue that nonluminous cases can be used to undermine any reasonable alternative to KRS. If nonluminous cases are a problem, they are a problem for every reasonable alternative and thus they provide no reason to abandon KRS specifically. In light of this, one may assume that rational permissibility is a luminous condition. However, this is an extremely strong claim for which there is currently no argument on offer.

15 The first two are less controversial examples of a failure of the KK principle; the third example refers to Williamson’s claim that there can be instances of very improbable knowing (see Williamson 2014).


The nonluminosity strategy is extremely broad. It can be used against any other epistemic state that one might reasonably offer as a replacement for KRS, as I will demonstrate by discussing four possible replacements.

We might assume that Cartesian certainty, the absence of even the slightest doubt, is an epistemic state stronger than knowledge and actually luminous. But kidney removal is hard enough. We cannot reasonably expect a surgeon to solve the problem of Cartesian skepticism prior to performing such a procedure.

There might be epistemic states that are stronger than knowledge, but not as strong as Cartesian certainty. Let us call such states knowledge+. It is not self-evident why we should assume that knowledge+ is luminous. It rather seems that we will always be able to construct cases in which one’s epistemic position suffices for knowledge+, but it will not be luminous that one has knowledge+.

Iterations of knowledge face the same issue. One might assume that the surgeon must know that she knows that the left kidney is diseased – as N-KK suggests. But since the whole argument assumed that knowledge is not luminous, we will be able to construct parallel cases in which one’s higher-order knowledge is not luminous, and this holds for any iteration of knowledge.

One might assume that the surgeon must be justified in believing that she knows that the left kidney is diseased – as N-JBK suggests. We need not turn to luminosity concerns to see that this is not a fitting replacement if the impermissibility intuition is driven by fallibilism. On ordinary accounts, justified belief is compatible with having a false belief. This allows for the possibility of cases in which one worries whether one’s justified belief that one knows p is true. In such cases, there remain certain error possibilities which, due to the potentially disastrous consequences of being wrong, must be considered, and they make it impermissible to treat p as a reason.
Therefore, the suggested replacement will be subject to similar counterexamples.

In light of the lack of a convincing replacement, one might consider a more radical move. One could hold that there is no single epistemic condition that makes it rationally permissible to treat p as a reason in all decision contexts. But that raises the question of which epistemic state makes it permissible in a given context to treat p as a reason, and of whether that state is luminous. If the answer to the latter question is “no,” then the nonluminosity strategy is entirely self-undermining: If it is successful against KRS, it is equally successful against any other principle or context-specific requirement. If nonluminosity is a problem for any view, then it is not a problem that demands abandoning KRS in favor of another view. If the answer to the latter question is “yes,” then the opponent of KRS is committed to the luminosity of rational permissibility. However, the argument for the nonluminosity of knowledge found in Williamson (2000) can be applied to other conditions, including rational permissibility. If the opponent of KRS accepts this argument, it is hard to see how they could endorse the luminosity of rational permissibility. Williamson’s arguments have received their fair share of criticism, and some of it, notably Berker (2008), is motivated by the idea that rationality is a luminous condition. An ultimate defense of KRS would require an argument that establishes that rational permissibility is not a luminous condition. I cannot give such an argument here. Instead, I want to offer reasons to resist the move to the luminosity of rational permissibility and to make its nonluminosity palatable.

First, the claim that rational permissibility is luminous is a very strong claim for which the current literature offers no argument. It is much stronger than, for example, an accessibilist theory of rationality, which has many proponents. Accessibilists like Audi (2001) assume that one always has access to the grounds of what makes it rational to believe p. But even if this thesis about access is true (which many dispute), it does not follow automatically that this access to the grounds is sufficient to be in a position to know that it is rational to believe p. For example, even if I have introspective access to grounds that make a belief rational, it does not automatically follow that this access is always reliable enough to suffice for knowing. For that, a separate argument is required; mutatis mutandis for rational permissibility. This is the burden of establishing that rational permissibility is luminous, which, to the best of my knowledge, nobody in the current literature has shouldered.
Second, it is not as if accepting the nonluminosity of rational permissibility amounts to an obvious conceptual confusion.16 Consider the following utterance about W, who is in a situation described earlier in N-VPK, a case of very improbable knowledge, which is perhaps the most extreme case of nonluminous knowledge.

Given W’s evidence for p, it is very improbable that he knows that p, but since he knows, it is rationally permissible for him to treat p as a reason.

16 Berker (2008: 1) makes a charge along those lines. He writes that to suppose that one can be irrational without being able to tell on reflection that this is so is a strained use of the word “irrational.”


This utterance is no doubt a bit of a mouthful. But for me, at least, it does not elicit the reaction that somebody uttering it is obviously conceptually confused. If the assumption of nonluminosity amounts to no obvious conceptual confusion, then we are at least not immediately required to assume luminosity.

Third, accepting the nonluminosity of rational permissibility does not have the damning consequences one might think it has. Let us consider N-VPK explicitly. I agree that a surgeon’s utterance “Given my evidence, it is very improbable that I know that the left kidney needs to be removed” screams for a reply like “Don’t operate right away, double-check!” But this is not indicative of the falsity of KRS. All that KRS claims is that it is rationally permissible, even in a case like N-VPK, to treat what one knows as a reason for action. It does not claim that one is required to do so. I agree, especially in light of the utterance just mentioned, that double-checking is entirely rational. But KRS makes no claim to the contrary. Only if combined with the claim that, in every choice situation, there is only one rationally permissible course of action, in this case double-checking, would there be an issue for KRS.17 But accepting the nonluminosity of rational permissibility does not commit one to the perhaps shocking verdict that the surgeon is rationally required to operate right away when, in her view, given her evidence, it is very improbable that she knows which kidney needs to be removed. I will return to this last point in Section 4.5.

Let us take stock of this section. Opponents of KRS might modify Brown’s Surgeon Case so that it becomes a case of nonluminous knowledge. This strategy faces the following dilemma. On the one hand, it undermines itself, because it can be leveled against any alternative suggestion: If nonluminosity is a serious problem, then it is a problem for any alternative and so these cases provide no reason to abandon KRS specifically.
Alternatively, one could commit to the luminosity of rational permissibility. This is, however, a very strong, nonobvious claim for which we can find no convincing argument in the current literature. In sum, we can conclude that these variations of the Surgeon Case fail to provide a strong reason to abandon KRS.

17 See Greco and Hedden (2016) for a uniqueness condition on rational action. However, even accepting their view and holding that double-checking is the only rational option does not yet impugn KRS. Here is a cursory reply. In a case like N-VPK, there is more than one relevant consideration that has a bearing on what to do: One knows which kidney it is; one also knows that it is unlikely that one knows which kidney it is. This second consideration could be the weightier reason that makes it rationally obligatory to double-check. But as mentioned in Section 4.1, cases relying on two distinct considerations that have a bearing on what it is rational to do are not suitable to argue against KRS.

Knowledge and Seemingly Risky Actions


4.4 Knowledge, Expected Utility Theory, and Credence 1

In this section, I discuss a variant of the Surgeon Case that employs the machinery of expected utility theory (EUT). As I suggested in the Prologue and as I will mention again later, the relation between EUT and a reasons-based framework is not obvious. Hence it is not obvious that one can refute KRS by relying on examples that employ the notion of EUT. Still, I find it worthwhile to engage with these examples and to explore what one can say in response to them without rejecting their applicability or EUT outright.

Locke (2014: 86) sets up the Surgeon Case in terms of EUT. His variation is meant to show that the surgeon cannot act on her knowledge because she thereby fails to maximize her expected utility. The way he presents the case boils down to the following decision table (Table 4.1).

Table 4.1 Surgeon Case in EUT

                       LKD            Not-LKD
                       Cr. 0.99       Cr. 0.01
  Operate now          1,000,000      −1,000,000
  Double-check           999,999         999,999

At the top of the table are the two possible states of the world; in the row below are the credences that Locke specifies the surgeon to have in them. Locke stipulates the following for each possible outcome. Operating now when the left kidney is diseased has the highest utility, while operating now has the smallest utility when the left kidney is not diseased. Double-checking whether it is the left kidney receives slightly smaller utility than operating now because we might say that it is slightly better to start the surgery as soon as possible.18 Double-checking has the same utility if it is not the left kidney because then one will ultimately remove the kidney that is supposed to be removed. If we accept all of these stipulations, then double-checking maximizes expected utility.19 Thus EUT supports the verdict that the surgeon’s knowledge is not sufficient for it to be rationally permissible to treat the proposition that the left kidney is diseased as a reason to operate now.

18 Some might not accept this utility stipulation and think that operating now and double-checking given that LKD should have the same utility. But even with this alternative stipulation in place, KRS is not threatened. As we will see, the problematic stipulation concerns the credence in LKD. With the alternative stipulation, double-checking and operating now both have the same expected utility when we assign credence 1 in LKD. Given that if two acts have the same expected utility, both are rationally permissible, KRS could still be upheld.
19 EU(operate now) = 0.99 × 1,000,000 + 0.01 × −1,000,000 = 990,000 + (−10,000) = 980,000. EU(double-check) = 0.99 × 999,999 + 0.01 × 999,999 = 989,999.01 + 9,999.99 = 999,999.

One problem is that this case relies on a contested credence ascription. Locke stipulates that the surgeon has a credence of 0.99 when she knows. But Hawthorne and Stanley, two of the main proponents of KRS, subscribe to an idea from Williamson (2000). They adopt a framework in which credences express not merely subjective probabilities, but epistemic probabilities, that is, the probability of a proposition given what one knows. In this framework, what one knows has epistemic probability 1 (see Hawthorne and Stanley 2008: 588) or, as I will say, credence 1, so that K → Cr 1. It is this reading of credence that is relevant to rational decision making. Locke’s case can only be constructed when having knowledge is compatible with a credence lower than 1. However, if K → Cr 1 is true, then if the surgeon knows, she has credence 1 in LKD and 0 in not-LKD. The resulting expected utility calculation would come out in favor of operating now. Thus, if the surgeon truly knows, she can operate now, which supports the view that knowledge is sufficient for it to be rationally permissible to treat a proposition as a reason.

In a final footnote, Locke (2014: 89) acknowledges this controversy. He also concedes that he cannot settle the issue, but holds that anybody with fallibilist leanings should be drawn to deny K → Cr 1.20 Like Locke, I concede that I cannot settle the issue whether K → Cr 1, but I think I can provide good reasons that disarm these counterexamples based on EUT. Let us begin with Locke’s claim that fallibilism about knowledge favors a rejection of K → Cr 1. Since this is merely claimed in a footnote, here is what I take the underlying worry to be.
Since we often only fallibly know that p, because there is a chance that not-p, we can still improve our epistemic position when we know that p. Thus, we should not accept K → Cr 1 because we have no more room to capture these improvements in a framework that assigns values only between 0 and 1. To maintain this concern, one must assume that “Cr 1 in p” means that one’s epistemic position toward p cannot be improved anymore. But there is reason to assume that it does not and could not have this meaning in standard formal frameworks. In standard formal frameworks (as well as in Hawthorne and Stanley’s preferred alternative), one is required to assign credence 1 to all logical truths. Setting aside controversies about logical omniscience generally, it seems that we can always improve our epistemic position toward logical truths just as we can for ordinary propositions. Suppose I competently prove a logical truth p and thus have credence 1 in p. Nonetheless, my epistemic position toward p could still be improved, perhaps by having the proof confirmed by a colleague. Given that it is possible to improve one’s epistemic position toward p even when one has credence 1 in p, we should not reject K → Cr 1 merely because one’s epistemic position can still be improved if one knows.

20 There might be other reasons to accept K → Cr 1. Knowledge implies belief, and some, for example Clarke (2013), argue that belief is credence 1. I shall not pursue this defense strategy further.

I now turn to more general reasons to be suspicious of using EUT to draw conclusions about KRS. EUT does not involve the notion of a reason. Thus the approach clearly differs from the reasons-based framework on which KRS is based. Given that one need not assume that different accounts of rational action always reach the same verdicts, or that EUT has final authority, one might simply repudiate the applicability of the counterexample. I want to state clearly that I am not implicitly advocating the abolition of the EUT framework in favor of a reasons-based framework. I think that the idea that we use a variety of decision-making methods, depending on the decision problem we face, has a lot of appeal.21 Weisberg (2013) offers an admirable presentation of how EUT and a reasons-based framework can peacefully (and usefully) coexist. I cannot go into detail here. My point is simply that we should not hastily abandon principles that concern reasons just because they deem actions permissible that are impermissible according to EUT.

Even if one believes that different theories about rational action must always agree on what is rational, there is further reason for concern. Locke attempts to rebut a thesis about knowledge by using the numerical framework of credences.
This attempt can only succeed if we can adequately translate what it means to know into a numerical framework. But there are reasons for skepticism here. Just as we have good reason to be skeptical that more coarse-grained doxastic attitudes like belief can be reduced to credences, we have good reason to be skeptical that we can capture what it means to know in terms of credences.22 Knowledge has certain features that distinguish it from mere belief; for example, knowledge is factive, and if one knows that p, one could not easily have been wrong about p. But how can we capture that in a numerical framework? If we cannot, then proponents of KRS should simply reject counterexamples based on EUT. The EUT framework fails to capture some features of knowledge that can plausibly make a difference to how one can rationally decide. Cases in which one believes that one knows, and thus believes that one could not easily have been wrong, as for example in the betting dialogue in Section 4.2, clearly demonstrate that.

So the opponent of KRS had better suppose that an adequate translation is possible. But if a translation is possible, then K → Cr 1 seems to be the most plausible option. Assigning anything less than credence 1 to what one knows means that one will not even be able to track the difference between knowing and merely having a high credence. This is perhaps clearest when we consider lottery propositions. Orthodoxy holds that one cannot know that one’s lottery ticket is a losing ticket, but one can certainly have a very high credence that it is, for example, Cr 0.9999. But if we assign anything less than Cr 1 to propositions that we know, then our numerical framework can no longer capture this difference between knowledge and merely having a very high credence. To be perfectly clear: This consideration does not establish that K → Cr 1 is true. But it suggests that, given the general objective of the opponent of KRS, one is pushed toward accepting K → Cr 1. This, however, undermines the original counterexample based on EUT.

21 One idea that I find plausible, but which I cannot defend here, is that knowledge licenses the move away from an EUT decision procedure to a reasons-based procedure, as knowledge eliminates uncertainty.
22 See Ross and Schroeder (2014) or Jackson (2019) on the nonreducibility of belief to credence; or Friedman (2013) on skepticism about the reducibility of withholding to credence.

To summarize my discussion: I defended KRS against a variation of the Surgeon Case that relies on EUT. Combining KRS with the view that K → Cr 1 undermines the possibility of such cases. K → Cr 1 will remain a contested thesis, but I have outlined two alternative strategies. One can either deny the applicability of examples based on EUT to the reasons-based account of rational action of which KRS is a part.
Or one can deny that one can adequately translate what it means to know into a credence framework. In sum, Locke’s variation of the case provides no decisive reason to abandon KRS.
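Before leaving the EUT material behind, the arithmetic in footnote 19, and the way the verdict flips under K → Cr 1, can be checked with a short sketch. The utility figures are Locke’s stipulations from Table 4.1; the function names are mine, and exact rationals are used to avoid floating-point noise.

```python
from fractions import Fraction

def expected_utility(cr_lkd, u_if_lkd, u_if_not_lkd):
    """EU of an act, given one's credence that the left kidney is diseased (LKD)."""
    return cr_lkd * u_if_lkd + (1 - cr_lkd) * u_if_not_lkd

def eu_table(cr_lkd):
    """Expected utilities of both acts, with the utilities Locke stipulates in Table 4.1."""
    operate = expected_utility(cr_lkd, 1_000_000, -1_000_000)
    check = expected_utility(cr_lkd, 999_999, 999_999)
    return operate, check

# Locke's stipulated credence of 0.99: double-checking maximizes expected utility.
operate, check = eu_table(Fraction(99, 100))
assert (operate, check) == (980_000, 999_999)   # matches footnote 19

# Under K -> Cr 1, knowledge puts credence 1 in LKD: operating now wins.
operate, check = eu_table(Fraction(1))
assert (operate, check) == (1_000_000, 999_999)
```

As footnote 18 observes, if operating now given LKD is instead stipulated to be worth 999,999, the two acts tie at credence 1, in which case both are rationally permissible.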

4.5 Misleading Intuitions

At this point, I want to address the lingering question of why so many seem to share the impermissibility intuition on which the Surgeon Case relies. I will offer three potential (and not mutually exclusive) explanations that favor giving up this intuition. My aim here is modest. I do not intend to offer a definitive or best explanation of why the impermissibility intuition
is misleading. I merely offer considerations that assuage the worry that the impermissibility intuition might be too persistent to be given up.

First, the impermissibility intuition might be misleading because it is the result of a psychological mechanism called the availability heuristic. A simple example of this heuristic misleading us is our estimates of the frequency of causes of death. There is a robust tendency to overestimate rare causes of death, such as homicide, compared to more frequent causes, such as diabetes. Slovic et al. (1982: 465) write that one “particularly important implication of the availability heuristic is that discussion of a low-probability hazard may increase its memorability and imaginability and hence its perceived riskiness.” More recently, Harris et al. (2009) found that the severity of negative events leads people to assign them a higher probability than their objective probability. If we combine both of these insights, we have a plausible explanation of why our intuitions might be misleading. Note that all the counterexamples against KRS are high-stakes cases. If one errs, the consequences are quite dramatic – one removes the wrong kidney, or one receives a painful electric shock. But if there is a general tendency to overestimate the probability of negative events, especially when they can be vividly imagined, as is the case in our scenarios, then we should be careful when considering them. We might perceive the cases to be much riskier than they actually are given that the subject knows, and thus be led astray in judgments about rational permissibility. However, if my discussion of these cases is convincing, we can also come to see that these initial judgments are misleading. This sits nicely with the suggestion that the initial judgment is due to the availability heuristic; Nagel (2010) has pointed out that careful consideration can override initial judgments due to the availability heuristic.23

I must mention an important caveat of this explanation.24 If the impermissibility intuition is due to a misleading judgment that involves the availability heuristic, then one must wonder why the same mechanism leaves the knowledge intuition, which I granted, unaffected. If it is misleading to give too much weight to certain error possibilities, why wouldn’t we refrain from ascribing knowledge, as one might assume that the saliency of error possibilities can equally undermine knowledge ascriptions? An explanation based on the availability heuristic would not only explain the impermissibility intuition but would also suggest that people tend to deny knowledge. Therefore, this explanation is unfitting, as the cases are supposed to be such that the knowledge intuition holds. I am not sure how damning this caveat is. There might be room to argue that the availability heuristic can lead to the impermissibility intuition without undermining the knowledge intuition, because these intuitions are not sensitive to the same possibilities. But I admit that this is speculation on my part that I cannot back up with a principled argument. Therefore, I present two further possible explanations that do not run into this issue.

23 Thus my application of the availability heuristic stays clear of the critique Nagel levels against Hawthorne (2004) and Williamson (2005).
24 Thanks to Alexander Dinges and Mikkel Gerken for pressing me on this point.

Second, Levy (2016) argues that anxiety may deviantly cause behavior that goes against one’s beliefs. Plausibly, this explanation is also available when beliefs amount to knowledge. This may explain why we sometimes do not act in accordance with KRS. If we combine this with an assumption of protagonist projection, that is, putting ourselves into the position of another person, we can explain why the impermissibility intuition is misleading when we merely think about the cases.25 When we read the cases, we imagine ourselves in the position of the protagonist and actually feel some anxiety when we consider the disastrous outcomes. This anxiety does not undermine the knowledge intuition. However, this feeling of anxiety might suffice to tilt our judgment about the case toward the impermissibility intuition. But if our intuition is driven by anxiety, and we cannot support it through explicit reason-giving, then it is reasonable to assume that it is misleading.

Third, we are prone to mixing up two distinct evaluations of the protagonist. Our intuitions might concern the habit that operating right away manifests. According to the bad habit view, protagonists in the relevant cases exhibit a bad habit that will get them into trouble in the long run.26 This holds particularly for cases of nonluminous knowledge.
Disaster will not strike as long as one only acts on what one knows. But if one develops a habit of acting on knowledge that is not appreciably different from nonknowledge, one’s luck is bound to run out at some point and one will act on a nonluminous nonknowledge state. And that would be disastrous. We might explain the impermissibility intuition away because it is the result of a mix-up between the bad habit view and a judgment about whether something is permissible in a particular case. While we can agree that the suggested habit is bad, we need not hold that, in the particular cases, there is anything impermissible happening. Our intuition is misleading because we are mixing up the evaluation of a habit as bad with an evaluation of a particular action that is actually permissible.

Does the bad habit view otherwise pose a genuine problem for KRS? Why not hold that the risk of developing a habit with potentially disastrous consequences means that one should not always act on what one knows? I doubt that the risk of cultivating a bad habit amounts to an objection to KRS. First, the badness of the bad habit arises because of all the cases in which one fails to know. But KRS makes no claim about cases in which one fails to know. Second, KRS is a claim about what it is permissible to treat as a reason; it imposes no requirement. KRS does not require us to treat p as a reason when we know that p but it does not seem to us that we know that p. Thus KRS does not require the adoption of behavior that could lead to the development of a bad habit. Third, it is entirely compatible with KRS that in all the cases it is rationally permissible to double-check which kidney it is, or to refrain from taking the bet, as the bad habit view might suggest. All that KRS claims, as pointed out in Section 4.1, is that it is permissible to treat p as a reason and thus permissible to act on p. Thus, while the bad habit view might explain why many have misleading intuitions about the cases, it poses no threat to KRS.

25 See Holton (1997) for the introduction of this strategy, although for purposes pertaining to the philosophy of language.
26 Thanks to Julien Dutant for bringing this view to my attention.

4.6 From Knowledge to Knowledge-Level Justification

In this final section, I have three aims. First, I will argue that we can replace KRS with a weaker principle. Then I will point out how this weaker principle still makes good on part of the knowledge-first program. And last, but not least, I will explain how this weaker principle and my arguments in this chapter relate to the contextualist epistemic norm I argued for in Chapter 3.

Following the subtraction argument presented in Chapter 2, I now argue that we can actually adopt an even weaker principle than KRS as a sufficient condition on what can permissibly be treated as a reason for action.

Knowledge-Level Justification Sufficiency (KJS)
If one’s degree of justification for p in a deliberative context (DC) suffices for knowledge-level justification, then it is rationally permissible to treat p as a reason for action in DC.

KJS is a weaker principle because it is possible to satisfy KJS without satisfying KRS.

Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.004

102

Beings of Thought and Action

Knowledge requires true belief, as well as arriving at that true belief not merely by luck, as one does, for example, in a Gettier case. Having the degree of justification that is necessary for knowing, knowledge-level justification, requires neither having a factive attitude nor satisfying anti-Gettier conditions. Take any ordinary proposition that you take yourself to know under normal circumstances. For example, one can know that the train one is on will stop in Foxboro if one asks the train conductor, who asserts that it will. But suppose that a malfunctioning rail switch makes it the case that the train does not actually go to Foxboro and thus does not stop in Foxboro. While one lacks knowledge because one's belief is false, it does not seem that a malfunctioning rail switch can also diminish one's degree of justification. Hence, if one has knowledge-level justification when there is no malfunctioning switch, then one also has it when there is one. An analogous story can be told for there being no anti-Gettier condition on knowledge-level justification. Take the previous case but add a second malfunctioning switch after the first one, which makes it the case that one does end up in Foxboro, although entirely by accident. But just as before, it seems that one's degree of justification should be unaffected by being in such a Gettier case. If one has knowledge-level justification after talking to the train conductor, and when there is a malfunctioning switch, then a second malfunctioning switch should make no difference. Having made explicit that having knowledge-level justification does not require having a true belief or satisfying anti-Gettier conditions, we can now run the previous subtraction argument to argue that KJS indeed provides us with a sufficient condition for it to be permissible to treat p as a reason for action. As I argued in Chapter 3, it does not seem that truth is ever a requirement for what is rationally permissible.
A lack of truth is possible while having the best possible evidence, but it does not seem that a lack of truth should also make for a lack of rational permissibility. Dropping the anti-Gettier conditions can be argued for in much the same way. Let's start with an observation about the epistemic standing one must necessarily meet in a deliberative context (DC) for it to be permissible to treat p as a reason for action in DC. The principles defended in Chapter 3 did not feature an anti-Gettier condition, as there simply seem to be no cases in which it is crucial to meet an anti-Gettier condition for permissibility. Not only is a lack of truth compatible with having the best possible evidence; being in a Gettier case also seems quite compatible with having the best possible evidence. The best evidence can still be misleading, yet it is possible that, at the same time, one's belief ends up being true by luck.


Knowledge and Seemingly Risky Actions

103

But it does not seem that being in a Gettier case should make for a lack of rational permissibility. If neither truth nor anti-Gettier conditions are ever necessary for rational permissibility, then we can safely subtract them from KRS, which gives us KJS. Now, even if my defense of KRS is successful, given that KJS is weaker than KRS, one might worry that KJS is too weak to serve as a sufficient condition in all possible decision contexts. But this worry seems misguided given what I have just pointed out: It seems that, generally, factivity and anti-Gettier conditions are never required for rational permissibility. Thus dropping them does not weaken KJS in a significant way given its purpose as a sufficient condition for rational permissibility. If the Surgeon Case is a prima facie problem for KRS, then it is also one for KJS. One might worry that opting for KJS over KRS will rob one of the resources to rebut the Surgeon Case. But this worry can easily be laid to rest. Even if we stipulate that the surgeon fails to know because of being Gettiered or because her belief is false, we must still hold that the surgeon's degree of justification suffices for knowledge-level justification. Based on this, it seems that even if the surgeon fails to know, she could still utter SB, even if the first part of the utterance is false. Reflecting on her justification, she would still come to believe that she knows. And that is really all we need to raise the problem that if the Surgeon Case is to work, rationality would require one to have incoherent attitudes. If the surgeon believes that she knows that p even when she merely has knowledge-level justification, then, due to the factivity of knowledge, she is still committed to p. And thus even when she has merely knowledge-level justification for believing p, she would still be incoherent if, at the same time, she seriously entertains the possibility that not-p.
If the Surgeon Case worked as a counterexample to KJS, then rationality would require the surgeon to have incoherent attitudes. But since it is hard to believe that rationality requires incoherent attitudes, we may reject the Surgeon Case as a counterexample against KJS. One might worry that knowledge-level justification, just like knowledge, is not luminous. For example, it might be that one's degree of justification suffices for knowledge-level justification, but just ever so slightly, so that, given one's evidence, it is very improbable that one's degree of justification suffices for knowledge-level justification. But as I have pointed out, we should only be impressed with this sort of objection if we assume that rational permissibility is a luminous notion. I have argued that this is a claim for which we lack an argument. Assuming that the anti-luminosity of knowledge was not a problem for KRS, we may also assume



that the anti-luminosity of knowledge-level justification is not a problem for KJS. Still, one might suggest that if we can find a luminous alternative to KJS, then that would be preferable, as it would rid us of anti-luminosity worries. But it is hard to believe that there is a suitable luminous alternative to KJS. Just as we lack a strong argument for the thesis that rational permissibility is luminous, we lack reasons to believe that there are interesting epistemic states that are entirely luminous. For any epistemic standing that is sensitive to one's evidence, which is the only relevant epistemic standing for rational permissibility, it will be possible to come up with a scenario in which one's evidence barely, but actually, suffices for this epistemic standing. But since one's evidence only barely suffices, it will not be luminous to one that one has this standing. Thus, while I can understand the desire for a luminous alternative, I believe this to be a desire guaranteed to be frustrated. In Chapter 2, I argued that KHA helps to make good on the knowledge-first program because it shows that the concept of knowledge plays an important and uneliminable role in the regulation of our mental lives. Much the same can be said about KJS. While the account centers on justification, it centers specifically on knowledge-level justification. And it is hard to see how one could have that concept, or otherwise replace it, without having the concept of knowledge. The concept of knowledge features a certain degree of justification, knowledge-level justification, that we cannot independently capture. In fact, in most ordinary circumstances we determine whether we have this degree of justification by asking whether we know. It is in this sense that the concept of knowledge plays another important part in the regulation of our mental lives. It determines when it is definitely permissible to treat a proposition as a reason for action.
This is extremely important given that, as beings of action, we must act in the world, and even more so given that there might not be any real epistemic stopping points. If we could always go for higher and higher iterations, when is it actually permissible to treat p as a reason? KJS provides a clear answer to this question, one that suggests that actually no higher-order epistemic state is needed: If one's degree of justification for believing p suffices for meeting the justification condition of knowing, then it is rationally permissible to treat p as a reason for action. Finally, a few words on how the account of Chapter 3 relates to the arguments in this chapter. My main focus here was on the Surgeon Case, so let us consider what verdict my contextualist proposal provides. According to this proposal, for it to be permissible to treat a proposition as a reason, the



costs of error may not exceed the costs of further inquiry. I assume this is not true in the Surgeon Case. The costs of removing the wrong kidney by far exceed the minor nuisance of further inquiry by double-checking with the patient's records. Consequently, it is not permissible for the surgeon to treat the proposition that the left kidney is diseased as a reason. My contextualist account supports the impermissibility intuition. One might thus wonder why I was arguing that it was misleading, and whether this is not bad news for the principle just introduced, KJS. Remember that I granted the knowledge intuition from the outset, which does not mean that I endorse it or that I am bound to hold that the subject knows. I did so because there can be no case against KRS if the relevant case does not involve knowledge. What I wanted to show was that there is a tension if it was true that the subject has knowledge and one were to maintain that the impermissibility intuition is correct. I have shown why this leads to a tension and why we should thus discard these cases as counterexamples against KRS. I then explained why we have the impermissibility intuition in combination with the knowledge intuition. None of this has committed me to holding that, in the relevant cases, the subject actually knows. All I argued was that if they indeed know, the impermissibility intuition cannot be backed up by argument. Since I have no commitment to the knowledge intuition, I also have no commitment to holding that the relevant subjects have knowledge-level justification. Thus there is no tension between my contextualist proposal and KJS. However, given that my contextualist proposal supports the impermissibility intuition, I am committed, by modus tollens on KJS, to holding that the surgeon does not have knowledge-level justification, and hence lacks knowledge.
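This modus tollens can be made fully explicit. The following is only a sketch in a proof assistant; the propositional labels KJ (the surgeon has knowledge-level justification for p) and RP (it is rationally permissible for her to treat p as a reason) are mine, not the author's notation.

```lean
-- Hypothetical propositional labels, as described above.
variable (KJ RP : Prop)

-- KJS says KJ → RP. The contextualist principle delivers ¬RP in the
-- Surgeon Case (the costs of error exceed the costs of further inquiry),
-- so modus tollens yields ¬KJ: the surgeon lacks knowledge-level justification.
theorem surgeon_lacks_kj (kjs : KJ → RP) (himperm : ¬RP) : ¬KJ :=
  fun hkj => himperm (kjs hkj)
```

Since knowing entails having knowledge-level justification, ¬KJ in turn yields the lack of knowledge stated in the text.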
This does not undermine my case for KJS or KRS, which consisted in showing that we cannot coherently maintain both the knowledge and the impermissibility intuition. This does, however, suggest that the combination of principles I accept and have argued for, KJS and my contextualist principle, has profound implications for epistemology. The second part of this book is dedicated to spelling these out.



Part II

Beings of Action in Thought



Chapter 5

Pragmatic Encroachment in Epistemology

In Part II, I defend and develop a rather controversial thesis: that there is pragmatic encroachment in epistemology.1 To a first approximation, the thesis holds that at least some epistemic statuses are sensitive to practical factors. Even epistemologists who ultimately embrace pragmatic encroachment often do so only reluctantly.2 I consider myself to be one of those epistemologists. However, there is at least anecdotal evidence that the desire to avoid pragmatic encroachment is not universally shared. My undergraduates seem to have no strong intuitions either for or against pragmatic encroachment prior to having heard the arguments for or against it. To some people, pragmatic encroachment actually seems obviously true. Jason Stanley (2005) opens his Knowledge and Practical Interests with a remark that the thesis seemed obviously true to his father; I have heard similar stories from other sources. It might also be worth pointing out that if we look at pragmatic encroachment from the angle of our nature as beings of thought and action, it might be at least a less surprising thesis. Given our basic needs, for example for food or shelter, it is inevitable that we must act in the world. When we realize that we are acting in an environment in which our endeavors are not guaranteed to be successful, and where failure might be disastrous, it is perhaps not all that surprising that our practical circumstances can influence how we think about the world and what epistemic status these thoughts enjoy. While pragmatic encroachment is perhaps not surprising or not obviously false to some, I believe that most epistemologists still recoil from entertaining the thought. However, I suggest that a decent case for pragmatic


1 The thesis is also labeled anti-intellectualism or impurism. The term "pragmatic encroachment" was coined by Jonathan Kvanvig in his popular, but now defunct, epistemology blog Certain Doubts. A certain discomfort, if not outright hostility, was attached to the term, but nowadays it seems to have a neutral meaning.
2 McGrath (2018: 3053) admits that pragmatic encroachment is an "unattractive thesis." In McGrath (2017: 177) he writes: "You might not like pragmatic encroachment – I don't – but . . . ."




encroachment can be made, and I will do so in this and the remaining chapters. In Sections 5.1 and 5.2, I introduce and argue for a specific form of pragmatic encroachment in epistemology: pragmatic encroachment on knowledge. In Sections 5.3 and 5.4, I explain how my argument differs from similar ones in the literature and how it avoids certain arguments against pragmatic encroachment that criticize the notion of stakes. In Section 5.5, I set the scene for later chapters by introducing several competing explanations of pragmatic encroachment.

5.1 Pragmatic Encroachment on Knowledge

In this chapter, I am concerned with the form of pragmatic encroachment that has dominated the debate so far: pragmatic encroachment on knowledge. This is a metaphysical thesis about the nature of knowledge. We can distinguish between two opposing metaphysical views about practical factors and their relation to knowledge.

Intellectualism (INT)
Whether a true belief amounts to knowledge depends exclusively on truth-conducive factors.

Pragmatic Encroachment on Knowledge (PE-K)
Whether a true belief amounts to knowledge does not only depend on truth-conducive factors but also on practical factors.

Intellectualism, also referred to as purism, is the thesis that whether a true belief amounts to knowledge depends entirely on truth-conducive factors, or truth-relevant or truth-related matters. It has been notoriously difficult to spell out what exactly truth-conducive factors are and what makes them so. Often, the notion is explained by giving examples such as evidence or the reliability of certain cognitive processes. These factors make it more likely that a relevant proposition is true. Further clarification can be gained by contrast with the opposing position, PE-K, which holds that whether a true belief amounts to knowledge does not depend on truth-conducive factors alone; practical factors matter as well. As mentioned in Chapter 1, Section 1.4, talk of "dependence" is ambiguous between conditional claims and grounding claims. My use of "dependence" here is meant as a grounding claim; hence, PE-K holds that practical factors can partly determine whether a true belief amounts to knowledge or not. In the current debate, it is common to express this idea in terms of the notion of stakes. Whether it is true that S knows that p depends partly on what is at stake for S. However, the notion of stakes has been recently



criticized as unclear. I will turn to these criticisms in Section 5.4. Meanwhile, I will rely on notions I have already introduced in Chapter 3, the costs of error (COE) and the costs of further inquiry (CFI). I believe that the former captures most of what others often refer to as stakes. As a first approximation, at least, one can say that the practical factors that can influence whether one knows are COE and CFI.3 Neither COE nor CFI is a truth-conducive factor – at least, they seem quite distinct from the list of traditional truth-conducive factors. COE and CFI are not evidence that a proposition is true. They do not affect whether a process that was used in forming a belief is reliable. They do not make it more or less likely that a proposition is false. Therefore, they are not truth-conducive factors. This gives us a clear distinction between PE-K and INT. An example can help to demonstrate that PE-K might receive some intuitive support. Consider the following pair of cases, which differ only in regard to the COE.

Low Costs of Error (LCOE)
Hannah is an untenured professor in a philosophy department and she is organizing a conference on pragmatic encroachment in epistemology. She desires that the ratio of proponents to opponents of pragmatic encroachment is balanced; however, nothing hangs on that. She still needs one more proponent of pragmatic encroachment for a balanced speaker lineup. Hannah has carefully read all of Jason Stanley's work on this issue, which clearly indicates that he is a proponent of pragmatic encroachment. Therefore, Hannah concludes that she knows that Jason Stanley is a proponent of pragmatic encroachment and thus she sends him an invitation to the conference.

High Costs of Error (HCOE)
Hannah is an untenured professor in a philosophy department and she is organizing a conference on pragmatic encroachment in epistemology. It is very important that the ratio of proponents to opponents of pragmatic encroachment is balanced.
It is the department's policy to have such a balance. If one does not conform to this policy, this will negatively affect tenure decisions. Hannah needs one more proponent of pragmatic encroachment. Hannah has carefully read all of Jason Stanley's work on this issue, which clearly indicates that he is a proponent of pragmatic encroachment. However, his latest endorsement is from 2015, and it is now 2019. Sometimes, philosophers do change their mind. Therefore, Hannah concludes that she does not know that Jason Stanley is a proponent of pragmatic encroachment and, therefore, given the costs of error, she does not invite Stanley straight away but asks whether he is currently still endorsing pragmatic encroachment.

3 I will disregard the influence of other practical factors. For example, Shin (2014) argues that time concerns can lead to pragmatic encroachment. However, I will not explicitly discuss this possibility.



These and structurally similar case pairs, which prominently feature in the literature, can provide intuitive support for PE-K.4 Let "j" refer to the proposition that Jason Stanley endorses pragmatic encroachment and let us stipulate that j is true. Suppose that we share the intuition that Hannah knows that j in LCOE and that she does not know that j in HCOE. The only difference between Hannah in the two cases, besides the difference in knowledge, is her COE. Intuitively, they are very low in the first case. If Hannah errs, then her desire for a balanced speaker lineup would be frustrated. But this would not be disastrous. If Hannah errs in the second case, then this would be disastrous. Not only would one of her personal desires be frustrated, but her error would have further career-threatening repercussions. If the only difference between these two cases is the difference in COE, then it seems that the best explanation of what accounts for the difference in knowledge is the difference in COE. Since COE are clearly a practical factor, it seems that PE-K is true. But these case pairs are no conclusive argument in favor of PE-K. There are at least three ways to resist. First, these case pairs can be used to motivate theories other than PE-K. The cases were initially used to motivate a novel account of the semantics of "knows," contextualism. The contextualist claims that the meaning of "knows" varies from one context to another. Due to this shift in meaning, we have another explanation of why Hannah "knows" in LCOE, but not in HCOE. However, several contextualists are explicit about maintaining the metaphysical thesis of intellectualism, and they argue that their contextualism allows them to do so.5 Assuming this is a cogent combination of views, the case pair here does not provide an inescapable argument in favor of PE-K. I will return to contextualism in the Epilogue and explain how it relates to the argument for PE-K that I will provide.
Second, the intuitive verdicts about knowledge have been questioned on the basis of empirical findings. Experimental philosophers have tested how laypeople attribute knowledge. A first wave of studies found that people do not ascribe knowledge in a shifty manner as predicted by epistemologists, or at least that there is no statistically significant effect of stakes, or costs of error, on knowledge attributions.6 These findings seem to undermine PE-K. Empirical


4 Albeit, used for different purposes. See DeRose (1992) for the origin of structurally similar case pairs, now known as "the bank cases," or Cohen (1999) for "the airport cases."
5 See especially DeRose (2009) for a defense of intellectualist contextualism.
6 See, for example, Feltz and Zarpentine (2010), May et al. (2010), Buckwalter (2010), and Knobe and Schaffer (2012).



findings from psychology pose another problem. Gerken (2017) argues that our judgments about cases like HCOE are subject to systematic biases that distort our judgments about when an alternative is a relevant alternative that needs to be ruled out in order to know. These biases lead us to consider an alternative as relevant when it is in fact irrelevant. In this case, the alternative that Jason Stanley recently changed his mind seems relevant, but is actually not. Thus we mistakenly judge that Hannah fails to know in HCOE. Third, one might argue that traditional epistemology has the resources to deal with such case pairs. One could argue that Hannah's lack of knowledge in HCOE is due to a lack of belief, as it seems that, under such circumstances, most would refrain from believing. This lack of belief easily explains the lack of knowledge, as knowledge is traditionally thought to require belief. Thus there is no need to adopt a nontraditional view such as PE-K.7 One might also question whether Hannah really knows in LCOE, or whether we merely tend to attribute knowledge loosely (see Davies 2007). Alternatively, we could question whether Hannah really lacks knowledge in HCOE. We might withhold from ascribing knowledge to Hannah because we would thereby imply something that we would not want to imply.8 Adopting either of these views allows one to hold that there is actually no difference in knowledge (or lack thereof) across the case pair. And if there is no change, then there is no need to explain a change, hence no need to adopt PE-K or any other novel theory about knowledge. None of the options mentioned in the three preceding paragraphs has been conclusively established. So it is, in principle, possible to defend PE-K by relying on case pairs such as the one here, a strategy that, with a nod to Fantl and McGrath (2012a), I call the argument-from-cases strategy. The last three paragraphs mention some of the hurdles such an argument for PE-K must overcome.
Many have attempted to do so, but I do not intend to contribute to these debates.9



7 We can find variations of this strategy in Weatherson (2005), Ganson (2008), and Nagel (2008). It should be noted that Weatherson has since changed his mind on the general issue. He no longer thinks that we can do without pragmatic encroachment and he endorses it in Weatherson (2011 and 2012), while maintaining his earlier views about belief.
8 Instances of this strategy, called the warranted assertability maneuver, can be found in Rysiew (2001 and 2007), Brown (2006), and Lutz (2013).
9 For example, see Sripada and Stanley (2012) and Pinillos (2012) for defenses of pragmatic encroachment against the challenge from experimental philosophy, and Buckwalter and Schaffer (2015) for a critical reply. For an attack on those who defend intellectualism by appealing to the warranted assertability maneuver, see Roeber (2014).



What I will pursue is what Fantl and McGrath (2012a) have labeled the argument-from-principles strategy. I will argue that two principles that I defended in Chapter 3 and Chapter 4 jointly entail another principle that is at odds with INT. I will explain how this argument steers clear of the hurdles for the argument-from-cases strategy that I have mentioned here.

5.2 A Principled Argument for Pragmatic Encroachment on Knowledge

We already have all the principles we need to establish that PE-K is true. At the end of Chapter 3, I argued that whether one's degree of justification for believing p makes it rationally permissible to treat p as a reason for action depends on COE and CFI. My arguments came down to the following principle.

Rational Permissibility (RP)
If it is rationally permissible to treat p as a reason for action in a deliberative context (DC), then, in DC, the costs of error regarding p do not exceed the costs of further inquiry into whether p.

Because RP and the other principles are a bit lengthy, it is helpful to put them into a simpler logical form that drops the reference to the deliberative context.

RP
RP(p) → ~(COE(p) > CFI(p))

In Chapter 4, I argued that if one's degree of justification for believing p suffices for knowledge-level justification in one's deliberative context, then one's degree of justification is sufficient for it to be rationally permissible to treat p as a reason for action.

Knowledge-Level Justification Sufficiency (KJS)
If one's degree of justification for p in a deliberative context (DC) suffices for knowledge-level justification, then it is rationally permissible to treat p as a reason for action in DC.

Translated into a simpler logical form, this becomes:

KJS
KJ(p) → RP(p)

By chaining KJS and RP (a simple hypothetical syllogism), we can derive another principle.



Pragmatic Encroachment about Knowledge-Level Justification (PE-K*)
If one's degree of justification for p in a deliberative context (DC) suffices for knowledge-level justification, then, in DC, the costs of error regarding p do not exceed the costs of further inquiry into whether p.

Or in a simple logical form:

PE-K*
KJ(p) → ~(COE(p) > CFI(p))
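The derivation can be checked mechanically. The following Lean fragment is only a sketch; the propositional labels are mine, not the author's notation: KJ for having knowledge-level justification for p in DC, RP for the rational permissibility of treating p as a reason, and HighCosts for COE(p) > CFI(p).

```lean
-- Hypothetical propositional labels, as described above.
variable (KJ RP HighCosts : Prop)

-- KJS: KJ → RP.  RP (the principle): RP → ¬HighCosts.
-- Chaining the two conditionals yields PE-K*: KJ → ¬HighCosts.
theorem pe_k_star (kjs : KJ → RP) (rp : RP → ¬HighCosts) : KJ → ¬HighCosts :=
  fun hkj => rp (kjs hkj)
```

The proof term simply composes the two conditionals, which makes vivid that PE-K* adds nothing beyond KJS and RP taken together.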

I call this principle PE-K* because it suggests that PE-K is true. PE-K* makes it a necessary condition on knowledge-level justification for p that the costs of error regarding p do not exceed the costs of further inquiry. PE-K* is not compatible with INT*, which, if INT were true, would also have to be true, since if knowledge is not sensitive to practical factors, then none of the necessary conditions on knowledge can be sensitive to practical factors either.

Intellectualism (INT*)
Whether one's degree of justification for p suffices for knowledge-level justification in a DC is not dependent on the costs of error regarding p or the costs of further inquiry into whether p.

If INT* were true, knowledge-level justification would be independent of any specific deliberative context and its COE and CFI. Therefore, it would be possible to have knowledge-level justification for believing p while it is the case that COE(p) > CFI(p), as having knowledge-level justification would in no way depend on COE or CFI. But this possibility contradicts PE-K*. According to PE-K*, it is a necessary condition on having knowledge-level justification that COE do not exceed CFI. Thus, if PE-K* is true, which follows straight from the principles I have argued for, then INT* is false. If INT* is false, then PE-K is true. If it is a necessary condition for knowledge-level justification to have a certain balance of COE and CFI, which are both sensitive to practical factors as I pointed out in Chapter 3, then PE-K must be true. Whether one knows depends on whether one has knowledge-level justification but, as PE-K* suggests, this seems to depend in part on how certain practical factors, CFI and COE, align. There is one important caveat to mention. PE-K* is a conditional claim and PE-K is a grounding claim. As I pointed out in Chapter 1, Section 1.4, conditional claims do not strictly indicate the truth of grounding claims. Given this, there remains a gap between PE-K* and PE-K. However, I think the burden of proof lies with the intellectualist to prove this gap



can actually be exploited in their favor. I will return to this in Section 5.5, where I explain why I believe this burden is hard to shoulder. This argument for PE-K via PE-K* follows straightforwardly from RP and KJS. The influence of practical factors is due to RP. The intuitive idea behind RP is that it cannot be rationally permissible to treat a proposition as a reason for an action when the costs of this action going awry are higher than the costs of simply inquiring further into whether the proposition is true. If CFI are lower than COE, then do not treat that proposition as a reason, but inquire further. That seems hard to deny. And it also seems hard to deny that a practical factor like COE has a bearing on what one can permissibly treat as a reason for action. Although he is clearly an intellectualist, Gerken (2011: 545) assumes that the degree of justification that makes it permissible to treat p as a reason for action "is partly determined by a subset of the subject's stakes and interests." At least some intellectualists do not seem to be bothered by the influence of practical factors on what one may permissibly treat as a reason.10 However, this influence is hard to contain once we accept a principle like KJS. Gerken denies KJS based on Brown's Surgeon Case. But I have argued that the Surgeon Case cannot be relied on because the underlying intuitions are incoherent. The only option not entirely ruled out by my previous arguments is to argue that rational permissibility is a luminous condition. It seems that this is the intellectualist's best bet to avoid KJS but, as I have pointed out, this is a strong thesis for which we currently lack an argument. I have no doubt that there will be resistance to this argument for PE-K and I think it is sufficiently clear what the options to resist are: deny either RP or KJS, or both. The intellectualist might be tempted to argue that we should deny one of them because they entail PE-K*.
But this obviously begs the question against PE-K. I fully concede that PE-K is a surprising thesis, and I admit that it faces certain objections that need to be taken seriously. But it is not an obviously false thesis. Consider the Sorites paradox for comparative purposes. From plausible premises and through logically valid reasoning, one can derive the obviously false claim that a single grain of sand constitutes a heap. This might justify rejecting one of the premises that was used to derive this conclusion. In contrast, PE-K is an unusual claim, but not an obviously false one. Thus, nobody without prior theoretical commitments ought to reject PE-K outright. I have made

[10] Roeber (2018: 180f.) seems to assume that what one can permissibly treat as a reason is not sensitive to practical factors. I disagree and have argued otherwise in Part I of this book.


Pragmatic Encroachment in Epistemology

117

my case for RP and KJS. Let's see what others will say in reply. Meanwhile, I think we should follow the argument wherever it leads, even if that is to pragmatic encroachment on knowledge.

While, as mentioned in Section 5.1, my argument for PE-K does not rely on case pairs in which knowledge shifts, it is highly plausible that such shifts could occur. There are two ways such shifts can come about: either because one moves from a nondeliberative context to a deliberative context, or because one moves between two different deliberative contexts. I will illustrate these possibilities later.

The principles my argument relies on are all relativized to a deliberative context. But clearly, one is not always in a deliberative context. Sometimes one is just acquiring information in a random manner. For example, imagine it is 2018 and I am reading a newspaper. I learn that Theresa May faces a cabinet revolt or that the soccer world championship is over. I am acquiring random information but, while I am reading, this information is not related to any sort of deliberation. Accordingly, there are no COE. I could even learn propositions that will always be entirely practically irrelevant, for example, that Angela Merkel knows how to cook a good potato soup. Short of artificial betting situations that involve this proposition, I assume that this fact about Angela Merkel will never become practically relevant to me.

Whenever there are no COE, the condition that COE do not exceed CFI is trivially fulfilled, and thus so is the necessary condition on having knowledge-level justification. Since this is only a necessary condition, there being no COE does not entail that one has knowledge-level justification. But assuming that one is reading a reliable newspaper, it would seem that one comes to know that Theresa May faces a cabinet revolt, that the soccer world championship is over, or that Angela Merkel knows how to cook a good potato soup.
But one can eventually switch from a nondeliberative context to a deliberative one. Suppose that after reading the newspaper, I start deliberating about how to spend my evening and whether the soccer world championship is over has implications about how I spend it. If it is over, then I can go to the pub again without the hassle of coming across annoying soccer fans. Yet, if I am wrong about this, I will have a miserable evening at the pub. In this deliberative context, there are COE that were not present in the nondeliberative context when I was just reading the newspaper. Furthermore, one could easily imagine that having a miserable evening has such dramatic consequences for me that COE absolutely outrun CFI about whether the world championship is actually over. In such a deliberative context, the necessary condition on knowledge-level justification that PE-K*

specifies is not fulfilled. Thus, I would fail to know that the soccer world championship is over in this deliberative context. Consequently, there can be a change in what one knows from a nondeliberative context to a deliberative context, which makes it true that there can be a change in what one knows due to a change in practical factors. While my argument does not strictly entail that such switches are possible, it is highly plausible that one can switch from a nondeliberative context in which there are no COE to a deliberative one in which acting on p becomes associated with COE. And it seems possible that, at least sometimes, the newly incurred COE exceed CFI, which according to PE-K* would result in a loss of knowledge-level justification that was present in the earlier nondeliberative context.

Basically, the same phenomenon can be observed in cases of rising COE, such as the example in Section 5.1. In LCOE Hannah knows that j. Here, the COE do not exceed the CFI. Thus Hannah satisfies the necessary condition on knowledge-level justification that PE-K* posits. But it is highly plausible that the COE do not always stay constant. For example, the departmental policy cited could be introduced overnight and Hannah is now in HCOE. In this case, COE exceed CFI.[11] Thus, according to PE-K*, Hannah fails to know because she does not satisfy the posited necessary condition on knowledge-level justification.

To be clear: my argument for pragmatic encroachment does not presuppose that there are such case pairs. However, the possibility of such case pairs is guaranteed given two further plausible assumptions. The first is that the COE associated with a proposition are not constant, which seems like an undeniable assumption. The second is that for many ordinary propositions that we take ourselves to know, it is possible that COE of p can rise so that they exceed CFI of p.
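The shape of the PE-K* condition can be sketched in a few lines of code. This is only a toy model: the numeric values are my own illustrative assumptions, and COE and CFI are flattened into single numbers, abstracting from the richer account in Chapter 3.

```python
# Toy model of the necessary condition that PE-K* places on knowledge-level
# justification: the costs of error (COE) must not exceed the costs of
# further inquiry (CFI). All numeric values are illustrative assumptions.

def pek_star_condition(coe: float, cfi: float) -> bool:
    """Necessary (but not sufficient) condition on knowledge-level justification."""
    return coe <= cfi

# Nondeliberative context (reading the newspaper): nothing is at issue,
# so COE = 0 and the condition is trivially fulfilled.
print(pek_star_condition(coe=0, cfi=5))    # True

# LCOE-style deliberative context: modest costs of error.
print(pek_star_condition(coe=3, cfi=5))    # True

# HCOE-style context: the costs of error have risen overnight and now
# exceed the costs of further inquiry, so the condition fails.
print(pek_star_condition(coe=100, cfi=5))  # False
```

On this toy picture, the shift from the second to the third call models Hannah's move from LCOE to HCOE: nothing about her evidence changes, yet the necessary condition on knowledge-level justification fails.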
One might be tempted to deny this second assumption by holding that if one has knowledge-level justification, then it is not possible for COE to exceed CFI, as the CFI will already be high. But there is little reason to assume that, for the bulk of ordinary knowledge claims, CFI are already higher than any potential COE. In Chapter 3, I tied CFI to one's current epistemic standing. If one already has good evidence, this makes the

[11] It could plausibly be said that it is not only the COE that change but also Hannah's beliefs about them. I take that to be an insignificant change, or at least not a change that intellectualists might exploit. It does not sit well with intellectualism to hold that a change in a belief about what is at stake would make a difference to whether one knows. Such beliefs about practical matters do not make it more or less likely that the target proposition j is true. Hence, even if we grant that there might be more than just a change in the costs of error between LCOE and HCOE, this does not provide the intellectualist with resources to avoid PE-K.


acquisition of further evidence that would improve one's position, by ruling out or reducing error possibilities, harder and thus costlier. But it is hard to believe that this is always the case when one knows, at least not given our ordinary practice of knowledge ascriptions. It seems that sometimes one can come to know just by reading a newspaper. But it is implausible that one has thereby raised CFI so massively that COE could never exceed them. Take the previous example about going to the pub. Whatever one's degree of justification for believing that the world championship is over after reading the newspaper, it seems implausible that COE, an absolutely miserable night at the pub, would not be higher than CFI, which, in this case, could simply be the cost of cross-verifying with another news source.

Since the two additional assumptions are rather plausible, it is not plausible to accept PE-K* while denying that the typical knowledge-shifting cases prevalent in the debate about pragmatic encroachment are possible. Of course, one could deny that knowledge is as easily attainable as by reading a newspaper, or that most ordinary knowledge claims, which are not based on anything even remotely close to conclusive evidence, are true. But this move has at least mildly skeptical consequences. I take it that even a mildly skeptical view is to be avoided, and thus that we can discard objections along those lines.

More importantly, though, even if there is no case pair in which knowledge shifts due to shifting practical factors, this would only be a Pyrrhic victory for intellectualism. Even if there is no such case pair, the consequent of PE-K* remains a necessary condition on knowledge-level justification and thus on knowledge. Thus, even if there is no shifting knowledge, a necessary condition on knowledge is that practical factors are aligned in a certain way. Hence PE-K is true: whether a true belief amounts to knowledge is partly determined by practical factors.
Finally, let us spell out why this principle-based argument for PE-K cannot be undermined by the strategies that are troubling for the argument-from-cases strategy. I set aside the contextualist strategy and will return to it in the Epilogue. My principle-based argument is immune to undermining by experimental philosophy, or at least its current findings. My argument relies on principles, not on intuitions about particular cases. When I appeal to cases in the presentation of my argument, this serves merely to illustrate some otherwise abstract claim. My arguments for the principles KJS and RP relied on some intuitions. In the former case, this was that there is something incoherent about being in a state of mind expressed by the sentence “p, but it might be that not-p.” In the latter case,


my derivation of RP was based on an attempt to capture many intuitions about various cases of when one can permissibly treat a proposition as a reason. Experimental philosophers might choose to attack those intuitions, but currently there are no such findings in the literature. My argument for PE-K obviously cannot be undermined by claims to the effect that either there is no difference in knowledge across a certain case pair, or that this difference can be explained via traditional factors like a loss of belief. While my argument suggests that there are such case pairs, it is not based on case pairs and thus immune to alternative explanations of them.

To sum up: the principles RP and KJS imply a further principle, PE-K*. This principle contradicts intellectualism, as it entails that one cannot have knowledge-level justification unless COE and CFI are aligned in a certain way. Since both COE and CFI are sensitive to practical factors, having knowledge-level justification, and hence having knowledge, is sensitive to practical factors. While PE-K* itself does not entail that there are case pairs with shifting knowledge ascriptions such as LCOE and HCOE, this is a possibility given the two further plausible assumptions I outlined.

5.3 A Comparison with Other Arguments for Pragmatic Encroachment on Knowledge

The principle-based argument for pragmatic encroachment I offered in Section 5.2 might be received by some with nothing but a yawn. After all, the current literature already offers several principle-based arguments for PE-K. The purpose of this section is to show how, despite some similarities, my argument differs from them and to point out certain advantages it has. Readers who are not too concerned about differences among pragmatic encroachers may want to skip this section.

Probably the most discussed principle-based argument is by Fantl and McGrath. Their argument appears in a number of different forms, for example, in Fantl and McGrath (2002) and Fantl and McGrath (2007). I will focus on the version they present in their book Knowledge in an Uncertain World (Fantl and McGrath 2009). The argument has already received a lot of discussion.[12] I will offer a new way to resist it. Central to Fantl and McGrath's argument is a principle with at least superficial similarity to KJS.

[12] To mention just a few of the critical replies to Fantl and McGrath's argument, see Reed (2012), Cohen (2012), Neta (2012), and Brown (2012).


KJ If you know that p, then p is warranted enough to justify you in φ-ing, for any φ. (Fantl and McGrath 2009: 66)

I take it that to be "justified in φ-ing" includes that it is rationally permissible to φ, and that "warranted enough" simply means that one's degree of epistemic justification for believing p is high enough. While there are some important dissimilarities between KJ and KJS, as I will explain in Section 5.4, these principles are somewhat similar. And given this superficial similarity, one might think that my argument really cannot be all that different from Fantl and McGrath's. However, that is not so. They adopt no equivalent to my principle RP; their only additional assumption is a fallibilist conception of knowledge.[13] To stay close to the original, I have adapted their use of the notion of stakes (see Fantl and McGrath 2009: 85f.). I assume that one could redescribe LCOE as a low stakes case, and HCOE as a high stakes case. This is their argument:

(P1) If fallibly knowing that p is possible, then it is possible for there to be a subject, Low, who is in a position like Hannah in LCOE.
(P2) If there is such a possible subject, Low, then it is possible that there is a subject, High, in a position like Hannah in HCOE, who differs from Low only by a difference in stakes.
(P3) If a subject is in a position like High, then p is not warranted enough to justify High in φ-ing.
(P4) Given KJ, if p is not warranted enough to justify one in φ-ing, then High does not know that p.
(C) It is possible that there is a pair of subjects, Low and High, that differ in whether they know that p, but besides that only differ in stakes.

Of course, (C) entails PE-K. While I am highly sympathetic to this conclusion, I believe that my defense of KJS in Chapter 4 might have provided the intellectualist with a reasonable strategy to resist the argument. The problem concerns the transition from (P2) to (P3). I will point out that an intellectualist could resist this transition precisely because they endorse KJ. The intellectualist could reason as follows.
If Low knows that p, even if she just fallibly knows, and if High is just like Low, apart from what is at

[13] Another difference between Fantl and McGrath's argument and mine concerns (P1). It might seem that my argument does not presuppose fallibilism about knowledge. However, I am reluctant to claim this as an advantage. I believe that my argument is compatible with what Brown (2018: 13) has called shifty infallibilist views; however, some may characterize that as a fallibilist view.


stake, then High must know that p too. At least it seems prima facie reasonable to assume that High must also know, as the only difference to Low is what is at stake. After all, we have as yet no reason to suppose that the stakes do have a bearing on what one knows. This does not beg the question, as Fantl and McGrath intend to argue for the conclusion that what is at stake can influence whether one knows. So they cannot reject this kind of reasoning because doing so would presuppose the conclusion they intend to argue for. But if High knows that p, then, given KJ, p is warranted enough to justify High in φ-ing. This blocks the move from (P2) to (P3) and consequently the rest of the argument.

Fantl and McGrath might argue that (P3) just has a lot of intuitive appeal. But as I have pointed out in my defense of the sufficiency direction of the knowledge norm, the impermissibility intuition is hard to square with holding that High knows, as the intellectualist attack outlined here does. And, furthermore, I have also offered a number of explanations that the intellectualists can avail themselves of to argue that the relevant impermissibility intuition that (P3) relies on is misleading.

Fantl and McGrath might want to argue that there is another principle in the background that guarantees that (P3) is true. In Fantl and McGrath (2012a) they assume that one can derive PE-K based on a fallibilist conception of knowledge (hence (P1)), KJ, and the following principle.

Certainty–Action Principle If p is not absolutely certain for a subject in case C1 but actionable, then there will be a correlate case C2 which differs in actionability from C1 merely because the stakes, a practical factor, are higher in C2 than in C1. (Fantl and McGrath 2012a: 68)

I assume that by p being actionable, Fantl and McGrath mean that p is warranted enough to justify one in φ-ing. Why assume the Certainty–Action Principle? We can offer bets on propositions about which we are not epistemically certain – for example, whether at least one student in Frankfurt will get at least a B+ on their term paper this year. We can take bets as long as the stakes are low, that is, as long as losing the bet would not be disastrous. We can take a bet in C1 that costs a penny if we are wrong, but it seems that we cannot take a bet in C2 if the costs of being wrong are losing one’s entire fortune. While this seems intuitively appealing, one might question whether we should accept this principle for all epistemic states that do not entail absolute certainty. Fallibly knowing that p entails a lack of absolute certainty. We need to know whether a version of the Certainty–Action Principle that involves fallibly knowing is true.


Specified Certainty–Action Principle If p is fallibly known by a subject in case C1 but actionable, then there will be a correlate case C2 which differs in actionability from C1 merely because the stakes, a practical factor, are higher in C2 than in C1.

If the Specified Certainty–Action Principle were true, then it would guarantee that we can after all move from (P1) and (P2) to (P3). However, it seems that one could deny the Specified Certainty–Action Principle because one strictly holds to KJ. One might say that if one knows that p in C1, then one will also know in C2, as there is only a difference in what is at stake between C1 and C2 and, prima facie, stakes have no bearing on knowledge. Then p must also be actionable, even when the stakes in C2 are higher. If that were not true, then KJ would be false. The only way to make sense of both the Specified Certainty–Action Principle and KJ is to assume that whether one knows is sensitive to practical factors. But that is precisely what Fantl and McGrath intended to establish. The intellectualists can point out that KJ is not compatible with the Specified Certainty–Action Principle and will hence reject it, as well as the Certainty–Action Principle, which entails it. The intellectualist who holds fast to KJ and who resists Fantl and McGrath's argument for PE-K will hence not be impressed by their invoking the Certainty–Action Principle in order to vindicate (P3).

Let me summarize and offer some diagnosis of the trouble for Fantl and McGrath's argument, and of how it could be improved. While I am highly sympathetic to (P3), it does rely on a judgment about what it is rationally permissible to do, or what one is justified in doing. However, as I mentioned at the beginning of this section, Fantl and McGrath do not have a principle like my RP which specifies conditions for rational permissibility. Such a principle would put the intuitions on which the consequent of (P3) relies on solid ground. The objection that intellectualists could raise to (P3) based on KJ shows that there is a genuine need for a corresponding principle like RP in order to avoid the said objection.
I believe that Fantl and McGrath could adopt my principle to strengthen their argument, so my criticism really comes in the spirit of improvement rather than in the spirit of rejection. In any case, it should now be clear how my argument for PE-K differs from Fantl and McGrath's and why, in all modesty (which this disclaimer might fail to adequately express), it might be superior to it.

Roeber (2020) provides another interesting argument in favor of PE-K. While it can neither be classified straightforwardly as a case-based


argument nor as an argument from principles (I explain why later), it runs into a similar problem as Fantl and McGrath's argument. To see this, we need to quickly rehearse Roeber's argument. He starts with the standard case pairs, using a variant of Baron Reed's jelly bean case discussed in Chapter 4, Section 4.2.

Case A You are participating in a psychological study where the researcher asks you questions on a subject with which you are very well acquainted. For every correct answer, the researcher will reward you with a jelly bean; for every incorrect answer, and every question left unanswered, you will get nothing. The first question is whether p. You know that the answer is "p."

Case B You are participating in a study exactly like the one in Case A, except in this study you know that you will be punished by an extremely painful electric shock each time you give an incorrect answer. Again, the first question is whether p. You remain convinced that the answer is "p," and your epistemic position with respect to the proposition that p is just as strong as it was in Case A.

So far, so familiar. Then Roeber argues for (1) and (2):

(1) If you know that p in Case B, then you know that answering "p" will have the best consequences of your options in Case B.
(2) If you know that answering "p" will have the best consequences of your options in Case B, then you may answer "p" in Case B.
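Premises (1) and (2) turn on the gap between maximizing expected utility and maximizing actual utility. A toy calculation makes the gap vivid; the payoffs and the credence of 0.99 below are my own illustrative assumptions, not Roeber's.

```python
# Case B with assumed payoffs: a jelly bean (+1) for a correct answer, an
# electric shock (-1000) for an incorrect one, and 0 for not answering.
# A credence of 0.99 in p stands in for a strong but fallible position.
credence_p = 0.99
JELLY_BEAN, SHOCK, NOTHING = 1, -1000, 0

eu_answer = credence_p * JELLY_BEAN + (1 - credence_p) * SHOCK
eu_decline = NOTHING

# Expected utility theory (EUT) recommends declining to answer:
print(round(eu_answer, 2))     # -9.01
print(eu_answer < eu_decline)  # True

# But if one *knows* that p, answering "p" in fact earns the jelly bean,
# so answering maximizes actual utility:
actual_utility_of_answering = JELLY_BEAN
print(actual_utility_of_answering > eu_decline)  # True
```

This is the divergence Roeber's argument exploits: unless what one knows is assigned credence 1, EUT forbids an act that, given one's knowledge, is guaranteed to be best.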

(1) seems hard to deny given the stipulations of the case. If one knows the correct answer, then providing this answer will have the best consequences, as receiving a jelly bean is better than nothing. Roeber also argues that a closure principle is not needed in order to justify (1). All that matters is that it is possible in some cases to gain knowledge by deduction. Therefore, the argument does not depend on the truth of a potentially contested closure principle. I suppose that (2) will be a more contested premise although I will not challenge it, as I agree with Roeber’s argument. His idea is basically that while it makes sense to act in a manner that maximizes expected utility when one does not know, it is not clear why one would not choose an act that maximizes actual utility, at least in some cases, if one happens to know what this act is. This claim will be controversial because it amounts to a rejection of expected utility theory (EUT), unless one accepts that what one knows has credence 1. My sympathies here are with Roeber, for reasons already spelled out in Chapter 4. It is also important to note that Roeber is not endorsing a general principle that would entail (2), such


as “If one knows that φ-ing will have the highest actual utility in c, then one may φ in c.” That is why the argument cannot be called a principle-based argument. All that matters is that one can coherently stipulate a case pair like A and B, for which (1) and (2) holds, which seems to be a genuine possibility. (1) and (2) jointly entail (3). (3) If you know that p in Case B, then you may answer “p” in Case B.

However, (3) entails PE-K, as Roeber (2020: 12) holds that "[b]y hypothesis, you shouldn't answer the question in Case B."[14] If one should not answer the question in B, then by modus tollens, one does not know that p in Case B. However, the initial stipulation was that one knows in Case A. Since the only difference between Case A and Case B is a practical factor, there is a case pair that suggests that PE-K is true.

Even if this argument is not principle-based, it still depends on (3). And one can reject Roeber's conclusion precisely because one accepts (3) but opts for modus ponens over modus tollens, similar to the strategy that I used against Fantl and McGrath. The intellectualist is bound to assume that one also knows in Case B. Then, all the intellectualist needs to do to resist Roeber's argument is to question the final hypothesis that one should not answer the question in Case B. And I have already provided the imagined intellectualist with the necessary resources in Chapter 4. Since Roeber does not further defend the crucial hypothesis that licenses using modus tollens on (3), it seems that I have provided the intellectualist with the resources to resist Roeber's argument for PE-K.

Roeber thus faces the same problem as Fantl and McGrath. He needs a principle to defend the relevant impermissibility intuition against the intellectualist. A principle like my RP would do the job, but Roeber does not assume an equivalent principle. As I said earlier, this is meant in the spirit of improvement, not of rejection. I assume that Roeber could adopt my principle RP in order to strengthen his argument for PE-K.

To summarize: I have pointed out problems with two other arguments for PE-K. While I am highly sympathetic to both arguments, their lack of a principle to back up the crucial parts of their arguments that concern impermissibility is troubling. My argument for PE-K works via such a principle, namely, RP. This is how my argument for PE-K differs from

[14] One might argue that this additional stipulation turns this into a case-based argument after all. However, the relevant stipulation is not about knowledge, but, unlike traditional case pairs, about what one may (rationally) do. Be that as it may, it is not of vital importance to me that Roeber's argument is neither case-based nor principle-based.


those given by Fantl and McGrath and by Roeber, and this difference seems to be advantageous.

5.4 Stakes and Practical Adequacy

In this section, I want to defend my argument for PE-K against a number of objections and to further explain some of the key notions it employs. Some objections target the use of the notion of stakes, which is ubiquitous in the pragmatic encroachment literature. The criticism that this notion is objectionably unclear has been voiced in one way or another in Worsnip (2015a), Eaton and Pickavance (2015), and Anderson and Hawthorne (2019a). I have eschewed the use of the notion of stakes. Nonetheless, it is useful to engage with these objections to show how my views and arguments differ from others in the pragmatic encroachment literature, and to elaborate on the way in which I avoid the notion of stakes.

Eaton and Pickavance (2015) and Anderson and Hawthorne (2019a) complain that the use of the notion of stakes in the debate about pragmatic encroachment has been too vague. They acknowledge that Fantl and McGrath and others use principles which are referred to as practical adequacy conditions on knowledge. I assume that Fantl and McGrath's principle KJ mentioned in Section 5.3 falls in this category,[15] as well as a principle relied on by Ross and Schroeder (2014: 252).

Practical Adequacy Condition on Knowledge (PAK) If one knows that p, then one's strength of epistemic position is practically adequate.

What, then, is practical adequacy? One's epistemic position toward p is practically adequate only if the act that maximizes expected utility is the same as the act that maximizes expected utility conditional on p. Eaton and Pickavance (2015) give the following example, akin to Clifford's famous ship owner case. A ship owner has a credence of 0.9 that his ship is seaworthy and consequently a credence of 0.1 that it is not. He is considering whether to send the ship to sea with many passengers or whether to do some further checking. A decision table (Table 5.1) gives the values associated with each action under each possible state of affairs. The act with the highest expected utility (EU) is to do further checking.[16] However, the act with the highest EU conditional on the

[15] That is, I assume that if one is justified in φ-ing, then one's epistemic position is practically adequate.
[16] EU(send to sea) = 0.9 × 10 + 0.1 × (−10,000) = −991. EU(do further checking) = 0.9 × (−10) + 0.1 × (−200) = −29.


Table 5.1 Practical adequacy

                        Seaworthy    Not seaworthy
Send to sea                 10           −10,000
Do further checking        −10              −200
ship being seaworthy is sending it to sea. Since there is a difference between the act that maximizes EU and the act that maximizes EU conditional on the ship being seaworthy,[17] the ship owner's epistemic position toward the ship being seaworthy is not practically adequate, and thus he does not satisfy a necessary condition for knowledge. Assuming that everybody shares the intuition that the ship owner in Clifford's case fails to know, this seems to be the right result.

For ordinary cases, at least, the practical adequacy constraint delivers the right results. However, it does not do so once things get more complicated, as Eaton and Pickavance and Anderson and Hawthorne argue. I will not elaborate on their cases, as I believe that there is a simpler way to demonstrate that their cases are no threat to my argument. I will grant them their arguments against PAK. I will point out that the principle that is crucial for my argument, KJS, is neither equivalent to PAK, nor does it entail PAK.

Because of the notion of practical adequacy, PAK relies on the notion of an act that maximizes EU, which is an overall notion. So is the notion of rational action. However, the notion of a reason for action is a pro tanto notion. P can be a reason for φ-ing even if φ-ing is, overall, not rational, because there is a further reason q that speaks against φ-ing which outweighs p. So even if one knows that p, and it is thus rationally permissible to treat p as a reason for φ-ing, it could be that φ-ing is not rational, perhaps because one also knows that q, which outweighs p and speaks against φ-ing. KJS is only a sufficient condition on whether it is rationally permissible to treat a proposition as a reason for φ-ing. But it is not a sufficient condition on when φ-ing is rational. Therefore, KJS is not equivalent to PAK, as practical adequacy is sensitive to whether φ-ing, overall, is rational (assuming that maximizing EU is equated with being rational).
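The practical adequacy check from the ship owner example can be written out explicitly. The sketch below uses the payoffs of Table 5.1 and the stipulated credences of 0.9 and 0.1; equating rational action with EU-maximization is assumed here only for illustration.

```python
# Practical adequacy check for the ship owner case (payoffs from Table 5.1).
payoffs = {
    "send to sea":         {"seaworthy": 10,  "not seaworthy": -10_000},
    "do further checking": {"seaworthy": -10, "not seaworthy": -200},
}

def expected_utility(act, credences):
    # EU of an act: probability-weighted sum of its payoffs across states.
    return sum(credences[s] * payoffs[act][s] for s in credences)

credences = {"seaworthy": 0.9, "not seaworthy": 0.1}
conditional_on_p = {"seaworthy": 1.0, "not seaworthy": 0.0}  # conditional on p

best_act = max(payoffs, key=lambda a: expected_utility(a, credences))
best_act_given_p = max(payoffs, key=lambda a: expected_utility(a, conditional_on_p))

print(expected_utility("send to sea", credences))          # -991.0
print(expected_utility("do further checking", credences))  # -29.0
print(best_act)          # do further checking
print(best_act_given_p)  # send to sea

# The two best acts diverge, so the ship owner's epistemic position toward
# "the ship is seaworthy" is not practically adequate.
print(best_act == best_act_given_p)  # False
```

The computation reproduces the figures in footnote 16: further checking maximizes EU, while sending the ship to sea maximizes EU conditional on seaworthiness, so the practical adequacy condition fails.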
And, therefore, KJS also cannot entail PAK, as the rationality of φ-ing can be determined by a multitude of reasons; but KJS is only about

17. The act with the highest EU given that the ship is seaworthy is to send it to sea, which has an expected utility of 10.

Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.005

Beings of Thought and Action

128

a single reason, which need not fully determine the overall rationality of an action. This also provides a reason to stay clear of PAK. Because the rationality of φ-ing can depend on a multitude of reasons, and hence on a multitude of propositions, tying knowledge of a singular proposition p to the overall notion of rational action (whether rational action is determined in terms of maximizing EU or not) is questionable. As critics have pointed out, if PAK were true, then knowledge of p would come and go in quite implausible ways, because many considerations other than one's epistemic position toward p could affect what it is rational to do overall. This is why I believe we should stay clear of principles like PAK. Since my argument for PE-K avoids such a principle, it is immune to objections that attack such principles.

Now that it is clear that my argument does not rely on a contested principle like PAK, one might still wonder about the notion of stakes. Worsnip (2015a) argues that the notion of stakes is unclear because it allows for two readings. He sets up a decision table (Table 5.2), which I have adapted for HCOE (see Section 5.1). In Table 5.2, Worsnip distinguishes believing j (Bj) from withholding on whether j (WHj). He assumes that if Hannah actually believes j, she will act on j and invite Stanley right away, which he calls reliant belief. If she withholds, she will ask him first. Table 5.2 illustrates the outcomes of Hannah's actions. If Hannah believes j and j is true, then the outcome of her action is (i) success, which is the optimal outcome. She can invite Stanley right away and avoid annoying emails. If not-j is the case and Hannah believes j, then this will have a negative impact on her tenure decision, hence this outcome is (iii) disaster. If Hannah withholds and asks Stanley first about his current position, she will ultimately be successful because, as was stipulated, j is true, although her success is accompanied by a mild inconvenience (ii). 
If j were not true, Hannah would still be successful, as she would then not invite Stanley and could still have a balanced conference lineup (iv). Outcomes (ii) and (iv) are preferable to (iii), while the best outcome is (i).

Table 5.2 Worsnip on stakes

         j                                       Not-j
  Bj     (i) success                             (iii) disaster
  WHj    (ii) success with mild inconvenience    (iv) success with mild inconvenience

Worsnip holds that the notion of stakes is unclear because there are two ways for the stakes to be high. He distinguishes between A-Stakes and W-Stakes. The following is a simplification of Worsnip's account, but sufficient to understand the distinction. W-Stakes are determined by how much it matters how the world turns out to be, independent of one's attitude. In Table 5.2, they are provided by the contrast between (iii) and (i). A-Stakes measure how much it matters which attitude one has, independently of how the world turns out. In our example, they are given by the contrast between (iii) and (iv). Although Worsnip's final account of A-Stakes is more complex than just this contrast, this will do for our purposes. Worsnip rightly points out that in cases like HCOE, both kinds of stakes are high. If Hannah's belief turns out to be false, she faces disaster. So the W-Stakes are high. However, it also matters a lot which attitude Hannah takes toward j. If she believes that j in a not-j world, the consequences are disastrous. So the A-Stakes are high as well.

However, W-Stakes and A-Stakes are not both elevated in all the relevant cases. Here is the example Worsnip provides. You are waiting to hear the results of a recent job application. The interview is done, and there is nothing you can do to affect the outcome. You either get the job or not, and whether you do matters a great deal to you. Thus, Worsnip holds that the W-Stakes are high. Yet nothing hinges on whether you believe or do not believe that you will get the job, since there is no action you could take to change anything after the interview.18 Thus the A-Stakes are low; it does not matter what your attitude is toward the proposition that you will get the job. I think this example shows that for those embracing the notion of stakes, the relevant notion of stakes that features in their argument for PE-K must be the W-Stakes. 
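The two notions can be rendered schematically. The utilities assigned below to outcomes (i) through (iv) of Table 5.2 are my illustrative assumptions, not values from the text; only their ordering matters:

```python
# A schematic rendering of Worsnip's W-Stakes/A-Stakes distinction for
# Table 5.2. Utilities are assumed for illustration: (i) is best, (ii) and
# (iv) slightly worse, (iii) is disastrous.
u = {"i": 10, "ii": 8, "iii": -100, "iv": 8}

# W-Stakes: how much it matters how the world turns out,
# here the contrast between outcomes (i) and (iii).
w_stakes = u["i"] - u["iii"]

# A-Stakes: how much it matters which attitude one takes, holding the world
# fixed at not-j, here the contrast between outcomes (iii) and (iv).
a_stakes = u["iv"] - u["iii"]

print(w_stakes, a_stakes)  # in HCOE both contrasts are large
```

In the job-application case, by contrast, the not-j outcomes would be equally bad whatever one believes, so the (iii)/(iv) contrast, and with it the A-Stakes, would collapse to zero while the W-Stakes stay high.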
Worsnip, though, argues that this does not sit well with the arguments provided in favor of PE-K, which supposedly rely on principles for which A-Stakes are a better match. I will set aside whether this is correct. Arguably, the principles that feature in my argument do not involve A-Stakes, as I have made no reference to the costs of having or not having certain doxastic attitudes. Hence, I could hold that the relevant notion of stakes for PE-K is W-Stakes. However, I am wary of embracing the notion of W-Stakes. W-Stakes are a contrast between two different outcomes of an action depending on a possible state of the world. But it is not clear to me that just any large contrast will be sufficient for high stakes. For example, it might be that φ-ing given p leads to a hugely beneficial outcome, while φ-ing given not-p leads to a neutral outcome. While there is a large contrast in outcome, it is not, at least not to me, intuitively clear that this is a high-stakes situation, as one does not stand to lose anything. On the other hand, there surely is a sense in which missing out on a beneficial outcome could be seen as a loss. But this potential disagreement just shows that interpreting stakes as W-Stakes may not cover one's intended understanding of stakes.

18. This does not mean that this has to be a nondeliberative context. For example, one may have to decide whether to sign a lease for one's favored apartment, while knowing that one will only be able to make the monthly rent if one gets the job.

The significant notions in my principle PE-K* that support PE-K are COE and CFI, which seem to be intelligible notions. Yet, in Chapter 3, I claimed that COE are sensitive to the stakes. One might thus ask what this notion refers to. I think the fitting answer is that COE are sensitive to what I shall call the consequences of a failed action.

Consequences of a Failed Action (CFA): The consequences of a failed action due to a false belief.

While CFA mentions belief, the consequences of a failed action are not merely consequences of a false belief. The qualification is meant to bring out why the consequences of a failed action obtain. The consequences of a failed action are, however, in an important sense, independent of one's belief; they would equally arise if one had no beliefs, but acted as if one had. Hence, we have an understanding of the consequences of a failed action independently of beliefs, and they are not merely consequences of a belief, although they will often arise as a consequence of a belief. To be more precise: For any deliberative context (DC) in which p speaks in favor of φ-ing, CFA are the outcome that a reasonable person would assign to φ-ing given not-p. In our cases, LCOE and HCOE, j – that Jason Stanley is a proponent of pragmatic encroachment – speaks in favor of φ-ing, that is, inviting him directly. However, the CFA vary between LCOE and HCOE. In the former case, an imbalanced speaker lineup is an undesirable outcome, but not a disaster, while in HCOE it would be a disaster. I believe this difference in CFA is what many mean when they refer to a difference in stakes. However, the notion of CFA is not burdened by the ambiguity that the notion of stakes may evoke. As I will explain in Chapter 6, my preferred explanation of PE-K works through CFA. Much like COE, I believe that we have an intuitive grasp of this notion and that this is sufficient for proper theorizing, even if it does not come with mathematical precision. We intuitively grasp whether an outcome counts as desirable or as disastrous. That is all we need for an explanation of PE-K.

The objections bemoaning the unclarity of stakes all involve the machinery of decision theory. This might be an indicator that they are implicitly driven by a demand that all notions involved in the PE-K debate should be translatable into decision-theoretic terms, for the sake of clarity and precision. This demand could also be applied to PE-K*, which relies on COE and CFI. While I am personally not particularly optimistic that this demand can be met, I also think it is not ruled out from the get-go. In Chapter 3, I argued that COE are also sensitive to factors other than CFA, such as the availability of alternative courses of action. It might be possible to translate the availability of further actions into vocabulary currently used in EUT, which would help capture COE.19 Recent work on Good's theorem (see Good 1967), which holds that agents ought to acquire further evidence before making a decision as long as they can do so at negligible costs, might suggest that one can translate CFI into decision-theoretic terms as well.20 So, perhaps a translation of the notions of PE-K* is possible. But I will not engage in such a translation project. That is because the absence of such a translation does not seem objectionable to me. In Chapter 3, I introduced COE and CFI as fundamental and claimed that they cannot be further defined. They may not be part of ordinary language, and hence are technical notions, but they are nonetheless intuitive, or at least no less intuitive than the notions of standard EUT. For example, the notion of an outcome of an action might be vague, because one's action might have consequences in the far future that are not reflected in the immediate outcome of the action. Hence, what counts as the outcome of an action might be unclear, as it is not clear whether only immediate consequences matter or not. But the notion of an outcome of an action is still clear enough to be suitable for further theorizing.
The same can be said for COE and CFI; these may be technical notions, yet they are intuitive. It would not speak against COE and CFI if they could not be translated into EUT notions; nor does the fact that they are not EUT notions. After all, it is not as if the only intelligible notions are EUT notions. Therefore, even if the objections to PE-K that criticize the notion of stakes as unclear are sound, my argument for PE-K is not affected by these objections. When I used the notion of stakes, I only used it as shorthand for a certain factor, CFA, which influences COE. All the heavy lifting in my argument for PE-K is done by the notions of COE and CFI. I have given an account of these notions that makes them sufficiently clear, or so I have argued here.

19. This might be possible because one can add more than one possible action to a decision table and more than just two states of the world for which one has credences. But I will not explore this possibility here.
20. See Buchak (2012), Ahmed and Salow (2019), and Das (2020) for critical discussions of Good's theorem.

I want to close this section with a number of further clarifications, which are partly driven by objections. First, the notion of COE, as it was introduced in Chapter 3, assumes that these costs are relative to the standard of a reasonable person. The same, I suggest, holds for CFA. The relevant notion of consequences is an assignment of consequences of a failed action that a reasonable person could accept. Hence personal indifference cannot drive down CFA, while merely being anxious about some rather far-fetched consequences cannot raise them. Invoking the standard of the reasonable person also helps to deal with certain seeming counterexamples to PE-K*. One might assume that the principle makes the loss of knowledge-level justification too easy to come by. Do I not know that the pill on the table is my vitamin pill? If my partner might have replaced it with a cyanide pill, COE are extremely high (painful death), and CFI (getting the pill tested in a lab) comparably lower. Hence, PE-K* suggests that I don't know that the pill on the table is my vitamin pill. But this objection only works if the COE are actually high, and on my proposal that depends on whether the reasonable person would assume that they are high. I believe that, assuming all other things are normal (my partner has no reason to kill me and has shown no inclination to do so), the reasonable person would not assume that COE or CFA in this context are extremely high. Therefore, the mere possibility of radically bad consequences is not enough to eradicate knowledge.21

Here is an objection that aims at the principle RP, but which also helps us to get clearer on the notions of CFI and COE.22 Suppose I consider treating the proposition that I have hands as a reason to assist somebody who has asked me to help move a heavy object. Absolutely nothing hangs on giving or receiving this help. If I am wrong about having hands, I cannot provide the help I said I would give. This would at least be mildly embarrassing for me and mildly inconvenient for the other party. Consequently, there are some COE, even though they are minimal. However, I can also get more evidence for the proposition that I have hands. I can easily rule out the rather remote, but possible, scenario that my hands have recently been amputated without me noticing – I simply have to look down and check whether my hands are still there. Isn't such simple further inquiry pretty much for free, so that in this case, despite COE being low, CFI do not exceed COE? But then, according to RP, it would not be rationally permissible to treat the proposition that I have hands as a reason to offer somebody help. This seems odd, if not false.

This objection assumes that no matter how low COE may be, there is some further inquiry available with which one can strengthen one's epistemic position, even if only minimally, and which basically comes for free, so that CFI are still below COE. However, I think this assumption can be resisted. First, notice that one can say that while there are COE, they are negligible. Second, and more importantly, we can plausibly say that any further inquiry, no matter how simple, comes with further costs, even if, as in the example, it consists in simply looking down. So there are CFI involved. But we can say that they are negligible. Given this simple redescription of the case, it no longer seems correct that COE exceed CFI. Since COE and CFI are both negligible, they are balanced, and hence it is permissible to treat the proposition that one has hands as a reason. While this means that I ultimately simply disagree with the previous description of the case, it nonetheless highlights two important issues that are worth addressing. First, it highlights the need for a further restriction on assignments of COE and CFI. Again, an appeal to the standard of a reasonable person is helpful. I contend that the reasonable person assigns equally high CFI and COE for the lowest possible costs. This ensures that the COE for the most inconsequential actions cannot be higher than the CFI for the most trivial further inquiry. In other words, the reasonable person sees the CFI of the most trivial further inquiry as equally bad as the COE of a failed but most inconsequential action.

21. Thanks to Wolfgang Barz and Mikkel Gerken for pressing me to address objections along these lines.
22. Thanks to an anonymous reviewer from Cambridge University Press.
One might still worry that even this stipulation will not avoid similar cases. One can imagine cases in which COE ever so slightly exceed CFI, and they bring with them the same odd consequences. This brings me to the second issue. RP, and derived principles, feature mathematical symbols that one might take to imply that CFI and COE are numerically precise notions. If they were, the aforementioned problematic cases would surely exist. However, I think that we are best advised to think of both CFI and COE as being measured on an ordinal scale. This should not be surprising given just how much incommensurability and aggregation is involved in these notions, as I mentioned in Chapter 3. To expect precise numerical values is therefore an unrealistic idealization. As my redescription of the case suggests, we can say that the costs are negligible, low, moderate, high, extremely high, and so on. And we can make intuitive comparisons of various costs along such an ordinal scale. By acknowledging that it is most plausible that COE and CFI are measured on an ordinal scale, one can avoid the possibility that any small difference between COE and CFI can lead to an imbalance that affects rational permissibility. Small differences can still result in imbalances along an ordinal scale. But when, for example, CFI are moderate and COE are high, it does not seem odd that a small difference between them can affect rational permissibility.

Another aspect worth mentioning is that since COE, CFI, and CFA are all notions that apply to a deliberative context, there is not really such a thing as an intrinsically high-stakes proposition, for example, that my family is safe, which seems to be suggested in Howell (2005). It could be true that whenever this proposition is at the forefront of my mind, I am in a deliberative context in which the COE are high. But even then, it remains strictly true that it is due to being in a deliberative context that the COE are high. However, there are not always high COE or grave CFA associated with the proposition that my family is safe. Suppose that I am in my kitchen, it is a peaceful Sunday morning, and I am deliberating about whether to have Darjeeling or Earl Grey for breakfast tea, and that my decision depends on which can be found in the cupboard. In such a context, the proposition that my family is safe is entirely irrelevant to my deliberative context, and hence not associated with high COE or grave CFA.

Finally, I do not believe that my talk about high or grave CFA can be made more precise. It is simply unreasonable to assume that there is a specific value above which CFA count as high. For some, even the loss of a penny may reasonably be seen as a grave CFA, while this would not hold for others. Nonetheless, I believe we have a sufficiently clear intuitive grasp of whether CFA are high in a particular deliberative context, and that this suffices for relying on the notion in further theorizing.
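The ordinal reading can be sketched as follows. The five labels come from the text; the numeric ranking and the permissibility rule (a paraphrase of RP: treating p as a reason is permissible only if COE do not exceed CFI) are my encoding:

```python
# A sketch of COE and CFI measured on an ordinal scale, with RP's
# permissibility condition paraphrased as a comparison of ranks.
LEVELS = ["negligible", "low", "moderate", "high", "extremely high"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def permissible(coe, cfi):
    """Permissible to treat p as a reason iff COE do not exceed CFI."""
    return RANK[coe] <= RANK[cfi]

# Hands case, redescribed: both costs negligible, hence balanced.
print(permissible("negligible", "negligible"))  # True

# HCOE-style case: COE (high) exceed CFI (moderate).
print(permissible("high", "moderate"))  # False
```

Since only whole levels can differ, a "small" difference is already a difference of one ordinal step, which is exactly the kind of difference that plausibly matters for rational permissibility.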

5.5 Competing Explanations of Pragmatic Encroachment on Knowledge

My argument in favor of PE-K does not commit one to an explanation of PE-K. However, such an explanation matters a great deal: I agree with Schroeder (2012: 266) that it remains hard to accept that PE-K could be true and that we would like to have a more elaborate explanation of how it could be true. What we want is what I call an explanation of the mechanisms behind PE-K, which, for example, could account for how the characteristic shifts of knowledge due to varying practical circumstances come about. The derived principle PE-K* leaves us with at least three ways to implement the mechanics that allow for pragmatic encroachment. The first is probably the most obvious. According to PE-K*, it is a necessary condition on having knowledge-level justification that COE do not exceed CFI. We might take this at face value and hold that this is another necessary condition that is also a genuine constituent of the knowledge relation, like belief, truth, justification, and the ever-elusive anti-Gettier condition. Not meeting this condition explains why one lacks knowledge, just like a lack of belief, truth, or justification explains a lack of knowledge. Let's call this the genuine constituent explanation. While it is probably the most obvious choice, it is not the one I will endorse. My reasons for this will emerge in Chapter 7, where I will discuss all three suggested explanations of PE-K.

However, we can distinguish between necessary conditions and what makes a claim true. A necessary condition might be merely an indicator, and it need not be that this indicator also makes it true that the indicated property obtains. It could be that the facts about how CFI relate to COE merely indicate whether one has knowledge-level justification; however, they do not make it true that one has or lacks knowledge-level justification. This gap provides the intellectualist with one last-ditch opportunity to avoid pragmatic encroachment, as I have already acknowledged. They could hold that what makes it true that one has or lacks knowledge-level justification is indeed an overlooked truth-relevant factor that happens to covary with practical factors. They would then have to argue that, in all the relevant cases, the practical factors are explanatorily inert, and that what makes it true that one has or lacks knowledge-level justification is a truth-relevant factor.
But it is hard to see how one could make good on this proposal.23 The problem is not merely that it is unclear what the truth-relevant factor is; for case pairs such as LCOE and HCOE it was stipulated that there is no difference in truth-relevant factors. It is hard to see how the intellectualist could argue that such a stipulation could not reasonably be made for such case pairs.

23. At least on one reading, Nagel (2008) offers a proposal along these lines; however, see Fantl and McGrath (2009: 45) for a critical reply.

For all others, this gap provides an opportunity to provide an explanation of PE-K that is not directly tied to the relation between CFI and COE. All that matters is that the relevant mechanism works through a practical factor so that it counts as an explanation of PE-K and provides verdicts in line with PE-K*. In the following, I will introduce two such explanations of pragmatic encroachment. I am not claiming that these are the only two possible explanations.24 According to the first of these explanations, the shifting thresholds view (STV),25 the threshold for having knowledge-level justification is not constant but can shift relative to practical circumstances.26 While the threshold for knowledge-level justification depends on practical circumstances, the subject has a stable degree of justification independently of her practical circumstances. To make this more vivid, consider the following illustration of STV (Figure 5.1).

24. For example, Schroeder (2012) provides another explanation in terms of reasons for belief and reasons for withholding. I have criticized this account in Mueller (2017b).
25. This is endorsed by Fantl and McGrath (2009), Grimm (2011) and (2015), and Hannon (2017).
26. I am talking of practical circumstances here because there are several ways to implement the idea of shifting thresholds. For example, one could say that the threshold is sensitive to what one can permissibly treat as a reason for action, or that the threshold rises as the COE rise (with the limitation that one cannot have knowledge-level justification when COE are higher than CFI).

[Figure 5.1 Shifting thresholds view (STV): a bar chart plotting the degree of justification (y-axis, scaled 0 to 1) in the two contexts LCOE and HCOE (x-axis)]

On the x-axis, we have two different contexts that vary in practical circumstances, LCOE and HCOE. They have all the features given in Section 5.1. The y-axis measures the degree of justification. For the sake of illustration, suppose that justification can be measured on a scale between 0 and 1, where 1 is the maximal degree of justification. The striped columns
symbolize one’s degree of justification, which is constant in both LCOE and HCOE and sits at 0.85. However, according to STV, the threshold for a degree of justification that suffices for knowing shifts with practical circumstances. The solid bar going from left to right in Figure 5.1 indicates the threshold for knowledge-level justification. Since the threshold shifts upwards as practical circumstances change, one’s degree of justification suffices for knowing in LCOE, where the threshold sits at 0.8 but does not in HCOE, where the threshold sits at 0.9. In the current literature, we find two diverging views about how to embed STV. According to the first suggestion, the practical adequacy threshold defended in Fantl and McGrath (2009), one’s degree of justification must be practically adequate to amount to knowledge-level justification. Fantl and McGrath (2009: 26) hold that to know that p, p must be probable enough to be properly put to work as a basis for belief and action. “Probable” here refers to what I have called one’s degree of justification. Setting aside the concerns about the practical adequacy condition I raised in Section 5.4, we can assume that practical adequacy, that is, “being properly put to action,” simply means that one can rationally rely on p in one’s decision making. Since Hannah can rely on j in LCOE, but not in HCOE, the threshold for knowledge-level justification was shifted upwards. According to the second suggestion, the communal threshold introduced in Grimm (2015) but spelled out in greater detail in Hannon (2017), the threshold for knowledge-level justification is set by the practical concerns of the members of one’s community when it comes to relying on someone’s testimony. 
How good one’s epistemic position must be in order to count as meeting the threshold for knowledge-level justification depends on whether one can serve as an informant in one’s community, that is, whether one’s community can rely on one’s testimony, given its usual practical concerns. Hannon (2017: 617) explicitly argues that the threshold for knowledge can be raised when the individual faces a decision in which the COE are far above usual practical decisions in one’s community. As I will explain in Chapter 7, I believe that the communal threshold account is in a better position to deal with certain cases than the practical adequacy threshold account. For now, though, I merely introduce STV, which allows for at least two different explanations of how PE-K works, as outlined earlier. While STV is currently quite popular, one could equally explain PE-K if one moves to what many consider a more radical position. Instead of holding that merely the threshold values required for knowing vary with

Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.005

Beings of Thought and Action

138

Total Pragmatic Encroachment (TPE) 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 LCOE

HCOE Degree of justification

Figure 5.2 Total pragmatic encroachment (TPE)

practical factors, one could argue that one’s degree of justification is what is affected by practical factors. The pragmatic encroaches epistemology thoroughly, not just on knowledge via a variable knowledge-level justification threshold. Let us call this position Total Pragmatic Encroachment (TPE). Again, an illustration is helpful in order to grasp TPE (Figure 5.2.) Although TPE is not committed to it, we assume that the knowledge threshold remains constant, thus the gray bar has the same height in both scenarios. According to TPE, one’s degree of justification changes with practical factors. Thus, the striped column does not remain constant across the two scenarios; it sits at 0.87 in LCOE and thus above the threshold for knowledge, which is at 0.85 and stays constant for all cases. In HCOE, one’s degree of justification is at 0.8, which is below the threshold for knowledge-level justification, and so one fails to know in HCOE. Both STV and TPE are equally compatible with PE-K* and offer a potential explanation of PE-K. In principle, it would be possible to combine TPE and STV. One could maintain that practical factors affect both one’s degree of justification and the threshold value required for knowing. However, each explanation on its own suffices to explain PEK. Therefore, unless there is some further advantage to be enjoyed by a combinatorial view, there is no need to go for this. Since I am not aware of any such advantage, I will set this possibility aside.
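Using the illustrative numbers from Figures 5.1 and 5.2, the two mechanisms can be contrasted in a small sketch; the functions below are my schematic restatements, not part of either view's official formulation:

```python
# STV and TPE deliver the same verdicts (knowledge-level justification in
# LCOE, not in HCOE) but locate the pragmatic shift differently.

def knows_stv(context):
    degree = 0.85                                    # stable degree of justification
    threshold = {"LCOE": 0.8, "HCOE": 0.9}[context]  # threshold shifts with context
    return degree >= threshold

def knows_tpe(context):
    degree = {"LCOE": 0.87, "HCOE": 0.8}[context]    # degree shifts with context
    threshold = 0.85                                 # threshold stays constant
    return degree >= threshold

for ctx in ("LCOE", "HCOE"):
    print(ctx, knows_stv(ctx), knows_tpe(ctx))  # LCOE: True True; HCOE: False False
```

The agreement in verdicts is why both views count as explanations of PE-K; they differ only over which quantity the practical factors act upon.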


Here are some preliminary considerations in favor of TPE. It might be seen as the more radical and hence stronger thesis, as it holds that there is not only pragmatic encroachment on knowledge but also on justified belief and degrees of justification. But this does not make TPE a stronger thesis than STV. According to TPE, pragmatic encroachment on knowledge is just a side effect of pragmatic encroachment on degrees of justification. It does not postulate an additional form of pragmatic encroachment; STV and TPE merely differ regarding the level at which they locate pragmatic encroachment. Nonetheless, I agree that TPE is a more dramatic move away from orthodoxy. While STV leaves at least degrees of justification untouched by practical factors, TPE does not. However, once we have arguments against our intellectualist proclivities for knowledge, it is not clear why we must uphold them for degrees of justification (even if we have no principled argument that shows that degrees of justification are sensitive to practical factors). Furthermore, that STV leaves degrees of justification untouched by practical concerns threatens to diminish interest in one of the key epistemological notions – knowledge. As I have admitted, pragmatic encroachment is a thesis that one would rather avoid. Since there is an underlying pure epistemic notion, degrees of justification, under the pragmatically muddled notion of knowledge, one might be tempted to denounce knowledge as a central notion of proper epistemology. One might say that proper epistemology only concerns pure notions such as degrees of justification. In light of the current resurgence of formal epistemology, which often does not make reference to knowledge, this abandonment may not even be seen as extreme, but only as a proper consequence once the pure and impure epistemic notions have been identified. 
The very name “epistemology” might then have been a huge misnomer, given that epistemologists were only ever interested in notions that are entirely determined by truth-relevant factors, which knowledge is not. By allowing for an underlying pure epistemic notion, STV owes us an answer as to why there are epistemically pure notions at all, and why one should bother with pragmatically polluted notions if, as was traditionally assumed, there is an epistemically pure notion after all. As I said, a more detailed assessment of STV and TPE must wait until Chapter 7, where I will also return to many of the worries that have been raised against PE-K. So far, TPE is a merely theoretical option. To the best of my knowledge, no pragmatic encroacher has explicitly endorsed it, let alone provided a more detailed account of it. Chapter 6 is dedicated to precisely this – spelling out my preferred account of TPE.


Chapter 6

Reasons for Belief and the Primacy of the Practical

Even if one accepts that practical factors encroach in epistemology, one wonders just how that can be, and what mechanics lead to the shifting knowledge-judgments that PE-K is associated with. In Chapter 5, Section 5.5, I introduced two accounts of the explanation and mechanics of PE-K, the shifting thresholds view (STV) and total pragmatic encroachment (TPE). This chapter is devoted to developing my account of TPE. While the view itself is sometimes mentioned in the literature, there is, to the best of my knowledge, no detailed account of it. While my account makes a claim to originality, it makes no claim of exclusivity regarding TPE. I believe that my account of TPE is not the only possible account, but hopefully among the more plausible versions of TPE.

Before I continue to develop this account of TPE, I want to preempt a question that would otherwise arise at the end of this chapter. How does my argument for PE-K relate to my suggested explanation and mechanism behind pragmatic encroachment? Perhaps surprisingly, I believe that there is no deep theoretical relation here. But this disjointedness does not speak against my account of TPE or my argument for PE-K, nor against maintaining them jointly. This opportunity to separate the argument and the explanation of PE-K arises because the relevant conditional in the argument for PE-K, PE-K*, might not provide a genuine constituent of the knowledge relation, as I pointed out in Chapter 5, Section 5.5. When my account of TPE is on the table, I will say more on the relation between my argument and my explanation of PE-K.

This is the plan for the chapter. In Section 6.1, I introduce some motivation for the kind of account of TPE that I will develop. In Section 6.2, I give an informal theory of the strength of reasons for belief.
In Section 6.3, I explain how this theory, plus assumptions about the function of belief, leads to pragmatic encroachment on the strength of reasons for belief, which entails that the practical has primacy over the epistemic. In Section 6.4, I deal with objections, and in Section 6.5, I suggest how to handle cases of ignorant and apparent high CFA.

6.1 Pragmatic Encroachment: From Knowledge to Reasons for Belief

In Chapter 5, Section 5.1, I began with the case-based argument for PE-K. I have backed up this case-based argument with a principle-based argument in favor of PE-K. The success of this argument should lead us to reconsider our skepticism about the case-based argument for PE-K. If you had the relevant intuitions about shifting knowledge from LCOE to HCOE, then these intuitions were vindicated by the principle-based argument, which also suggests that the shift is due to a change in practical factors. If there are other case pairs that concern epistemic evaluations other than knowledge, I believe we have at least some reason to assume that they are also sensitive to practical factors, even if we have no principle-based argument to back this up. In fact, that there are such case pairs has been acknowledged in the literature. To stay true to it, I will now switch back to the perhaps familiar bank cases, although they are structurally identical to the cases I gave in Chapter 5, Section 5.1, LCOE and HCOE. Wedgwood (2008: 7) notes that we could construct bank cases that do not involve the term “knows” but, instead, the term “good reason for believing,” such as the following.

Low Stakes Good Reason (LGR)
Hannah and her wife Sarah are driving home on a Friday afternoon. They plan to deposit their paychecks. It is not important that they do so, as they have no impending bills. As they drive past the bank, they notice that the queues inside are very long. Hannah points out that she was at the bank last Saturday, and then claims that this gives her a good reason for believing that the bank will be open tomorrow.

High Stakes Good Reason (HGR)
Hannah and her wife Sarah are driving home on a Friday afternoon. They plan to deposit their paychecks. Since they have an impending bill to pay, and very little money in their account, it is very important that they deposit their paychecks by Saturday.
Hannah points out that she was at the bank last Saturday. Hannah claims that she has good reason to believe that the bank will be open tomorrow. Sarah wants Hannah to consider that banks sometimes change hours. Still, Hannah claims that she has a good reason for believing that the bank will be open tomorrow.

Note that these cases involve the term “reason” as a count noun, so they are about the quality of an individual reason, not about an all-things-considered notion. This variation of the bank cases can, for me at least, elicit a change in judgment from LGR to HGR similar to the original bank cases. I agree with Hannah’s claim in LGR, but I believe her claim in HGR to be false. While Wedgwood does not endorse the following claim, if we take the change between the cases at face value, as in the cases about knowledge, this suggests that there is pragmatic encroachment on the quality of one’s reasons, PE-R.1

Pragmatic Encroachment about Reasons for Believing (PE-R)
The strength of one’s epistemic reasons for believing partly depends on practical factors.

Skeptics about PE-K might think that this further variant of pragmatic encroachment just introduces further absurdity that is best avoided. However, quite to the contrary, I believe that this variant suggests an appealing explanation of PE-K. I take pragmatic encroachment on reasons for belief to be fundamental. Thus, I will offer an explanation of how there could be pragmatic encroachment on the quality of one’s reasons for belief or, as I will say, their strength. Since what one knows depends on one’s reasons for belief, this explanation can also cover the appearance of pragmatic encroachment on knowledge even if, ultimately, there is only pragmatic encroachment on reasons for belief.

A few words on PE-R. Following Kelly (2014), I assume that the terms “evidence” and “reason for belief” can be used interchangeably. Thus, I am open to accepting that the views I develop could be said to be a form of pragmatic encroachment on evidence. However, some are concerned that the equation of evidence and reasons for belief cannot be maintained. Whatever the final verdict is, I see myself mainly committed to advancing a thesis about reasons for belief. The notion of a reason for belief is to be understood as an epistemic reason for believing, as opposed to a practical reason for believing, such as the benefits of believing. For the sake of brevity, I will often drop the qualification “for believing” when I talk about such reasons.

PE-R was anticipated in Stanley (2005). In passing, Stanley (2005: 201) mentions that practical factors might influence the quality of one’s evidence. But no further elucidation is offered. What I offer in this chapter is an account of this variety of pragmatic encroachment, although I prefer to put this in terms of reasons for belief instead of evidence. In later writings, Stanley picks up the idea that there is pragmatic encroachment on the quality of one’s evidence and backs it up with empirical findings (Sripada and Stanley 2012). While there is no further development of the theory, the results are worth mentioning. They suggest that the intuitions I tried to elicit by using LGR and HGR are not merely an armchair confabulation, but shared by laypeople. And they also suggest that this intuition really is about the strength of an individual reason. Sripada and Stanley’s study involves a case about food allergies that is structurally just like the case pair LGR and HGR I gave. They asked participants to rate the quality of the evidence on a 7-point scale, and they found that participants rated the quality of evidence to be lower in high-stakes cases. Sripada and Stanley (2012: 14) acknowledge that they only found a moderate effect, yet this study adds at least some support for PE-R.

Before I turn to PE-R, I want to give a more general motivation for my kind of theory. In the theory of reasons for action, we find an analogue to the theory of reasons for belief that I will propose. Dancy (2004) argues for a situation-sensitivity of reasons for action. The fact that Emma needs my help can be a strong reason for me to help her in a situation where there is nobody else around. However, the fact that Emma needs help might be a very weak reason to help her when I have important prior commitments that conflict with helping Emma and when many other people around could help her. That the strength of a reason for action is not inherent to the consideration that is the reason, but rather something that depends on a wider array of circumstances, seems widely accepted. I will propose an analogue in the epistemic domain.

1 This is not meant to deny that there are a number of other ways one could account for these case pairs. For example, one might argue that they support some form of contextualism about “good reasons for belief,” just as the original cases provided some support for contextualism about “knowledge.” I simply assume that if the original case pairs provide some evidence for pragmatic encroachment on knowledge, then these provide some evidence for pragmatic encroachment on the quality of one’s reasons.
A consideration that counts as a reason for belief does not inherently carry a certain strength value. The strength of a reason for belief is sensitive to a particular situation. This by itself does not imply pragmatic encroachment just yet. However, additional assumptions about the function of belief suggest that there is pragmatic encroachment on the strength of one’s reasons for belief.

As a starting point, I suggest that we look at knowledge and reasons for belief from the perspective of inquiry, that is, the search for an answer to a question through the gathering of reasons.2 I hold that all propositional knowledge can be construed as knowing an answer to a whether-question:

If S knows that p, then S knows whether p.

For some cases, it might be a strain to say that there was any sort of inquiry going on. For example, I might come to know that it is raining by visual perception, even though I was not concerned with an inquiry into the weather conditions. Still, I thereby know the answer to the question whether it is raining. The purpose of this detour will become clearer in the following section, as my guiding idea for giving a theory of the strength of reasons is to look at how we find answers to questions in inquiry by using available reasons.

2 A similar general approach to reasons can be found in Hieronymi (2005); she argues that reasons are considerations that have a bearing on certain questions.

6.2 The Strength of Reasons for Belief

My aim in this section is to offer an informal theory of the strength of reasons for belief. I will look at some intuitive orderings of the strength of various reasons. Given these orderings, I will try to reverse engineer what drives our strength judgments. Here is our first case.

Lame Murder Mystery
V was murdered last night. Inspector Lewis is asked to find the killer. In the course of his investigation, he acquires the following evidence. There are four possible suspects: the gardener G, the maid M, the butler B, and V’s nephew N. No one else could possibly have entered the house that night and killed V. N has a motive. It is not known whether anybody else has a motive. The murder weapon was found in N’s room; B also has access to N’s room, but besides them, nobody else does. As it turns out, there is a surveillance camera that caught N killing V.

There are different reasons for believing that N is the murderer, but they vary in strength. The intuitive ordering by rising strength is:

(1) N has a motive to kill V.
(2) The murder weapon was found in N’s possession.
(3) N is on tape killing V.

But why is this? We might think of Lewis’s murder inquiry as asking the question, “Who killed V?” According to a standard semantics for questions proposed by Hamblin (1958), the meaning of a question is the set of possible answers. In our case, where we have a very limited set of suspects and thus a limited set of possible answers, the meaning of the question “Who killed V?” is given by the following set: {N killed V; B killed V; G killed V; M killed V}
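These sets, and the intuitive ordering of (1)–(3), can be given a small illustrative rendering. The Python encoding and the stipulated rule-outs below are my own sketch, not the author’s formalism; they merely restate the case description in set-theoretic terms.

```python
# A toy rendering of Hamblin's semantics: the meaning of "Who killed V?"
# is its set of possible answers.
question = {"N killed V", "B killed V", "G killed V", "M killed V"}

# Model each reason by the rival answers it rules out (stipulated from the
# case description: only B and N have access to N's room; the tape shows N).
ruled_out = {
    "(1) N has a motive":           set(),
    "(2) weapon found in N's room": {"G killed V", "M killed V"},
    "(3) N is on tape killing V":   {"B killed V", "G killed V", "M killed V"},
}

# Sorting by the number of rival answers eliminated reproduces the
# intuitive ordering by rising strength: (1), then (2), then (3).
ordering = sorted(ruled_out, key=lambda r: len(ruled_out[r]))
print(ordering)
```

On this toy encoding, (1) eliminates no rival answer, (2) eliminates two, and (3) eliminates all three, which matches the ordering the text motivates informally.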

Once he possesses reasons (1)–(3), Lewis knows that N is the killer and thereby comes to know the answer to the question he was inquiring about. The reasons point him toward the correct answer to his question, but not all reasons are equally good indicators of what the correct answer is. A natural suggestion is thus that the strength of a reason is its potential to settle a question:

Basic Idea
In situation S, the strength of a reason r to believe p is r’s potential to settle the question whether p. In S, the question whether p is settled in favor of p iff all not-p alternatives in play in S are (sufficiently) ruled out.3

I will spell out the need for all the details and some nonobvious consequences contained in Basic Idea below, with further cases. But let us begin with Lame Murder Mystery. What it takes to settle the question, as the example suggests, depends on whether one can rule out certain alternative possible answers. (3) rules out anybody but N as the killer, while (1) does not rule out any option and (2) only rules out G and M. Therefore, (3) is a stronger reason than (2), while (2) is stronger than (1). I am relying on a liberal understanding of “ruling out” (hence the “sufficiently” in parentheses in Basic Idea). Accordingly, an alternative can count as ruled out if its probability is sufficiently diminished. Note that I am not saying that (1) is not a reason because it fails to rule out anything. My proposal is silent on whether a consideration is a reason – another approach will have to settle this question. I am only saying that (1) is a weaker reason than (2) because it rules out fewer possible answers than (2).

While this initial case suggests that the strength of a reason is related to ruled-out alternative possible answers to the question under inquiry, this is, as we will see, not the entire story. The following case pair involves a single piece of evidence that varies in strength from Case 1 to Case 2.

Case 1
Inspector Lewis is called to a murder scene. He finds a blond hair close to the victim. There are four possible suspects; besides them, nobody else could have committed the crime. They can be identified by their hair color: Blond, Brown, Black, and Red.

Case 2
Everything is as in Case 1, except that the four suspects cannot be uniquely identified by their hair color because there are two blondes. The suspects are Blond, Blond*, Brown, Black.

3 As the talk of alternatives suggests, my thinking here is influenced by the relevant alternatives theory of knowledge, as held, for example, by Dretske (1970). I cannot go into a detailed comparison, but the main divergence is that I am not putting forward a theory about knowledge, but a theory of reasons for belief. I believe that I could develop my account in another framework, for example, in terms of truth-tracking across possible worlds so that the strength of a reason is related to how well it tracks truth across sets of possible worlds. However, I will not explore this alternative framework here.


Intuitively, the blond hair is a reason to believe that Blond did it; however, it is also intuitively a weaker reason in Case 2 than in Case 1. How do we explain this difference in strength? In both cases, Lewis, upon finding the blond hair, considers whether Blond did it. The set of possible answers to this question is: {Blond did it; ~Blond did it}

What varies between the cases is how “~Blond did it” can come about. Just as the question whether p can be conceived as the set of possible answers, we can think of the ways in which ~p can come about as a set. We can call this set the set of alternative possibilities, and its members simply alternatives. For the aforementioned cases, we get the following meanings of the question and sets of alternative possibilities:

Case 1
Possible answers: {Blond did it, ~Blond did it}
Set of alternative possibilities: ~Blond did it = {Brown did it, Black did it, Red did it}

Case 2
Possible answers: {Blond did it, ~Blond did it}
Set of alternative possibilities: ~Blond did it = {Blond* did it, Brown did it, Black did it}

Assuming that the blond hair is a clear indicator of the hair color of the guilty party, and that we have only four possible suspects, the blond hair rules out all of the following: that Brown did it, that Black did it, and that Red did it. So, in Case 1, the blond hair can settle the question of who did it. It rules out all alternatives in the set “~Blond did it.” But this does not hold in Case 2, where the reason is weaker. While the blond hair rules out that Brown or Black did it, it does not rule out that Blond* did it. This case pair demonstrates that the strength of a reason is not constant, but situation-sensitive. The strength of a reason is not something inherent to the consideration that is a reason, but depends on a wider array of circumstances. Hence the need to index the strength of a reason to a situation, as I did in Basic Idea.

This case pair might suggest that the strength of a reason is determined by how many alternatives can be ruled out but, as I said before, this is not the entire story. Adding a third case to the mix shows that things are not that simple. In a variation of Case 2, Case 2*, Lewis receives information, at a later time, that there is a further suspect with blond hair, Blond**. Accordingly, while the set of possible answers stays the same, the set of alternative possibilities does not.


Case 2*
Possible answers: {Blond did it, ~Blond did it}
Set of alternative possibilities: ~Blond did it = {Blond* did it, Blond** did it, Brown did it, Black did it}

Intuitively, the strength of the reason to believe that Blond did it is reduced from Case 2 to Case 2*. In Case 2*, the blond hair is an equally good reason to believe that Blond* did it or that Blond** did it, while in Case 2, it is compatible with only one other alternative, thus making it likelier that Blond did it. However, this cannot be captured by the idea that the strength of reasons is best conceived in proportion to the number of alternatives that are ruled out. The alternatives that are ruled out in Case 2 and Case 2* are the same. In both cases, there are two alternatives that are ruled out, namely, that Brown did it and that Black did it. What changes between Case 2 and Case 2* is the number of alternatives that are not ruled out. Therefore, the strength of a reason, its potential to settle a question, depends not only on the number of alternatives that are ruled out. Its strength also depends on the number of alternatives that are not ruled out.

A nonobvious idea contained in Basic Idea is that the strength of a reason depends not only on what it can do but also on what it cannot do. In a nutshell: The more alternatives that are not ruled out, the weaker one’s reason, as it is less potent to settle the question. That the strength of a reason does not only depend on the things it can do but also on the things it cannot do is just a special instance of a phenomenon that frequently occurs in evaluative assessments. How one is assessed as a candidate on the job market depends not only on the qualities one has but also on the qualities one does not have. Candidate C might have an excellent teaching portfolio. However, he might lack a good publication record. Depending on the situation, this will affect how he is assessed as a potential fit for the job. If the only relevant hiring qualification is teaching experience, then C is a good candidate for the job.
But if the hiring requirements include teaching experience and a publication record, C would be a lesser candidate, although nothing about C has changed. The same holds for reasons. The blond hair is the same in all the cases, yet, in some cases, it becomes a weaker reason because it does not have the quality of ruling out certain alternative possibilities that must be considered.

Most real-world situations are more complex. Indeed, most real-world murder scenarios are more complex because they often do not come with a limited set of suspects. More generally, most real-life scenarios do not come with a limited set of alternative possibilities. And most reasons that we have cannot rule out all the possible alternatives that one could imagine. Hence the need for the restriction to alternatives in play in S, in Basic Idea. In order to settle common questions of everyday life, which is what our reasons allow us to do, they need not be able to rule out many alternatives. That is because if we ask ourselves whether p, there may not always be obvious restrictions, but there are restrictions nonetheless as to which not-p alternatives are in play. In ordinary inquiry, a very limited number of not-p alternatives need to be ruled out in order to settle the question and to know the answer to whether p. When I ask myself whether it is 1 p.m. yet, I usually take a look at my watch. Whatever my watch says, that is sufficient to settle the question whether it is 1 p.m. yet. However, there are many ways in which not-1 p.m. can come about even if my watch says that it is 1 p.m. Perhaps there was a recent switch to daylight saving time and I have forgotten to adjust my watch, or the battery might have died at 1 a.m. last night. But normally, we do not consider such alternatives. Our ordinary inquiries are not very deep inquiries. That is why we can settle questions based on the reasons that we have. Because there are few possibilities to consider, there are few alternatives that are not ruled out by our reasons.

More schematically speaking, when we ask ourselves whether p, the set of possible not-p alternatives is usually restricted. In most ordinary situations, at least, not every not-p possibility needs to be ruled out in order to settle the question whether p. If we take the skeptic seriously, we are willing to consider many more not-p possibilities that we would never take seriously in ordinary situations. Given the reasons that we have, a lot of not-p possibilities are then not ruled out and thus, according to my proposal, our reasons become weaker.
However, we need not concede to the skeptic that our reasons are generally weak, because we need not always consider all not-p possibilities, as the skeptic might demand of us. Remote not-p possibilities are not in play in ordinary situations, and thus our reasons are stronger because there are fewer or no not-p alternatives that are not ruled out by our reasons. But what does it mean to say that an alternative is in play? We can invoke the reasonable person standard as an answer to this question. An alternative is in play in S if a reasonable person would consider it in S. A reasonable person may be willing to engage with the skeptic in the epistemology classroom and assume that skeptical alternatives are in play. But when they wonder what time it is, they will discard these alternatives when they look at their watch to find out the time.


Consequently, I propose that our situation determines which and how many alternatives are in play and need to be considered in inquiry. It is in this sense that my account makes the strength of reasons sensitive to one’s situation. We could also conceive of a situation along the familiar lines of the notion of a conversational common ground. What we take for granted or which possibilities we take seriously can vary as the conversational context or, as I would say, the situation, changes. As Greco (2015: 191) has pointed out, we can take this model and apply it to internal monologues as well – for example, Inspector Lewis’s internal monologue as his situation in Case 2 changes to the one presented in Case 2*.

Here is a quick summary of the theory developed so far. I suggested that the strength of a reason is its potential to settle the question we are inquiring about. We noticed that a single reason may settle a question in one situation, but not in another. Since the only thing that changed between these situations was which not-p alternatives were in play, we concluded the following: The strength of a reason is partly determined by what it cannot do, that is, by which alternatives it cannot rule out. Which alternatives cannot be ruled out depends partly on the situation, because the situation determines which alternatives need to be considered. So changes in the situation can lead to changes in the strength of a reason.
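The theory summarized here can also be put as a toy computational sketch. The encoding below is mine and purely illustrative: a reason settles the question iff it rules out every alternative in play, and the alternatives it leaves standing are what weaken it.

```python
# Toy model of Basic Idea: a reason settles the question whether p iff it
# rules out every not-p alternative in play in the situation; alternatives
# left standing weaken the reason. All names and sets are stipulations.

def surviving(in_play, ruled_out):
    """Not-p alternatives in play that the reason fails to rule out."""
    return in_play - ruled_out

def settles(in_play, ruled_out):
    """True iff every alternative in play is ruled out."""
    return not surviving(in_play, ruled_out)

# The blond hair rules out every non-blond suspect.
hair_rules_out = {"Brown did it", "Black did it", "Red did it"}

# Alternatives to "Blond did it" in play in each situation:
case1 = {"Brown did it", "Black did it", "Red did it"}
case2 = {"Blond* did it", "Brown did it", "Black did it"}
case2_star = {"Blond* did it", "Blond** did it", "Brown did it", "Black did it"}

for name, alts in [("Case 1", case1), ("Case 2", case2), ("Case 2*", case2_star)]:
    print(name, "settled:", settles(alts, hair_rules_out),
          "| left standing:", sorted(surviving(alts, hair_rules_out)))
```

Run on the three situations, the same piece of evidence settles the question in Case 1, leaves one alternative standing in Case 2, and two in Case 2*, mirroring the claim that strength is indexed to a situation rather than inherent to the consideration.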

6.3 Pragmatic Encroachment on Reasons for Belief and the Primacy of the Practical

So far, I have introduced a general theory of the strength of reasons. This theory was put forward without pragmatic encroachment in mind. In this section, I explain how this theory allows for pragmatic encroachment on the strength of reasons, based on the idea that our situation includes our practical situation and that the state of belief serves a special function.

I have talked about reasons for belief; now, I suggest that we take a closer look at the attitude of belief and its function. I agree with Holton (2008: 36), who holds that belief is ultimately a practical attitude, in the following sense. We are beings that have cognitive limitations. We are unable to do all our reasoning, both epistemic and practical, in terms of credences, as this goes beyond our cognitive capacities.4 From a practical standpoint, it makes sense to assume certain propositions outright, even if we acknowledge that we could be wrong. Holton thus writes: “all-out beliefs enable us to resolve epistemic uncertainty in order to facilitate action. They allow us to reduce an unmanageable amount of information to a manageable amount by excluding certain possibilities from our practical reasoning” (Holton 2008: 37).

The idea that we can greatly simplify practical reasoning by excluding or ignoring certain possibilities has long been acknowledged. Harsanyi (1985: 6) writes that treating a proposition as certain, when one is aware that it is not certain, greatly simplifies decision problems.5 The point is implicit in Savage (1954) and his problem of planning a picnic: often the set of consequences of one’s action is uncertain. The point has recently been made explicit in Ross and Schroeder (2014). Suppose that you are deliberating about which train to take to return a DVD to the store. In your deliberation, you might rely on the belief that if you take train A, you will be able to return the DVD on time; however, you ignore the possibility that you might get mugged on the way to the train or that the train might not make it in time because of an accident. In other words, you believe that you will not get mugged, or that there will be no accident, although you might acknowledge that there are such possibilities.

For beings like us, that is, beings with cognitive limitations, ideal Bayesian reasoning done entirely in terms of credences is simply not an option. Doing so would require us to take into account too many possibilities given our cognitive abilities. Beings like us have a need for another mental state, belief, that comes with lower computational demands. The function of belief is therefore closing off uncertainty and settling on the truth of a proposition that we can then employ in further reasoning. And to do that, we consider only a number of possibilities, not every logical possibility. This is not to say that beings like us do not also have a need for a more fine-grained doxastic state like credence.

4 The idea that the purpose of belief is to help us deal with our cognitive limitations is also found in Smithies (2012: 278), Tang (2015), and Staffel (2019).
Settling on the truth of a proposition and closing off uncertainty will not always be the fitting epistemic response given our reasons. We sometimes must act in situations in which we cannot rationally close off uncertainty, where forming an outright belief might not be supported by our reasons. Yet, in such a situation, a credence, understood as rational credence that is based on our reasons, might be available. This might be when our reasons leave too many alternatives uneliminated to close off uncertainty, or even when there are few alternatives in play, but our reasons fail to eliminate these alternatives.6 Yet they may make certain alternatives sufficiently unlikely and support a certain credence. Therefore, the function of credence is complementary to the function of belief. It enables both a doxastic representation of the world and decision making in the absence of belief, that is, when belief is not sufficiently supported by our reasons for believing.

While the given account of the function of belief is well established, little is said about how it integrates with the considerations that rationalize beliefs, that is, our reasons. If belief has the function described, then it is natural to assume that reasons for belief must act in a way that supports this function in order to rationalize belief. The account of the strength of reasons developed in Section 6.2 is a good fit for rationalizing belief given its function. As suggested earlier, our ordinary inquiries are often not very deep and take into account only a very limited number of alternatives. As explained in Section 6.2, if the strength of our reasons is partly determined by the number of alternatives in play, then it is easily explained how, in ordinary situations, our reasons for belief are strong enough to rationalize belief. If there are few alternatives in play, then our reasons will suffice for closing off uncertainty, even though they only rule out a fairly limited number of alternatives and not every logical possibility. Given that there are not many alternatives in play, our reasons are strong enough to rationalize belief.

If the function of a mental state works hand in hand with the reasons that rationalize that mental state, one might wonder if there are reasons for credences on top of reasons for belief. I believe that the foregoing does not require us to multiply reasons for doxastic attitudes. My account of reasons for belief can explain why certain credences are rational to have, and it can explain why stronger reasons license a higher credence and weaker reasons a lower credence.

5 Harsanyi talks in terms of acceptances, not of beliefs, but this point remains unaffected by the terminology.
6 One example of such weak reasons might be statistical evidence, which might make it likely that p, but which is entirely compatible with not-p.

Reasons for Belief and the Primacy of the Practical

Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.006
If the strength of reasons for belief is partly determined by how many alternatives are not eliminated, then it makes sense to say that many uneliminated alternatives call for a lower credence and more eliminated alternatives call for a higher credence. So even though I have presented an account of reasons for belief, it can cover at least one other doxastic attitude – credence. Even if these two doxastic attitudes differ in their function, one need not develop a separate account of reasons for credences. I will return to some ramifications my account of reasons has for credences but, for now, let us focus on beliefs.

Settling on a belief and closing off uncertainty need not be permanent. Rational belief should not make us dogmatic. It is possible that we receive further reasons that have a bearing on whether p and, in some situations, it is entirely rational to reconsider whether believing p and thereby closing off uncertainty is rational and whether the degree of justification that one’s belief has is sufficient to act upon it. Holton makes this point by comparison with intending: “Knowing when to conclude practical deliberation and form an intention, and when to open the deliberation up again, is an open-ended practical skill, one that is highly sensitive to the environment in which one finds oneself. The same is true of belief formation” (Holton 2014: 15).

I suggest it is not only receiving further reasons that have a bearing on whether p that can bring it about that one reconsiders a belief. So can a change in one’s practical situation. This has been acknowledged quite independently of the discussion of pragmatic encroachment. What Harsanyi (1985: 9) labels “practical certainty” (or, at other times, “acceptance”) is what I understand to be belief:

If a person has decided to assign persistent practical certainty to a given composite statement s*, he may later come to re-examine this decision . . . [O]ne possible reason is that he is now facing a decision problem D in which the penalty . . . he would incur by acting on the assumption that s* . . . was true when in fact it was untrue would be unusually high.

I agree with Harsanyi, and it seems that one can easily translate this into the idiom I introduced in Chapter 5, Section 5.4. Our practical situation will not always stay constant. We might move from a nondeliberative context to a deliberative context, or the kind of deliberative context we face may change suddenly. The relevant notion here is not costs of error (COE), but that of CFA, which influence COE. CFA are simply what Harsanyi calls the “penalty” if one’s belief turns out to be untrue. When CFA are high, this can bring it about that one reconsiders a belief and whether, given one’s reasons, it would be rational to close off uncertainty and, furthermore, if its degree of justification is sufficient to be acted upon. If settling on a belief need not be permanent and can be influenced by practical factors, then reasons for belief must also reflect this in order to rationalize belief changes. The theory of the strength of reasons I offered can reflect this. Plausibly, CFA can affect how many alternatives we consider. And this can affect the strength of our reasons and, subsequently, whether we give up on a belief that we hitherto held, and our degree of justification if we continue to hold the belief.

That this is actually happening is also suggested by psychology literature that has made its way into the philosophy literature (for example, Nagel (2008)). Kruglanski and Mayseless (1987: 835) write that fear of invalidity, that is, fear of having a false belief, leads to “general openness to judgmental alternatives, in the interest of avoiding a costly mistake,” and to “intensify the generation of alternative hypotheses.” Buckwalter and Schaffer (2015: 222), in reference to Kruglanski, assume that the standard story of the psychological effects of stakes or, as I would say, elevated CFA, is that subjects with a heightened fear of false beliefs entertain more alternative hypotheses. Thus it would not strike me as surprising if subjects in scenarios like HCOE or HGR, where CFA are high, entertain more alternative hypotheses than their counterparts in LCOE or LGR. While this is suggested by findings in psychology, none of the testing scenarios in the studies are an exact match to the cases I have given; therefore, some caution is called for.

In any case, an appeal to the psychological states and mechanisms that we actually have is bound to raise the following objection. As Hume assumed, you cannot derive an “ought” from an “is.” Reasons – and that includes their strength – are a normative notion. Consequently, it is misguided to derive any normative consequences from our actual psychology. Whether we actually consider more alternatives has no bearing on the question whether we ought to consider more alternatives. While the worry is clear, it is not too worrisome for my project. I can agree that we cannot derive an “ought” from an “is,” but at the same time hold that, at least sometimes, an “is” is indicative of an “ought.” To elucidate this, we should simply consider how we would evaluate a subject like Hannah in HCOE or HGR if she were to behave as in the low CFA counterpart cases. To me, Hannah would appear irresponsible and it seems that she would not be rational in acting on her belief, if she maintains her belief. Furthermore, it seems that it would not be rational for Hannah to close off uncertainty in this situation. And if she cannot rationally close off uncertainty, then Hannah ought to consider more alternatives that she could hitherto ignore. It is simply rational to consider more alternatives when CFA are high before one closes off uncertainty.
In HCOE or HGR Hannah ought rationally to consider more alternatives than in LCOE or LGR. I will say more about the way in which it is rational to consider more alternatives when CFA are high. But before that, I want to explain how all the considerations adduced over the course of the chapter so far provide an explanation of how pragmatic encroachment works. In LGR, Hannah’s memory of a previous visit to the bank on a Saturday rules out at least one alternative in which it could come about that the bank is not open this Saturday, namely, that it is never open on Saturdays. The shift in her practical situation between LGR and HGR is similar to the shift that we see in the murder investigation examples in Section 6.2 when the list of suspects increases from one situation to another. Due to the change in her practical situation, it is rational to reconsider the belief and see whether one can rationally close off uncertainty. Intuitively, Hannah ought no longer to take it for granted that the bank is open on Saturday. Her memory of the previous visit still rules out that the bank is never open on Saturdays. But given her CFA, it is rational to consider certain alternatives that are not all that far-fetched, for example, that the bank might recently have changed hours and is no longer open on Saturdays. Her practical situation demands that Hannah consider such alternatives. But in HGR, while Hannah’s memory of the previous visit continues to rule out the alternative that the bank is never open on Saturdays, it does not rule out other alternatives that may be in play, such as that the bank recently changed hours. Relative to these scenarios, we can represent Hannah as inquiring about the question whether the bank is open on Saturday, and relative to each scenario, the set of alternative possibilities changes.

LGR
Possible answers: {the bank is open; ~the bank is open}
Set of alternative possibilities: ~the bank is open = {the bank is never open on Saturdays}

HGR
Possible answers: {the bank is open; ~the bank is open}
Set of alternative possibilities: ~the bank is open = {the bank is never open on Saturdays; the bank might have changed hours}
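The shape of this proposal can be put as a toy model. The function name `strength_of_reason` and the numeric measure (fraction of in-play alternatives a reason eliminates) are my own stipulations for illustration, not the author’s formalism:

```python
# Toy model: a reason is represented by the set of not-p alternatives it
# eliminates; its strength is measured (purely for illustration) as the
# fraction of the alternatives in play that it rules out. The practical
# situation fixes which alternatives are in play.

def strength_of_reason(eliminated: set, in_play: set) -> float:
    return len(eliminated & in_play) / len(in_play)

# Hannah's memory of a previous Saturday visit rules out one alternative.
memory_eliminates = {"the bank is never open on Saturdays"}

# LGR: low CFA, only one not-p alternative in play.
lgr = {"the bank is never open on Saturdays"}

# HGR: high CFA puts a further, uneliminated alternative in play.
hgr = {"the bank is never open on Saturdays",
       "the bank might have changed hours"}

print(strength_of_reason(memory_eliminates, lgr))  # 1.0
print(strength_of_reason(memory_eliminates, hgr))  # 0.5
```

On this toy measure, the very same reason is weaker in HGR than in LGR solely because a practical factor enlarged the set of alternatives in play, which is the intended shape of the explanation.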

The list of alternatives in which the bank is not open not only increases in HGR, but it also contains more alternatives that are not ruled out by Hannah’s reason. This provides a simple explanation of the shift between LGR and HGR observed in Section 6.1. Hannah’s memory of the previous visit can rule out that the bank is never open on Saturdays. But it cannot rule out anything from the increased list in HGR, such as a change in hours. According to my proposal, this change in alternatives that are not ruled out results in a weakening of Hannah’s reason for belief. Since the change is brought about by practical factors, there is pragmatic encroachment on the strength of one’s reasons. The change in alternatives that occurs between LGR and HGR can be modeled in several different ways; however, I take no stance here on how it is done. As mentioned in Section 6.2, an additional alternative can be modeled along the lines of updating a conversational context. But it could also be done in a precise formal framework that may satisfy the penchants of formal epistemologists, namely, in terms of a credence function that ranges over different possibility spaces.7

To summarize: PE-R is explained by my account of the strength of one’s reasons for belief, according to which the strength of a reason need not be constant, but is situation sensitive. And while it might be clear that one’s situation includes one’s practical situation, the function of belief, to close off uncertainty, bolsters the view that practical factors play a role in closing off uncertainty. As I have argued, it is highly plausible that one reconsiders beliefs when CFA are high. That practical considerations can and should influence our inquiries should not be surprising. For beings like us, who are both cognitive and practical but also beings with limited cognitive resources, our practical needs can shape our inquiries. They not only determine which questions we attend to but also how deep our inquiries into these questions are and what it takes to end inquiries in a satisfactory manner. In situations in which CFA are low, our inquiries are usually not very deep, which is entirely rational. In situations in which CFA are high, our inquiries automatically deepen and we consider more alternatives, which again is entirely rational given what is at stake.8

This explanation can be extended to PE-K. In order for a true belief to amount to knowledge, a certain degree of justification is required. If the degree of justification in a case like HGR is lower due to weaker reasons, then, assuming one’s justification falls below the threshold required for knowing, one fails to know. However, this still leaves room for the possibility that one’s degree of justification is sufficient to justify and rationalize believing, even if the belief does not amount to knowledge. When I say that pragmatic encroachment on the strength of reasons explains pragmatic encroachment on knowledge, I do not mean that the former explains a separate mechanism from the latter. On my account, there is only pragmatic encroachment on the strength of reasons, but this affects every notion that is sensitive to reasons, such as knowledge, and also epistemic justification, as I will explain in Chapter 7.

7 I suppose that the framework developed in Clarke (2013) and used to defend the view that Belief = Credence 1 could be repurposed to model my account of the strength of reasons. We simply need to assume that credences not only encode degrees of belief but also degrees of justification which depend on the strength of one’s reasons. This assumption is based on the following passage: “. . . being offered this bet changes the practical importance of p for me. Before the bet was offered, I had no reason to think the falsity of p might lead to my losing my home (or at least, we can easily choose p so that this is the case). The dramatically increased practical importance of p can lead me to take erstwhile-ignored possibilities seriously; I may be more careful about what I am willing to rule out. Thus, my credences will no longer be given by the function Cr1(·) but by a new function Cr2(·), defined over the expanded space of possibilities.” (Clarke 2013: 10)
8 I am open to the idea that one’s practical situation is not restricted to matters of self-interest. Moral considerations might oblige one to consider more alternatives in certain situations. So my account is open to what Pace (2011) has dubbed moral encroachment.
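The passage from Clarke (2013) quoted above can be rendered as a small sketch. This is an illustrative reconstruction under stipulated weights and possibility labels of my own choosing, not Clarke’s actual formalism:

```python
# Illustrative sketch of credence functions Cr1 and Cr2 defined over
# possibility spaces of different sizes. All weights are made up.

def credence(weights: dict, proposition: set) -> float:
    """Credence in a proposition = normalized weight of the
    possibilities in which the proposition is true."""
    total = sum(weights.values())
    return sum(w for poss, w in weights.items() if poss in proposition) / total

# Cr1: the 'recently changed hours' possibility is ignored.
cr1 = {"open as usual": 95, "never open on Saturdays": 5}

# Cr2: increased practical importance leads one to take an
# erstwhile-ignored possibility seriously, expanding the space.
cr2 = {"open as usual": 95, "never open on Saturdays": 5,
       "recently changed hours": 10}

bank_open = {"open as usual"}
print(credence(cr1, bank_open))  # 0.95
print(credence(cr2, bank_open))  # ~0.8636
```

The point of the sketch is structural: nothing about the original weights changes, yet the credence in p drops once the space over which the function is defined expands.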


Finally, I would like to explain in which way this account is revisionary and in which way it is not. I surmise that a lot of the intuitive resistance to any form of pragmatic encroachment in epistemology is based on a general aversion to any form of practical reasons for believing of the Pascalian sort (see Schroeder 2012 for a similar assessment). A belief might be beneficial for one to have, but the utility of said belief does not provide any form of epistemic justification. I agree that one ought to resist pragmatic encroachment if it turned out that it allows for Pascalian encroachment, by which I mean that reward for beliefs can affect one’s epistemic standing. But my account of pragmatic encroachment does not allow for that. The influence of practical factors on our reasons that I suggest is not Pascalian in nature. It does not suggest that there are genuine practical reasons for belief. On my account, a practical factor, CFA, makes it rational to consider more alternatives, which makes it possible that one’s reasons become weaker. From that, it does not follow that any other practical factor, like the benefits of believing, can make it rational to consider more or fewer alternatives, which would make it possible to strengthen or weaken one’s reasons for believing in a Pascalian manner. Therefore, for whatever reason one might want to reject my account, I believe it cannot be based on the worry that it allows for an illegitimate influence of Pascalian considerations in epistemology.

In fact, what makes the number of alternatives in play sensitive to CFA, namely, that we are beings of thought and action, cannot suggest that the benefits of believing could have an equal effect. The consequences of failed actions matter, as, in the worst case, our life may depend on them. A failed action might be due to having a false belief.
It is thus highly sensible to consider more alternatives when CFA are high, as one then guards against failed action due to false belief in a wider range of circumstances. Since our evidence is the best guide as to whether a belief is true or false, our beliefs should follow only the evidence. But since the rewards of believing are tied to actually believing, often independently of the truth or falsity of what we believe, following the evidence is no way to secure those rewards. In fact, in order to reap the rewards offered for belief, we may sometimes have to believe contrary to our evidence. This contradicts the results of taking CFA into account, as beliefs that we have contrary to our evidence will be especially at risk of leading to failed actions, a risk which the sensitivity to CFA was supposed to minimize.9

9 Therefore, I believe that my account is what Worsnip calls a moderate pragmatism that escapes the issues he raises in Worsnip (2020).


While my account does not have any Pascalian implications, it is certainly revisionary. I have argued that it is rational to consider more alternatives when CFA are elevated. However, I have not specified whether it is practically rational or epistemically rational. I believe that the fitting answer is that it is practically rational to consider more alternatives. It is for practical purposes that one ought to consider more alternatives, in order to avoid a failed action with grave consequences. Even though it is merely practically rational to consider more alternatives, this is entirely compatible with having epistemic consequences. The account I have given is thoroughly pragmatist as it implies a certain kind of primacy of the practical over the epistemic.

The primacy can be brought out in terms of layers. The traditional picture has it that there is at bottom an epistemic layer, on which it is determined what it is rational to believe and which credences it is rational to have. On top of this epistemic layer is a practical layer, on which it is determined what it is rational to do based on the epistemic layer. What I suggest is that the epistemic layer, traditionally conceived, is not rock bottom. There is another practical layer underneath it. Thus, the traditional picture ought to be replaced with the following schema (see Figure 6.1).10

Figure 6.1 The primacy of the practical

    2nd layer of practical rationality: determination of what it is rational to do (sensitive to CFI and COE, the latter being sensitive to CFA)
        ↑
    Layer of epistemic rationality: determination of the rationality of beliefs/credences in p given the not-p alternatives considered (can influence CFI)
        ↑
    1st layer of practical rationality: determination of which not-p alternatives ought to be considered (sensitive to CFA)

In Figure 6.1 the arrows indicate the order of explanation. By indicating the explanatory order, the arrows also show what comes prior. Hence, while the arrows point up, they point to downstream effects. The determination of which alternatives need to be considered at the bottom layer is a practical affair (even if not entirely) and influenced by CFA. What changes from LGR to HGR is not which alternatives are possible. The possibility that the bank recently changed hours is one that also holds in LGR. We can say that it is a possibility given what one knows about the world, namely, that banks change their hours, and this particular bank is not any different from banks in general such that it is not subject to this possibility. Note that such knowledge is itself not subject to the influence of CFA in this deliberative context, or in HGR. In both contexts, if it were false that such possibilities obtained, then going to the bank could not have a disastrous outcome. Therefore, CFA are low for propositions about such possibilities. If asked, one would acknowledge that a change in hours is a genuine possibility in LGR and in HGR. In LGR, though, this is not an alternative that is genuinely entertained, while it is in HGR. The idea behind the primacy of the practical is that there is no purely epistemic story to tell as to why that is. It is not that certain not-p possibilities have become more likely when one moves from LGR to HGR. While likelihoods may influence which additional not-p alternatives are in play as one moves from one scenario to the next, the switch itself that changes the number of alternatives in play is, I contend, due to practical considerations. Practical considerations thus have, in an important sense, primacy over the epistemic. They determine which not-p alternatives are in play and thus determine the strength of one’s reasons for believing that p.

Again, an analogy with skeptical arguments helps to illustrate my point. The possibility that we are radically deceived by an evil demon is one that we will acknowledge if asked, but we will not usually take it seriously. The situation changes when we seriously engage with the skeptic and are willing to genuinely entertain the evil demon possibility. But that decision seems to depend on practical considerations, not on epistemic ones. I decide to take the skeptic seriously in the epistemology classroom but, after I leave class, I decide not to worry about the possibilities raised by the skeptic. This is a practical decision; it is not that I have discovered an epistemic solution to the skeptical problem after leaving the classroom. When one moves from a situation like LGR to HGR, one may not necessarily entertain the possibility that the bank might have recently changed hours. But one rationally ought to consider this possibility.

10 While I use layers to bring out what I call the primacy of the practical, the key idea could be expressed without the assumption of an additional layer of practical rationality. One might say that the epistemic layer is more complex than traditionally assumed and that it consists of two levels (the two bottom layers in Figure 6.1). What matters to me is that the bottom level is not entirely epistemic but, in an important sense, practical, which lends itself to a kind of primacy of the practical.
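The three layers of Figure 6.1 can also be rendered as a toy pipeline. The function names and the all-or-nothing belief rule are my own illustrative stipulations; the author offers a schema, not an algorithm:

```python
# Toy pipeline for the layered schema of Figure 6.1.

def alternatives_in_play(base: set, extra: set, cfa: str) -> set:
    # 1st layer of practical rationality: high CFA puts further
    # not-p alternatives in play.
    return base | extra if cfa == "high" else base

def rational_belief(eliminated: set, in_play: set) -> bool:
    # Layer of epistemic rationality: believe p only if one's reasons
    # eliminate every not-p alternative in play (a stipulated rule).
    return in_play <= eliminated

def decide(belief: bool) -> str:
    # 2nd layer of practical rationality: act on the belief if rational.
    return "go to the bank on Saturday" if belief else "double-check the hours first"

eliminated = {"never open on Saturdays"}   # ruled out by Hannah's memory
base = {"never open on Saturdays"}
extra = {"recently changed hours"}

for cfa in ("low", "high"):
    in_play = alternatives_in_play(base, extra, cfa)
    print(cfa, decide(rational_belief(eliminated, in_play)))
```

The order of the calls mirrors the order of explanation in the figure: the practical bottom layer runs first and feeds the epistemic layer, whose output in turn feeds the practical top layer.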


Given dire practical consequences of failed action, it makes good practical sense to consider more possibilities, those one acknowledges anyway but which one would discard in a different practical context, in order to secure successful action in the widest range of possible circumstances.

I believe that there is no straightforward story to tell as to which alternatives one rationally ought to consider. That will depend on one’s environment and what usually goes on in it. But we can avail ourselves of the reasonable person standard once more. In one environment, it might be reasonable to consider the possibility that the bank might be closed due to a blizzard. If one lives in Los Angeles, one can reasonably discard this possibility, while one might instead take the possibility of earthquakes and their consequences into account. For most of us, at least, while we might acknowledge the possibility of a sudden attack of Martians that results in the destruction of the local branch of the bank, it is not a possibility that we would find advisable to take seriously. As the primacy of the practical suggests, which alternatives are in play can be explained by the reasonable person standard. However, the reasonable person is also a practically reasonable person who either ignores or attends to certain alternatives relative to their practical situation.

All of this seems sensible given that we are beings of thought and action with limited cognitive capacities. Given our limitations, it makes good practical sense, when CFA are low, not to take into account just any possibility, so as to minimize computational effort. But it also makes sense to take more possibilities into account when CFA are high. However, it is natural to wonder what my appeal to practical rationality in the determination of alternatives amounts to.
To say that practical rationality determines what to do seems clear enough and we seem to have at least something of a grasp of how this works (via the appreciation of reasons for action). I admit that it is a lot less clear how practical rationality determines which and how many alternatives must be considered in a deliberative context. While I cannot offer a full account, I think that a decent starting point is that unusually high CFA are simply a practical reason to consider more alternatives than usual. Certainly, more needs to be said, but this is not the place to do so. I think of the schema introduced here as a useful framework that has certain advantages over competitors, as I will explain in Chapter 7. For now, I am more concerned with the question whether it is worth developing this framework further, and I simply accept that there are further questions it may give rise to that I cannot answer here.

I end this section with a quick look at how one could generalize the schema to cover nondeliberative contexts as well. Here, one can take inspiration from two other pragmatic encroachers, Hannon (2017) and Locke (2017).11 One could say that the CFA for the kind of deliberative contexts one usually faces determines how many alternatives are in play such that the strength of one’s reasons in a nondeliberative context suffices for justified belief. Of course, “usually” is in need of clarification. But this makes it clear enough how the primacy of the practical accounts for the strength of one’s reasons in nondeliberative contexts.

To sum up: On the account given, one’s practical situation can influence one’s epistemic situation by influencing how many and perhaps even which alternative possibilities are considered.12 This can influence the strength of one’s reasons and thus whether the degree of justification for believing suffices for knowing. While this account is revisionary in the sense that it postulates a kind of primacy of the practical over the epistemic, it is not revisionary in another sense that merits being highlighted. On my account, practical factors can influence the strength of one’s reasons. But it allows one to hold on to the orthodox view that one’s degree of justification, and any other epistemic notion that is sensitive to it, for example knowledge, is entirely determined by one’s reasons and their strength. Perhaps this is not conservative enough to convince all pragmatic encroachment skeptics, but it might placate some that my explanation leaves certain strands of orthodox epistemology untouched.

6.4 Objections

Now that my full account of pragmatic encroachment is on the table, I want to consider three objections to it. Two of them stem from the existing literature on pragmatic encroachment. While they are not explicitly aimed at my account, they certainly apply to it and need answering. Additionally, I will explain how my account differs from superficially similar accounts that go by the label of contextualism or contrastivism.

The first objection is that it seems that cases can change in CFA, yet the number of alternatives is fixed. Can my account allow that the strength of a reason may vary in such cases?13 For example, consider Case 1 in Section 6.2 in which inspector Lewis is called to a murder scene where there are only four possible suspects individuated by their hair color and there is a blond hair at the crime scene. I offered the following sketch of Lewis’s situation when he considers whether Blond is the perpetrator.

Case 1
Possible answers: {Blond did it, ~Blond did it}
Set of alternative possibilities: ~Blond did it = {Brown did it, Black did it, Red did it}

11 Hannon (2017) argues that one’s epistemic position meets the threshold for knowledge when one can reliably serve as a source for actionable information for many members of one’s epistemic community. Locke (2017) holds that what you know depends on how you can rationally act in normal choice situations.
12 This idea can be elicited independently of pragmatic encroachment. For example, Whiting (2014b: 228) assumes that the epistemic perspective is contained and dictated by the practical perspective in his argument against practical reasons for believing.
13 Thanks to Alexander Dinges for raising this objection.

But let us keep all the suspects fixed and let us suppose that Lewis is not called to a murder scene, but to a bicycle theft. We would still sketch Lewis’s situation as we did for Case 1. Intuitively, the CFA are much higher in a murder case than in a bicycle theft case. So, if anything, one would expect that one’s reason for believing in the bicycle theft case are stronger reasons for believing than in the murder case. But it does not seem that my account can capture that there is such a difference between a bicycle theft and a murder, since we cannot introduce further alternatives to be considered in the murder case – after all, the set of suspects was fixed in both cases qua stipulation. So it cannot be that one’s reason becomes weaker in the murder case compared to the bicycle theft because practical factors require the consideration of further alternatives. There simply are no further alternatives to consider as the range of possible suspects was fixed. This is a legitimate worry, but it can be dealt with by elaborating on the notion of alternative possibilities. For any whether p question, the set of possible answers consists of two options, p and not-p. And for each of these possible answers, there will be a set of alternative possibilities. In Case 1, I have listed the not-p alternatives. While in some sense, the notp alternatives are limited as the set of suspects is fixed, there is another sense in which the set of alternative not-p possibilities is larger. That is because the not-p alternatives can be more fine-grained. In Case 1, I have gone for the most coarse-grained presentation. However, each of the three alternatives can be spelled out in more fine-grained ways. For example, Brown did it with her left hand, Brown did it with her left hand and a smirk on her face, Brown did it with her left hand, a smirk on her face and afterwards placed misleading evidence at the scene, and so on. 
Clearly, the first and second options just mentioned will probably not matter much for Lewis, but the third could matter very much, as could any other alternative in which the true perpetrator placed misleading evidence at the scene. What matters is that even when there is in some sense a set of fixed alternatives, there are nonetheless often further, more fine-grained
alternatives. This allows my account to deal with this worry. Even when the set of suspects might be fixed in the murder case and in the corresponding bicycle theft case, the set of alternatives need not be the same in each case. For example, we could say that in the murder case it makes good sense to consider the alternative that somebody else did it and planted misleading evidence at the scene. Since the CFA are high, such alternatives deserve consideration. In a case in which the CFA are low, where the crime under consideration might be a bicycle theft instead of a murder, it might be admissible to not consider possible alternatives that involve misleading evidence. Hence, my account leaves room to account for a difference in the strength of the reasons in these two scenarios that differ in a practical factor.

The second objection is posed by Comesaña (2013: 257f.) and is based on the following case.

Coin Flip Stop
Jeremy and Matt are on the same train. For Jeremy, the stakes are high; his future career depends on the train stopping in Foxboro. For Matt, the stakes are low. He is indifferent about whether the train stops in Foxboro. The conductor will decide whether the train stops in Foxboro through a coin flip. The coin is fair. Given heads, the train will stop in Foxboro. Given tails, the train will not stop in Foxboro.

Comesaña holds that Matt has more or less the same reason for believing that the train will stop in Foxboro as he has for believing that it won't stop there, because he has more or less the same reason for believing that the coin will land heads as he has for believing it will land tails. Given Jeremy's practical situation, Comesaña assumes that it follows from pragmatic encroachment on the strength of reasons that Jeremy has less reason to believe that the train will stop in Foxboro than Matt. This would lead to the following odd consequences. Whether the train stops in Foxboro is entirely determined by the coin flip. Jeremy, by having less reason to believe that the train will stop in Foxboro, must have less reason for believing that the coin will land heads, and thus more reason to believe that the coin will land tails. This would mean that one can have a stronger reason for believing that a fair coin will land on a certain side due to certain practical factors. Since this is clearly an absurd consequence, Comesaña suggests that we reject pragmatic encroachment on the strength of reasons.

On my account, a change in one's practical situation does not always bring about a change in the strength of one's reasons. A change is possible only if a practically relevant further alternative can be introduced. And even if the set of alternatives can often be enlarged by considering more fine-grained
possibilities, as suggested earlier, the alternatives in this case stay constant for both Matt and Jeremy. The coin will either land heads or tails. There is exactly one alternative to the coin landing heads: the coin landing tails. Surely there are infinitely many ways this can come about, for example, the coin landing tails after rolling on the floor for ten seconds, rolling on the floor for eleven seconds, and so on. But it is hard to see how it could make good practical sense to consider such finer-grained alternatives. In Coin Flip Stop, the difference in practical situation between Matt and Jeremy does not make it practically sensible for Jeremy to consider more alternatives than Matt, as there are no further alternatives to be considered that would seem practically relevant. Therefore, on my account of pragmatic encroachment on reasons, it does not follow that Jeremy has less reason to believe that the coin will land heads than Matt.

It might be objected that there are indeed more alternatives to the coin landing heads than landing tails. The coin might land on its side, or it might dematerialize midair due to some abnormal quantum physics effect – I am sure I can count on you to come up with more absurd examples. While we can acknowledge these further alternatives, they are not troublesome for my account. One could say that these more extravagant alternatives need not be considered because a reasonable person would not consider them. If they are not alternatives that are in play, they cannot lead to a weakening of one's reason. If a case could be made for why it makes practical sense for Jeremy to consider these alternatives and not Matt, then I don't think there is anything very odd in saying that Jeremy has less reason to believe that a fair coin will land on a certain side than Matt.14

A third objection is due to Locke (2017).
Locke holds that one's degree of justification for believing and the strength of one's reasons for believing should be correlated, with which I agree. While Locke acknowledges that it seems intuitively correct to assume that there is a direct correlation between the strength of reasons to believe and the extent to which those reasons can eliminate alternatives, he argues that this assumption is mistaken. One might worry that this objection also applies to my account. Even though I have argued that the strength of one's reason is partly determined by the number of alternatives that are not eliminated, it also holds true on my account that the strength of a reason is correlated with the extent to which it can rule out alternatives.

14 In Chapter 8, I will argue that if there were such a difference between Matt and Jeremy, this difference would disappear if Matt and Jeremy started deliberating together.
In order to demonstrate that there is no straightforward correlation between the strength of reasons to believe and the extent to which those reasons can eliminate alternatives, Locke provides a case akin to the following, which is to be contrasted with HGR.

HGR Plus Testimony (HGR+T)
Everything is as in HGR. However, while standing in the bank's parking lot, Hannah overhears someone saying that the manager of the bank was discussing closing the bank on Saturday for emergency repairs.

In HGR+T, Hannah has the same reason for believing that the bank is open on Saturday as in HGR. And what she overhears actually helps her to rule out another alternative that was in play in HGR, namely, that the bank recently changed hours. That the bank manager considers closing the bank this Saturday for emergency repairs rules out that there was a recent change in hours. Yet it seems that Hannah's reasons for believing that the bank is open on Saturday do not gain in strength, although she can eliminate more alternatives in HGR+T than in HGR. Isn't this a problem for my account? It is not. While it may sound odd, what Hannah overhears may indeed count as a reason for believing that the bank is open on Saturday, as it rules out one alternative that was deemed relevant in HGR, as was illustrated in Section 6.3.

HGR
Possible answers: {the bank is open; ~the bank is open}
Set of alternative possibilities: ~the bank is open = {the bank is never open on Saturdays; the bank might have changed hours}

Yet what Hannah overhears introduces a new alternative possibility. So her situation in HGR+T looks like this:

HGR+T
Possible answers: {the bank is open; ~the bank is open}
Set of alternative possibilities: ~the bank is open = {the bank is never open on Saturdays; the bank might have changed hours; the bank might be closed due to emergency repairs}
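The bookkeeping just sketched can also be put in quasi-formal terms. The following toy model is illustrative only: representing alternatives as sets of labeled possibilities, and measuring a reason's strength simply by the number of uneliminated alternatives in play, are simplifying assumptions made for exposition, not part of the official account.

```python
# Toy model of the alternatives-based picture: a reason's strength is
# tracked (as a simplification) by how many alternatives in play it
# fails to rule out. The set elements are illustrative labels.

def uneliminated(in_play, ruled_out):
    """Alternatives still in play that the reason cannot eliminate."""
    return in_play - ruled_out

# HGR: Hannah's memory of a past Saturday visit rules out that the bank
# is never open on Saturdays, but not a recent change of hours.
hgr_in_play = {"never open on Saturdays", "recently changed hours"}
hgr_ruled_out = {"never open on Saturdays"}

# HGR+T: the overheard remark additionally rules out a recent change of
# hours, but introduces the emergency-repairs alternative.
hgrt_in_play = {"never open on Saturdays", "recently changed hours",
                "closed for emergency repairs"}
hgrt_ruled_out = {"never open on Saturdays", "recently changed hours"}

hgr_left = uneliminated(hgr_in_play, hgr_ruled_out)
hgrt_left = uneliminated(hgrt_in_play, hgrt_ruled_out)

# Exactly one relevant alternative remains uneliminated in each case,
# so the count stays constant across HGR and HGR+T.
print(len(hgr_left), len(hgrt_left))  # 1 1
```

On this crude measure, what changes between HGR and HGR+T is which alternative remains open, not how many, which is what the account needs in order to avoid saying that Hannah's reason gains strength in HGR+T.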

On my account, the strength of one’s reason for believing is partly determined by how many alternatives the reason cannot rule out. And this actually stays constant across HGR and HGR+T. In both cases, one important alternative is not eliminated. In HGR it is that the bank recently changed hours; in HGR+T it is that the bank might be closed for emergency repairs. But if the number of alternatives in play that are not eliminated actually stays constant, then my account does not force me to
say that one’s reasons for believing are stronger in HGR+T than in HGR. Therefore, Locke’s case is no threat to my account. Locke judges that one’s degree of justification for believing that the bank is open is lower in HGR+T than in HGR. Can that judgment also be captured by my account? It can, even if not directly by my account but with further assumptions that do not contradict my account. What Hannah overhears is an instance of something that seems at first highly unusual. It is both a reason for believing that p, but at the same time a reason for believing that not-p. That the bank manager discussed closing the bank this Saturday equally suggests that it might be open on Saturday, but also that it might be closed. I cannot offer an account of how reasons for believing that p and reasons for believing that not-p determine a final degree of justification for believing that p. But if Hannah has a reason for believing that not-p in HGR+T, which she did not have in HGR, then it can plausibly be said that her overall degree of justification for believing that p is lower in HGR+T than in HGR. Hence, Locke’s judgment can be captured by my account. Finally, I would like to preempt a number of objections that could arise because my account might be superficially similar to other views. To do that, I will quickly point out how my account differs from those views. Perhaps the objections to those other views could be revised so that they also apply to my account. I acknowledge this possibility, but I will leave it to my opponents to make this case, if such a case can indeed be made. On my account, the strength of a reason is situation sensitive. However, I have pointed out that changes in situation could be modeled along the lines of conversational context shifts. So isn’t my view just a form of contextualism about evidence, which is subject to the criticism made in Brown (2016)? 
Brown criticizes contextualism about evidential support, as assumed in Neta (2003), which holds that whether e provides evidential support for p depends on the context. Whatever the merits of this thesis may be, and whatever similarities to my account it may have, this is not a thesis I have defended. My account merely holds that one's situation, or, if you like, one's context, can influence the strength of one's reasons for believing. I did not claim that one's situation or one's context influences whether a consideration is a reason for believing at all.

On my account, the strength of a reason is dependent on which alternatives are in play. So isn't my view just a form of contrastivism about evidence, as suggested in Snedegar (2013), according to which evidential statements are not binary but ternary, as in e is evidence for p rather than for q? Simply put, the answer is no. On my account, statements about reasons still describe a binary
relation. If you like, you can think of the set of alternative possibilities as a set of contrast propositions. But this would be a very different form of contrastivism from the one envisioned in Snedegar (2013). Hence, my account avoids objections like those in Brown (2016) that are leveled against this form of contrastivism. My discussion is most likely not exhaustive of all possible objections. Nonetheless, I hope that dealing with those objections I am aware of is helpful in laying out my account further.

6.5 Ignorant and Apparent High CFA

The problem I discuss in this section is a variant of the internalism/externalism debate, understood as the distinction between what is and what is not accessible to a subject, applied to the notion of CFA. As mentioned in Chapter 5, I stay neutral on whether CFA have an internalist or externalist reading. In the following, I outline possible responses given one's theoretical inclinations, demonstrating that my theory itself does not force one to take a specific side. It is possible that CFA are high while one is not aware that they are high, and also possible that one has reason to think that CFA are high while they are actually not high. The following two cases portray both of these possibilities.

Ignorant High CFA
Hannah and her wife Sarah are driving home on a Friday afternoon. They plan to stop at the bank on the way home to deposit their paychecks. Since they have an impending bill to pay, and very little in their account, it is very important that they deposit their paychecks by Saturday. But neither Hannah nor Sarah is aware of the impending bill, nor of the paucity of available funds. Looking at the queues, Hannah says to Sarah, "I know the bank will be open tomorrow, since I was there just two weeks ago on a Saturday morning. So we can deposit our paychecks tomorrow morning." (See Stanley (2005: 5)).

Apparent High CFA
Hannah and her wife Sarah are driving home on a Friday afternoon. They plan to stop at the bank on the way home to deposit their paychecks. Since they have an impending bill to pay, and very little in their account, it is very important that they deposit their paychecks by Saturday. But unbeknown to them, their tax return has arrived and they already have sufficient funds in their account to cover the impending bill. So the stakes are not actually high. Looking at the long queues, Hannah says to Sarah, "I know the bank will be open tomorrow, since I was there just two weeks ago on a Saturday morning.
So we can deposit our paychecks tomorrow morning.” (Adapted from Schroeder (2012: 270)).

Stanley (2005) and Schroeder (2012) both hold that Hannah fails to know in Ignorant High CFA. For the less discussed Apparent High CFA case, Schroeder (2012) holds that Hannah fails to know. I concur with all of these judgments. In order for my explanation of PE-K to be fully general, then, it must also account for these cases, and this is where things get more complicated.

For the standard high CFA case, I said that it is rational to consider more alternatives. But given Ignorant High CFA, this line can only be maintained if rationality is an externalist notion, as it is not accessible to Hannah that she faces high CFA. For the externalist about rationality, this case poses no problem, as they can maintain that it is rational to consider more alternatives, even when the fact that makes it so is inaccessible to Hannah. But externalism about rationality is a minority position.15 In order not to tie my explanation of PE-K to externalism, one must find an internalist-friendly account of Ignorant High CFA. Here is my suggestion. High CFA that one is ignorant of have the effect of a defeater one is not aware of, for example, abnormal lighting conditions that undermine regular perception.16 We can say that Hannah's practical situation in Ignorant High CFA makes it the case that there are reasons for her to consider more alternatives, just like in the abnormal lighting conditions case. However, due to her ignorance, these are not reasons that she has, and they do not render her irrational if she fails to react to them. Nonetheless, these reasons work as a standard undercutting defeater, which undermines having knowledge that one would otherwise have had.

The internalist, on the other hand, while struggling with Ignorant High CFA, can easily explain the lack of knowledge in Apparent High CFA. Even if Hannah errs about her CFA, what is accessible to her makes it rational to consider more alternatives than in a low CFA scenario, and thus her reasons for belief are weaker.
Arguably, how one conceives of one's reasons makes a difference to whether one knows. Again, this can be independently motivated by cases of defeat. Suppose that Hannah falsely believes that the lighting conditions are abnormal, which she takes to undermine her ordinary perceptual capacities. In such a case, Hannah's perceptual capacities, even if they are working normally, no longer provide her with knowledge.

Apparent High CFA poses a problem for the externalist. Hannah's actual practical situation does not provide her with reasons to consider more alternatives; it only appears to provide reasons. Consequently, there are fewer alternatives to be considered and her reasons are stronger than she takes them to be. Why then does Hannah fail to know? Externalists can mimic the internalist's position. Given how Hannah conceives of her situation, she must compute her reasons in a way that does not license having the degree of justification that her reason would maximally allow for. But if she computes her reasons correctly, given how she conceives of them, she will end up with a degree of justification that falls short of knowledge-level justification, and hence she fails to know.

Let us sum up: we must take CFA to be either an internalist or an externalist notion, and thus we get all the problems that come with internalist and externalist accounts of reasons, rationality, and justification. Taking either side leads to problems given Ignorant High CFA and Apparent High CFA. An internalist conception of a practical situation can easily account for Apparent High CFA, but runs into a problem with Ignorant High CFA (the converse holds for externalism). I have outlined strategies for either position to deal with the critical cases; hence, my explanation of PE-K is not hostage to taking a specific side in the internalism/externalism debate.

15 See chapter 8 of Williamson (2000) for an endorsement of externalism about rationality.
16 I owe the analogy with defeat to Schroeder (2012).


Chapter 7

Assessing Potential Explanations of Pragmatic Encroachment

This chapter does not require an elaborate introduction. Its sole purpose is to assess rival explanations of PE-K. I want to be upfront about having a fairly modest aim. I will argue that my preferred account of total pragmatic encroachment (TPE) is a viable and attractive alternative for explaining pragmatic encroachment. I will do so by raising issues for rival accounts, but I am not claiming that I have knockout arguments against rival views. Here is how I will proceed. In Section 7.1, I raise the problem of forced choice against the genuine constituent explanation and the practical adequacy threshold explanation. In Sections 7.2, 7.3, and 7.4, I raise a number of problems for the shifting thresholds view (STV) and argue that TPE avoids them. In Section 7.5, I deal with the objection to TPE that concerns the (in)stability of rational belief and rational credences. In Section 7.6, I turn to three more general objections to demonstrate that my account of TPE can handle them, and I explain how the views I have argued for cohere. I close with a short summary.

7.1 The Problem of Forced Choice

My argument in favor of PE-K in Chapter 5 works via PE-K*. However, as I mentioned there, PE-K* allows for at least three different explanations of PE-K. The first is what I called the genuine constituent explanation. We could interpret the consequent of PE-K*, that is, how costs of error (COE) relate to costs of further inquiry (CFI), as a genuine constituent of the knowledge relation, like belief or truth. Or we could interpret the consequent as merely an indicator property. If this indicator property is not instantiated, then this indicates a lack of knowledge-level justification. The reading of the consequent of PE-K* as an indicator property opens up various avenues for explaining PE-K, such as STV or TPE. But we have not yet seen why we should prefer such explanations over the simpler genuine constituent explanation.

Consider the following variation of the bank case, which is adapted from Schroeder (2012: 278). I will argue that such cases speak against the genuine constituent explanation and favor the indicator reading of the consequent of PE-K*.

Forced Choice (FC)
Hannah is out driving on Saturday with her wife Sarah and it is twenty minutes to noon. They have not yet deposited their paychecks, which they received the day before. Since they have an impending bill, and very little in their account, it is very important that they deposit their paychecks that day. Both know that banks, if they are open at all on Saturday, close by noon. Their only chance to deposit their paychecks is by driving straight to the bank. Sarah is worried, but Hannah tries to calm her down. She says: "Don't worry, I was at the bank last Saturday. I know that the bank is open today."

The main feature of FC is that Hannah, due to it being Saturday already, is in a forced choice situation. She cannot inquire further about whether the bank is open; she must drive there and see whether it actually is open. Even if she were to inquire further, it is guaranteed that she would not be able to deposit the paychecks, since by then the bank would definitely be closed. Does Hannah, in FC, know? For me, the answer is no. But I suspect that this intuition is not universally shared. Schaffer (2006: 91), who offers a similar case, holds that the subject knows. Based on Schroeder (2017: 369), I assume that for this specific case Schroeder would also hold that Hannah knows.

Before backing up the intuition that Hannah fails to know in FC, let us consider what the COE and the CFI are here. COE are high, given the costs of failing to deposit the paychecks in time. However, CFI are also whatever the costs of failing to deposit the paychecks in time are. That is because further inquiry, due to time being of the essence, would also lead to a failure to deposit the paychecks in time. Even if the bank is open on Saturday, further inquiry is not an option for Hannah. So while both COE and CFI are high in FC, they are identical and thus balanced. Thus it is not true that COE exceed CFI, and hence the consequent of PE-K* is not negated and one cannot argue by PE-K* and modus tollens that Hannah does not know. If the consequent of PE-K* is understood as a genuine constituent of the knowledge relation, Hannah, in FC, would satisfy it. Therefore, the genuine constituent explanation does not capture the intuitive verdict that Hannah fails to know in FC. But as I mentioned, this intuitive verdict may not be shared by all. However, there are theoretical reasons which directly speak against the
genuine constituent explanation. Hopefully, we can all agree that knowledge cannot be acquired in certain ways. One such way is what I call running down the clock. It should not be that, holding other things equal, one can improve one’s epistemic standing from not knowing to knowing by just letting time go by. However, if the genuine constituent explanation were true, one could gain knowledge by just running down the clock. Take a variant of FC but assume that it is Friday, which removes the situation of forced choice. Here, Hannah does not know that the bank is open on Saturday, so we assume that COE do exceed CFI, since on Friday Hannah can easily inquire whether the bank is open on Saturday. But then Hannah decides to wait until the next day to make up her mind about what to do. On the next day, she would be in a position as described in FC, and then COE no longer exceed CFI. If the genuine constituent explanation were true, then it would seem that Hannah now knows, as the consequent of PE-K* is no longer negated and she therefore presumably satisfies all the conditions for knowing.1 But this seems clearly absurd. Therefore, we should reject the genuine constituent explanation, as it seems to entail the absurd consequence that one can sometimes gain knowledge by simply running down the clock. Clearly, these considerations do not impugn PE-K* itself. They merely speak against the genuine constituent explanation of PE-K and they suggest that we should understand the consequent of PE-K* as an indicator. An indicator need not be a perfect indicator. Smoke is a good indicator of fire, but not a perfect one. Similarly, the relation of COE and CFI often indicates whether one lacks knowledge-level justification, but it will not do so in all cases, for example, in FC. This does not undermine my previous argument in favor of PE-K via PE-K*. 
The argument did not rely on the idea that the consequent of PE-K* is a genuine constituent of the knowledge relation, but rather that it is a necessary condition on having knowledge-level justification, one that is incompatible with intellectualism. As I explained in Chapter 5, by opting for the indicator reading of the consequent of PE-K* we make room for at least two different explanations of PE-K – STV and TPE. I will get to how these explanations deal with FC in due course. Meanwhile, notice that FC also explains why my argument in favor of PE-K and my explanation of it in terms of TPE are disjointed. The argument in favor of PE-K via PE-K* works through how COE relate to CFI. As I pointed out earlier, these two factors do not indicate that Hannah fails to know in FC, and hence an explanation of PE-K cannot be provided in terms of the relation of these two costs. However, once the similarity between a standard case of elevated consequences of failed action (CFA) and FC is pointed out, as I did above by simply switching the day, the most natural suggestion is that PE-K must be explained in terms of CFA, as this is what stays constant between the standard case and FC. My favored account of TPE accords with this suggestion. Given elevated CFA in FC, one is still in a situation in which it is practically rational to guard against a larger array of error possibilities than in a case with lower CFA, even if, in FC, there is no time to rule out further alternatives. My favored account of TPE can then account for FC in the very same manner that it can account for cases like HCOE (Chapter 5, Section 5.1) or HGR (Chapter 6, Section 6.1). Hannah's reason for belief in FC, her memory of a previous visit to the bank on a Saturday, is weaker than in a corresponding lower CFA scenario because there are more alternatives in play that are not ruled out by her reason than in the lower CFA scenario. Given weaker reasons, Hannah's degree of justification falls short of knowledge-level justification and hence she fails to know.

In Chapter 5, Section 5.5, I introduced two accounts of STV that differ according to how they set the shifting threshold for knowledge-level justification – the practical adequacy threshold account and the communal threshold account. The former seems to fall prey to the same issue I raised for the genuine constituent explanation. Take the scenario I already outlined, which starts with Hannah on Friday afternoon in a case like HGR and ends with Hannah in a case like FC on Saturday at almost noon. Call the starting point t1, and the endpoint t2. I held that Hannah fails to know at both t1 and t2.

1 One could maintain that there is another pragmatic factor at work that explains why Hannah fails to know. However, this would suggest that the genuine constituent explanation cannot explain all the relevant cases, which I consider a drawback.
All pragmatic encroachers agree that Hannah does not know at t1 and, as I contend, it is rather plausible that running down the clock is not a way to gain knowledge; hence, Hannah still does not know at t2. The problem for proponents of the practical adequacy threshold account is a failure to diagnose that Hannah fails to know at t2. At t2, Hannah's strength of epistemic position for the proposition that the bank is open on Saturday is practically adequate. At t2, it seems perfectly fine for her to rely on the proposition that the bank is open in her reasoning, instead of inquiring further about whether this is true. This also holds true on a more technical understanding of "practical adequacy," such as in the principle PAK, discussed in Chapter 5, Section 5.4. The act that maximizes expected utility conditional on the bank being open on Saturday is going to the bank. And this is the same act that maximizes expected utility given
Hannah's actual epistemic position. That is because at t2, postponing the visit and gathering more information guarantees the worst outcome. But if practical adequacy, on either understanding of it, also sets the threshold for knowledge-level justification, then Hannah seems to know at t2. Therefore, proponents of the practical adequacy threshold account of STV cannot offer an explanation of why Hannah fails to know at t2. This leads to the following dilemma: either they must admit that Hannah knows at t2, and thus their explanation of PE-K allows for gaining knowledge just by running down the clock, or they must offer some other explanation of why Hannah fails to know at t2. But this would make their explanation of PE-K disjointed. Since my preferred account of TPE faces no such dilemma in the case outlined, it seems preferable to the practical adequacy version of STV.

It is important to note that the problem I outline here is not necessarily a problem for STV as such, since STV can also be understood in terms of a communal threshold account. Hannon (2017), who endorses the communal threshold account, explicitly argues that the threshold for knowledge can be raised when an individual faces a decision in which the costs of error are far above those of usual practical decisions in their community. An example would be Hannah at both t1 and t2. Since the threshold remains constantly high between t1 and t2, this version of STV can avoid the questionable judgment that Hannah knows at t2. Hence, not every version of STV is committed to allowing that one can gain knowledge just by running down the clock.

To sum up: cases like FC suggest that the indicator reading of PE-K* is preferable to the genuine constituent reading and the genuine constituent explanation of PE-K. The indicator reading opens up the possibility of various explanations of PE-K, like STV or TPE.
I have explained how my preferred account of TPE can handle FC and I have argued that the practical adequacy version of STV struggles with FC.

7.2 The Problem of Conjunctive Ascriptions

I shall now turn to problems that affect both versions of STV. I will start by presenting the problem as a general problem for PE-K, and then I will explain why it is troublesome for STV, but not for TPE. The issue is that PE-K seems to allow for odd-sounding knowledge ascriptions in modal and temporal embeddings, and in conjunctive ascriptions.2 I will focus on conjunctive ascriptions here, as I believe this is where problems for STV arise. My preferred account of TPE can handle modal and temporal embeddings as presented in Stanley (2005), hence I will not discuss them here.

2 The modal and temporal cases are discussed in Stanley (2005); Blome-Tillmann (2009) added the problem of conjunctive ascriptions to the debate.

Since practical factors can differ from person to person, so can whether one knows according to PE-K. Here is a version of the classical bank case in which this might occur. Hannah and Lucy have the same evidence about the hours of the bank, and both were at the bank last Saturday. This Friday, they both want to avoid the long queues, and they think about whether they know that the bank will be open this Saturday. For Lucy, CFA are minuscule, while for Hannah CFA are momentous. If an endorser of PE-K had to evaluate both, then, given that Hannah's situation amounts to a usual case for which they deny knowledge, they might utter the following sentence:

Conjunctive Ascription (CA)
Hannah and Lucy have the same evidence about the hours of the bank, but only Lucy has enough evidence to know that the bank will be open; Hannah does not.

PE-K seems committed to the truth of CA, but CA sounds false. Conjunctive ascriptions are a general problem for any account of PE-K. The problem that applies specifically to STV is that its proponents cannot do much more than bite the bullet here. For STV, the threshold for having knowledge-level justification varies with practical circumstances. Due to the difference in practical circumstances, there is a difference in the threshold for knowledge-level justification for Hannah and Lucy, which holds on both the practical adequacy and the communal threshold account. Since Hannah does not meet the higher threshold that is in play given her practical circumstances, she fails to know, while Lucy does meet the threshold given by her practical circumstances and, hence, Lucy knows. It is hard to see how one could hold that practical circumstances affect the threshold required for knowledge-level justification while steering clear of utterances such as CA. Therefore, STV is committed to accepting oddities like CA as true.

Proponents of STV might attempt to soften the blow by arguing that any proper epistemic theory must accept that two persons can have the same evidence, yet differ in whether they know. For example, take two persons, one in barn façade county, the other in ordinary circumstances. Both may have the same perceptual evidence of there being a barn in front of them, yet only one of them knows. While CA may sound odd, any proper epistemic theory should allow for true instances of CA for the simple reason that while two persons might have the same evidence, they can still differ in knowledge due to being in knowledge-precluding circumstances that do not affect their evidence.

While this reply makes a sensible point, I think it nonetheless misses the mark. The problem with CA is not that it assumes that there is a difference in knowledge while there is sameness in evidence. The problem is that CA holds that the evidence is sufficient for knowing in one case, but not in another. Evidence sufficient for knowing is not like a modal necessary condition on knowledge – for example, a safety condition – that goes unfulfilled in barn façade county. Even when in barn façade county, the evidence itself is sufficient for knowing, as it provides just as much justification as in a normal case.3 But knowledge is precluded because another condition on knowledge is not met. The problem that CA raises is that, setting aside other necessary conditions for knowing, two subjects could have the same evidence, and yet that evidence might not be sufficient for knowing in both cases. Hence, the previous reply, which attempts to make CA plausible by alluding to other necessary conditions of knowledge, misses the mark.

STV is committed to oddities like CA, and it does not seem that the oddity can easily be explained away. Proponents of STV might simply have to bite the bullet here. They might dig in their heels, accepting that the very idea of moving thresholds for knowledge-level justification must, given sameness of evidence, allow for scenarios in which two subjects have the same evidence, but only one of them has enough evidence to know. Furthermore, I think one can rightly say that linguistic oddity is not a conclusive indicator of falsity. Thus I believe that CA raises at best a problem for STV, but it does not amount to a clear objection. Nonetheless, CA amounts to a drawback for STV.
The point in favor of my preferred account of TPE is that it is not committed to conjunctions such as CA. In fact, my account has the resources to explain why such conjunctions are false. On my account, Hannah and Lucy do not actually have the same evidence, or the same reason for believing that the bank is open on Saturday, at least in one important respect. They both have the same proposition as a reason, namely, that they were at the bank on a previous Saturday. However, as I have argued, the strength of a reason is not something that is inherent to the consideration that is the reason. This makes room for the possibility that the strength this reason possesses differs between Hannah's case and Lucy's case. In Hannah's practical situation, it is rational to consider more alternatives than in Lucy's practical situation. Because there are more alternatives in play in Hannah's situation that her reason cannot eliminate, Hannah's reason is weaker than Lucy's. Since there is this difference in the strength of their respective reasons for believing, Hannah and Lucy do not have the same reason for believing. And this explains why Lucy knows, but Hannah does not know, that the bank is open on Saturday.

Note that the oddness of CA is not due to Hannah and Lucy differing in what they know. That two subjects can differ in what they know is not odd. The oddness arises because we assume that the subjects have identical reasons for belief and yet differ in what they know. But on my account of TPE, this oddness cannot arise. Two subjects who differ in their CFA will also often differ in the strength of their reasons for believing.4 Hence, at least for the class of cases under discussion, my account of TPE avoids a commitment to CA. And thus it avoids a problem that STV incurs.

3 Not even adopting a radical externalism about evidence like "E=K" would be helpful in evading this sort of reasoning, as this would undercut the general strategy. If E=K were true, then in the fake barn scenario there would be no sameness of evidence across the two scenarios.
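This explanation can be put in a minimal numerical sketch. All values below (the knowledge-level threshold, the base strength of the shared reason, the discount per uneliminated alternative) are hypothetical illustrations of the mechanism, not part of the account itself.

```python
# A toy model of the TPE explanation of CA (all numbers are hypothetical).
# The same proposition serves as a reason for both subjects, but its strength
# is discounted for every practically relevant alternative it cannot eliminate.

KNOWLEDGE_LEVEL = 0.90   # assumed threshold for knowledge-level justification
DISCOUNT = 0.90          # assumed weakening per uneliminated alternative

def reason_strength(base: float, uneliminated_alternatives: int) -> float:
    """Strength of a reason given the alternatives it fails to eliminate."""
    return base * DISCOUNT ** uneliminated_alternatives

base = 0.95  # strength of "I was at the bank last Saturday" with no alternatives in play

lucy = reason_strength(base, 0)    # minuscule CFA: no further alternatives in play
hannah = reason_strength(base, 2)  # momentous CFA: e.g. changed hours, sudden closure

assert lucy >= KNOWLEDGE_LEVEL     # Lucy's reason suffices for knowledge
assert hannah < KNOWLEDGE_LEVEL    # Hannah's weakened reason does not
```

On this model, the conjunction in CA comes out false for the right reason: the two subjects do not share a reason of the same strength, even though they share the proposition that serves as the reason.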

7.3 The Problem of Pragmatic Encroachment on Justified Belief

So far, I have focused on pragmatic encroachment on knowledge and on reasons for belief. However, in their earliest paper on pragmatic encroachment, Fantl and McGrath (2002) begin with a case pair that exemplifies the case-based strategy and that is about justified belief, not knowledge. About their high-stakes case, and contrary to their low-stakes case, Fantl and McGrath (2002: 68) write: "Intuitively, in Train Case 2, you do not have good enough evidence to know that the train stops in Foxboro. You are not justified in believing that proposition." What this verdict implies is that there is pragmatic encroachment on justification, hence PE-J.5 Since the two train cases Fantl and McGrath provide vary only in practical circumstances, the difference in practical circumstances makes a difference to whether one is justified in believing.6

4 The qualification "often" is important here, as two subjects will not necessarily differ in the strength of their reasons just because they differ in practical circumstances. I have argued that this makes good sense in my discussion of the Coin Flip Stop case in Chapter 6, Section 6.4.

5 I assume that the passage is not meant to imply pragmatic encroachment on evidence. I also assume that the relevant remark "do not have good enough evidence to know" is meant to imply STV. The threshold for knowledge-level justification has gone up and the evidence is no longer sufficient to provide knowledge-level justification; hence, one's evidence is not good enough to know. Fantl and McGrath (2009) hold that there is no pragmatic encroachment on justified credences. But this need not indicate that they have abandoned their earlier view, as one might say that practical factors partly determine whether one's credence suffices for justified belief.

Granted that case-based arguments might not always elicit the relevant intuitions for all, I believe that the intuitive case for PE-J is strengthened given that we have a principled argument in favor of PE-K. If our intuitions appear trustworthy in the cases about knowledge, it is hard to see why we should assume that they lead us astray in the cases about justification. Furthermore, there is also a principle-based argument for PE-J given in Way and Whiting (2016), but I shall set it aside here. In any case, I assume that, at least for most pragmatic encroachers – which proponents of STV are – there is sufficient reason to assume that PE-J is true. The challenge I raise here is whether proponents of STV can account for PE-J.

Before I turn to STV, I submit that my account of TPE can easily explain PE-J. On my account, one's practical circumstances can make a difference to the strength of one's reasons for believing. But given the orthodox assumption that one's degree of justification is entirely determined by the strength of one's reasons, it follows that if one's reasons become weaker in a new practical situation, then there will be a corresponding loss in one's degree of justification. And this loss can mean that a belief that was hitherto justified fails to be justified in the new practical situation. Hence, my account of TPE has the necessary resources to capture the shifting ascriptions of justified belief in cases that motivate PE-J.

The drawback of STV I want to highlight is that if PE-J is true, then STV does not offer a full account of pragmatic encroachment in epistemology, or can only offer one by accepting other controversial views. STV is a thesis about knowledge-level justification.
Arguably, though, having a justified belief does not necessarily require having knowledge-level justification. One can have a justified belief even if the degree of justification for this belief falls short of knowledge-level justification. Therefore, the fact that a change in practical factors can shift the threshold for knowledge-level justification upward could only explain the lack of justified belief if the degree of justification required for having a justified belief were equal to knowledge-level justification. Since this seems to be false, the proponent of STV will require a separate mechanism to account for PE-J. My point here is not that this gap cannot be filled, although I suspect that to do so will not be an easy task,7 but rather that there is this gap in the first place. If pragmatic encroachment in epistemology is not limited to knowledge, then STV on its own seems unsuited to accounting for those further forms of pragmatic encroachment. Of course, defenders of STV could offer a separate account of PE-J. But considerations of simplicity will favor an account that covers both PE-K and PE-J, like my preferred version of TPE.

6 See also Jackson (2019) for the claim that the standard case pairs are not merely about pragmatic encroachment on knowledge but about justified belief.

However, Fantl and McGrath (2009), two of the main proponents of STV, reject my claim that the thresholds for justified belief and knowledge-level justification come apart, arguing instead for the Equivalence Thesis (ET).

Equivalence Thesis (ET)
p is knowledge-level justified for you iff you are justified in believing that p.

Under the assumption of ET, the thresholds for justified belief and knowledge-level justification are equivalent and always march in step as practical circumstances change. Hence, one not only fails to know in a case like HCOE, but one also lacks a justified belief. If ET is correct, proponents of STV need not be worried about explaining PE-J. Due to ET, their account of PE-K also becomes an account of PE-J.

Whether this explanation of PE-J is plausible heavily depends on ET. The right-to-left direction is not intuitively compelling. Fantl and McGrath do concede that utterances such as "I believe that the meeting is on Tuesday, but I don't know that" are potential counterexamples to the right-to-left direction. Such an utterance might be taken to imply that one has some justification for believing that the meeting is on Tuesday, but not enough for knowledge-level justification. Given how natural and ubiquitous such utterances are, the right-to-left direction of ET seems false. However, Fantl and McGrath, who are well aware of such utterances and their implications, offer an argument for ET. I will not discuss this argument here, as I want to make a fairly modest point. If PE-J is true, for which a decent case can be made, then STV cannot account for this form of pragmatic encroachment, or can only do so by adopting a controversial commitment like ET. In contrast, TPE offers a natural explanation of PE-J without adopting any further controversial commitments. Controversial commitments might be backed up with further arguments. But a point in favor of TPE is that it avoids these commitments in the first place. Since I do not intend to argue that STV is untenable, but rather that TPE is a viable and attractive alternative, I think this modest point is sufficient to support what I intend to argue for.

7 The natural starting point for filling this gap is to hold that the threshold for a degree of justification to suffice for a justified belief can vary with practical factors. However, I believe this extension will face difficulties in accounting for how this threshold for justification is set. While defenders of STV have a convincing account when it comes to knowledge-level justification, they will not be able to provide the same account for justified belief; otherwise, they would be equating these two thresholds.

7.4 The Problem of Pragmatic Encroachment on Reasons for Belief

In Chapter 6, I argued for PE-R and explained how my preferred account of TPE accounts for PE-R. Proponents of STV are not in a position to explain PE-R, for a number of reasons. STV is a thesis about knowledge-level justification, but it is unclear how to extend it to account for how the strength of one's reasons for believing is sensitive to practical factors. Invoking a further shifting threshold to account for PE-R also seems hopeless. Setting aside the issue of how to account for this separate shifting threshold, it is unclear how one could invoke the notion of a threshold when it comes to the strength of reasons. Strength is a scalar notion, while a threshold refers to a specific point on that scale; the strength of a reason does not require a threshold. Therefore, there does not seem to be much hope for accounting for PE-R in terms of thresholds. Hence, STV cannot deliver a complete account of pragmatic encroachment, while my account of TPE offers a unified explanation of PE-K, PE-J, and PE-R.

Another reason why STV will not be able to explain PE-R is that it seems to be committed to denying PE-R, for the sake of consistency. In Chapter 6, I mentioned briefly that if PE-R is true, then there will be further downstream effects on all other epistemic notions that are sensitive to one's reasons for believing. Degree of justification is such a notion. Therefore, proponents of STV must avoid endorsing PE-R, because their explanation of PE-K assumes that only the threshold for knowledge-level justification is affected by practical factors, not degrees of justification (see Chapter 5, Section 5.4).

Sometimes, the best defense is a good offense. Defenders of STV might argue that the case pair LGR and HGR (Chapter 6, Section 6.1) involves judgments about whether a proposition is a good reason for believing. But shifting assessments of this need not be indicative of a shift in the strength of a reason. What may have shifted is whether the reason is good enough for believing, which could depend on where the threshold for rational belief sits. In effect, this response denies that the relevant intuition is about the strength of the reason itself. I already admitted in Chapter 6 that this strategy might undermine the cases as I have presented them. However, some experimental findings by Sripada and Stanley (2012) that involve explicit ratings of the quality of evidence seem to corroborate PE-R.

There is yet another way in which proponents of STV may want to go on the offense. They may want to argue that there are well-known issues with making degrees of justification sensitive to practical factors, as PE-R suggests. By rejecting PE-R, and thereby having degrees of justification that are pure and unaffected by practical factors, STV can avoid these problems. I am willing to grant that PE-R, at least prima facie, creates problems concerning rational credences; I will get to that in the next section. Here, I shall stick with the theme that sometimes a good offense is the best defense. While it might seem appealing that STV leaves degrees of justification untouched, this creates two puzzles I briefly mentioned in Chapter 5. First, that there is an underlying pure notion seems puzzling. If both PE-K and PE-J are true, if the pragmatic does encroach on the epistemic, then why does it fall short of encroaching on degrees of justification? STV owes us an answer to this. Second, why should one bother with pragmatically polluted notions such as knowledge, if there are epistemically pure notions like degrees of justification? I do not want to pretend that these two puzzles cannot be answered by STV, but I think they deserve an answer. Meanwhile, note that my account of TPE avoids these two puzzles.

For anybody left wondering why the label "total pragmatic encroachment" is a fitting one, we can now offer some clarification. On my account, there is pragmatic encroachment on the strength of one's reasons for belief. But given that one's reasons determine one's degree of justification, whether one is justified in believing, and whether one's degree of justification suffices for knowledge-level justification, all these other epistemic notions – degree of justification, justified belief, and knowledge-level justification – will be sensitive to practical factors. While there is at bottom only one form of pragmatic encroachment, this form affects all other epistemological notions that are sensitive to reasons for belief. Practical factors encroach on epistemology thoroughly, if not totally. Since there are no pure epistemic notions – or, in any case, we have not seen that there are any – one cannot shift the focus of epistemology to these pure notions.8

8 It might be said that it is a drawback of my account that there are no pure epistemological notions, and that TPE is, in total, the most absurd form of pragmatic encroachment. However, remember that the point here is to evaluate different explanations of PE-K. So, in some sense, the debate between STV and TPE is an in-house debate between pragmatic encroachers. Those who wish to resist encroachment altogether must find some way to resist my argument for PE-K in Chapter 5.

Finally, partly in defense and partly in preparation for the next section, I would like to point out that PE-R, and hence my account of TPE, sits well with certain findings about degrees of belief, or credences. Norby (2015: 72) argues against what he calls the storage hypothesis, which holds that the degree of belief in a proposition is a stable and persistent state. Norby holds that epistemologists in particular should be aware of cases in which our degree of belief varies. In the epistemology seminar, one will be willing to assign a nonzero probability to the possibility that one is a brain in a vat, and thus one will have a degree of belief in the proposition that one has hands that falls short of certainty. But in most situations outside the epistemology seminar, one's degree of belief in one's having hands will be absolute certainty. Norby takes this as an indicator that degrees of belief are constructed on a given occasion. When we form a judgment about whether p, the degree of belief constructed depends on what possibilities are retrieved from memory on that occasion. In the epistemology seminar, the brain in a vat possibility is retrieved from memory, and thus one has a degree of belief in the proposition that one has hands that falls short of certainty. In ordinary situations in which the brain in a vat alternative is not brought to mind, one will have a degree of belief that amounts to certainty.9

My account of TPE sits nicely with this view of shifting credences due to shifts in alternative possibilities. In fact, one might say that my account provides a story as to why the observed shifts are a rational response to one's reasons: Such shifts are rational because they reflect that the degree of justification has changed.

Similarly, Gao (2019) argues in favor of what she calls credal pragmatism, according to which one's credence in p is sensitive to practical factors. She argues for this thesis by using familiar cases from the pragmatic encroachment literature, though she focuses on a variant in which the subjects self-report their confidence. Gao holds that it is very natural to self-ascribe a lower degree of confidence in a proposition when the CFA are elevated, compared to a scenario in which the CFA are low. Just as with Norby's view, one might say that my account of TPE provides Gao's thesis about one's credences with a story about why the shifts she argues for are rational given one's reasons: They simply reflect that one's degree of justification has changed because one's reasons have become weaker.

9 Besides this, Norby gives other examples in which degrees of belief change in unexpected ways that involve the phenomenon of "unpacking"; see Norby (2015: 80f.).

While proponents of STV may want to criticize the shifts in degrees of justification that follow from PE-R, my explanation in terms of TPE sits nicely with certain findings about the shiftiness of our doxastic states. Nonetheless, there are some well-known criticisms of the rationality of such shifting doxastic states. In the next section, I will introduce and address them.

7.5 The (In-)Stability of Rational Belief and Degrees of Justification

My preferred account of TPE may avoid some of the pitfalls associated with PE-K generally, or with its rival explanation, STV. But it certainly has one objectionable feature: degrees of justification can shift in response to varying practical factors. On my preferred account of TPE, this occurs because the strength of one's reasons is partly determined by practical factors. As a consequence, practical factors affect one's degree of justification, which will in turn affect whether a belief counts as rational. And since any rational credence will reflect one's current degree of justification, practical factors influence the rationality of credences. Therefore, on my account of TPE, rational belief and rational credence are unstable in the sense that a variation of a practical factor might alter whether a credence or belief is rational. That is because even if the propositions available as one's reasons remain the same across a variation in practical factors, their strength might not, which can affect whether a credence or belief is rational.

This consequence is objectionable because it leaves rational agents vulnerable to diachronic Dutch Book cases, that is, a series of bets or decisions that will amount to a net loss for the rational agent. This objection is raised in various forms, sometimes pertaining to rational belief, as in Reed (2012) and Schroeder (2018), sometimes pertaining to rational credences (or epistemic probabilities, which one might think of as a species of rational credences), as in Greco (2013), Rubin (2015), and Schroeder (2018). Here is a very simple case involving credences, based on Greco (2013).

Diachronic Dutch Book (DDB)
A bookie offers you a bet on whether the bank is open this Saturday. You know that it is, and thus have a high rational credence in this proposition. The bet offered pays $2 if the bank is open, and costs $2 if it is not.
We assume that given your high credence, the act that maximizes expected utility is taking the bet. So you take the bet. The bookie then offers you a second bet: it pays $0.01 if the bank is open, but costs you your life savings if it is not. All of a sudden, you are in a context with high CFA and, according to my theory, it is rational to revise one's credence downward. Suppose that given your new rational credence, not taking the second bet is the action favored by the new expected utility calculation. Then the bookie offers you the chance to cancel your earlier bet for a small fee, and you take it, as cancelling the bet now has the highest expected utility given your new rational credence. Thus, the bookie, without having any information that you lack, has extracted a small fee from you for nothing in return, by offering you a number of deals, each of which you regarded as fair at the time.
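The sequence of choices in DDB can be checked with a short expected-utility calculation. All numbers below are hypothetical, and the size of the credence revision is exaggerated for vividness; they are chosen only so that each step maximizes expected utility at the time it is taken.

```python
# Expected-utility walkthrough of DDB (hypothetical numbers).
def expected_utility(credence: float, if_open: float, if_closed: float) -> float:
    """EU of a bet paying `if_open` if the bank is open and `if_closed` otherwise."""
    return credence * if_open + (1 - credence) * if_closed

high_credence = 0.99  # low-CFA context: knowledge-level confidence
low_credence = 0.40   # high-CFA context; the drop is exaggerated for illustration
fee = 0.10            # cost of cancelling bet 1

# Step 1: with the high credence, taking bet 1 ($2 / -$2) beats declining (EU 0).
assert expected_utility(high_credence, 2, -2) > 0

# Step 2: bet 2 ($0.01 / -life savings) raises CFA, and with the lowered
# credence, declining bet 2 (EU 0) beats taking it.
assert expected_utility(low_credence, 0.01, -100_000) < 0

# Step 3: with the same lowered credence, cancelling bet 1 for the fee
# (EU -0.10) beats keeping it.
assert expected_utility(low_credence, 2, -2) < -fee

# Each step was fair by your own lights, yet the bookie keeps the fee
# regardless of whether the bank opens on Saturday.
```

Note that for cancelling to look rational at step 3, the revised credence must fall quite far; the sketch makes that assumption explicit rather than hiding it.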

The problem here is that it does not even take a very clever bookie to extract the small fee from you for absolutely nothing in return. The bookie does not know more than you; he only has the ability to offer you bets with high CFA. Given the belief/credence revisions TPE recommends, one becomes predictably exploitable, even by one's own lights: One knows that if one revises credences as outlined, then one will be exploitable by any bookie who is able to offer bets with high CFA.

While it is tempting to answer this objection by maintaining that vulnerability to Dutch Books is not indicative of any genuine epistemic failing, this reply is not very convincing given my overall project. If rational beings of thought and action revise their beliefs about the world in the way my account suggests, then being a rational being of thought and action entails being easily and predictably exploitable. And since such exploitability does not exactly seem to be a hallmark of rationality, the belief revisions my theory recommends do not seem very rational. While I understand the concern, I will argue that it is not weighty enough to abandon TPE; in fact, the features that make for vulnerability to Dutch Books also have a positive side. But first, I will present another case, posed in terms of rational belief and based on Schroeder (2018), which raises a similar problem.

Ferris Wheel (FW)
You are sitting in a Ferris wheel. The seatbelt holding you in your seat looks a bit flimsy. When getting in at the bottom of the ride, you believed that the seatbelt would hold if you leaned out, but your degree of justification puts you just barely over the threshold for rational belief. While you are at the bottom of the wheel, CFA are low. But then the ride starts, and when you are at the top of the wheel, CFA are high. Before actually leaning out, one should consider more possibilities, for example, that the seatbelt might not only look flimsy but actually be flimsy. Since this alternative is not ruled out by your reason for belief, there are more alternatives uneliminated, which weakens one's reason. This also results in a diminished degree of justification, which now falls just below the threshold for rational belief.

The problem FW raises is that at the bottom of the wheel, you rationally form a belief even though you know that, soon, you should rationally cease to believe. This seems to violate a reflection principle on belief. As Schroeder puts it, you are committed to something that you know you will be rationally required to give up very soon. This seems to be highly odd, given that you do not receive any new evidence or can expect to blamelessly forget some of your previous evidence. My account of TPE is objectionable because it holds that you are rational in violating this reflection principle. But there is a simple rejoinder to the problem of FW which can also be tweaked to deal, in part, with DDB. The notion of a deliberative context should not be seen as point-like or at least not restricted to one’s current time. Take the standard high CFA bank case, but suppose it’s Thursday night and Hannah contemplates whether she should deposit her paycheck on Friday afternoon or whether she can delay the deposit to Saturday morning. On Thursday night, when Hannah is not yet in a position to act, she is in a deliberative context with high CFA, even if her decision pertains to an act in the future. It is because on Thursday night Hannah knows that her decision about what to do on Friday is one for which CFA are high. She must take into account certain possibilities now, on Thursday, that she would otherwise ignore. This seems to be entirely within the spirit of the said reflection principle. The reflection principle rightly holds that it is not rational to hold a belief which you expect to abandon in the near future. But my account of TPE can easily accommodate this. If you expect to be in a situation in which CFA will be high, then even now one must take into account the additional possibilities that will be relevant when the time comes to act on the belief. 
This can easily explain why it is not rational to believe that the seat belt will hold you even at the bottom of the Ferris wheel. You can expect soon to be in a situation in which CFA will be very high; hence, even now, you must take into account certain alternatives that will then become relevant. Throughout the Ferris wheel ride, then, whether at the bottom or at the top, it is not rational to believe that the seat belt will hold you (given that it looks flimsy). Contrary to what some might assume, FW is not a case of unstable rational belief at all. More importantly, the issue that FW was intended to raise was that my account of TPE allows for predictably unstable rational beliefs. But, as I have just explained, this
charge is not true. While I have made this point in terms of belief, it equally applies to credences.

Although this line of reasoning will not completely avoid the problem of DDB, it can soften its blow. DDB is a highly artificial scenario that most of us do not encounter and have little reason to believe that we will encounter. Hence, at least from our own perspective, we have no reason to expect to be confronted with a bookie who would offer the series of bets. And hence, when we ordinarily form beliefs, we do not violate a reflection principle, because we simply have no reason to expect that circumstances such as DDB will arise. The clever bookie, we might concede, might be a genuine possibility, but so is the brain in a vat. In practice, we need not bother with taking either possibility seriously. Hence, it is not true that cases such as DDB suggest that if one forms beliefs or credences in the fashion that my account of TPE suggests, one will violate a reflection principle.

However, there remains a caveat. It can rightly be said that the issue in DDB is not a violation of a reflection principle, but rather that one is predictably exploitable by someone else, even when one has no reason to predict that one will actually be exploited. While this is true, there are two reasons why I believe that vulnerability to Dutch Books is not a dramatic issue that decisively speaks against TPE. First, we are all in the same boat. Even the clever bookies will be predictably exploitable if they conform to my preferred account of TPE. Because we are all equally vulnerable to exploitation, the risk that one will actually be exploited seems to be minimal. And if you are exploited, you can easily win your loss back from the bookie, as long as his credences and beliefs conform to my account of TPE. Second, and more importantly, the rational instability that my account of TPE allows for is fitting and beneficial in the environment we actually inhabit.
DDB is a highly artificial scenario in another sense. In DDB, we are allowed to go back on a decision that we have made. We are able to opt out of the first bet, for a small fee, after our credence has changed; that might seem acceptable given our new credence and the heightened risk of a loss. But ordinary decision-making and action work quite differently. Once we act, there is no going back. On the bright side, however, no CFA can actually be incurred unless one acts. Suppose you are in a low CFA scenario: You have a rational belief or credence that the store carries Sichuan pepper, and this is what rationalizes the intention to go to the store to get Sichuan pepper. Then suppose this suddenly turns into a high CFA scenario. According to my account of TPE, a lower credence might be rational now, and this lower credence might no longer be sufficient to
rationalize your intention to go to the store. But there is no guaranteed loss here. Making up one’s mind such that one intends to go to the store incurs no costs at all, but neither does changing one’s mind later on when one’s practical situation changes. If changing one’s mind incurred a small fee, then the situation would be analogous to DDB, but that is obviously not what our real-life scenarios are like. Moreover, for beings as vulnerable as us, even if changing one’s mind came with a small fee, this may well be a fee worth paying. When the CFA are high, it makes sense to guard oneself against failed action, and it can make perfect sense that extra safety measures, even if they come with extra costs, are worth paying for.10 Hence, whatever rational instability my account of TPE allows for, this instability does not seem to amount to a decisive objection.

In the literature, there are two further issues that are discussed under the label of stability. I will briefly mention how my account of TPE can handle them. Grimm (2015: 118) discusses the first. His worry is that what it takes to know should not radically differ from case to case. However, as mentioned in Section 7.4, Grimm argues for an explanation of PE-K based on a version of STV. If the threshold for knowledge-level justification moves, then it does seem like the demands one must satisfy in order to know could vary radically from case to case. My preferred explanation of PE-K, though, is TPE, according to which the threshold for knowledge-level justification stays constant. Since I am not committed to a moving threshold for knowledge-level justification, my version of TPE can avoid this stability problem. Nonetheless, it can fairly be said that the demands on knowledge vary indirectly on my version of TPE. A proposition that counts as a good enough reason and provides knowledge-level justification in one case might not necessarily do so in another.
But one can admit that and still avoid the demands on knowledge varying wildly from case to case. I assume that knowledge requires a fairly high degree of justification. So for any proposition that one knows, one will already have a fairly high degree of justification, and hence fairly good reasons for belief. On my account of TPE, the threshold for knowledge-level justification does not get pushed downward. Nor does my account imply that one’s reasons suddenly become extremely strong when CFA are low. But it might be that if the CFA rise and make it rationally incumbent to consider hitherto ignored possibilities, then one’s reasons for belief might not rule out these additional alternatives and hence become weaker. It is only in such cases that one’s knowledge could vary. But I think it is an exaggeration to claim that on my account of TPE conditions for knowledge will vary wildly. They will vary; that is an immediate consequence of sensitivity to practical factors. But I fail to see why they would have to vary wildly.

Finally, Ross and Schroeder (2014: 277) discuss the following issue under the label of stability. They argue that for a fully rational agent, evidentially irrelevant changes in one’s preferences should not elicit changes in belief, and that a number of proponents of STV fail to respect this stability condition. However, my account does respect this stability condition. It is not a change in one’s preferences that can bring about a change in what it is rational to believe, but a change in the strength of one’s reasons. Such a change can happen due to a change in practical circumstances, but ultimately, what explains why a rational agent should change her belief is a change in her reasons. In other words, on my account of TPE, some practical changes can result in evidential changes, and a fully rational agent should respect these changes by changing her beliefs accordingly. However, that does not conflict with Ross and Schroeder’s stability condition, which only holds that evidentially irrelevant changes should not bring about changes in belief. With that, my account of TPE can fully concur.

10 Staffel (2019) comes to a similar conclusion; she argues that incoherencies that result from violations of conditionalization may well be a sensible trade-off.

7.6 Multiple Decision Contexts and Destabilizing Trios

In this final section, I want to consider three more objections against PE-K in general. I engage with these objections to explain how my preferred account of TPE deals with them, but also to demonstrate how the sum of the views I argued for coheres. The first objection is called the horse/cart objection and concerns the proper order of explanation of the rationalization of belief and action. The origin of the horse/cart objection is Reed (2012), who accuses Fantl and McGrath (2009) of putting the cart before the horse by using a dialogue that concerns whether one should invest in a certain stock:

a: If Stock will go up in value, I should invest in it. So, should I invest in Stock?
b: That depends – do you know Stock will go up in value?
a: That depends – should I invest in it?11

The problem here is that on Fantl and McGrath’s view, one should only invest in Stock if one’s epistemic position toward the proposition that Stock will go up in value is practically adequate. Intuitively, knowledge is supposed to be useful in deciding whether it would be rational to do something. But on Fantl and McGrath’s view, knowledge becomes useless for certain decisions, as the dialogue here suggests. One cannot settle the question whether one ought to invest without settling the question whether one knows, but settling each question seems to require settling the other first. Moreover, it seems that Fantl and McGrath face a serious violation of the proper order of explanation in decision-making. Reed (2012) holds that if Fantl and McGrath’s practical adequacy condition on knowledge holds, then it is undetermined whether a belief amounts to knowledge unless it is determined whether acting on that belief is rational. But we do not determine whether we know that p by determining whether it would be rational to act on p. That seems absurd. Instead, we determine whether we know and then we determine how it is rational to act.

11 Taken, with slight modification, from Reed (2012: 470).

Fantl and McGrath (2012b: 489) reply to Reed that the relevant conditional involving practical adequacy “implies nothing about what the proper explanatory order is – whether knowledge explains justified action or whether justified action explains knowledge.”12 Hence, their view is not committed to a reversal of the ordinary order of explanation. I concur with this, but the horse/cart objection, although originally employed against a very specific combination of views, is sometimes leveled against, or at least associated with, PE-K more generally. Besides Reed, one can find a very similar complaint in Ichikawa et al. (2012). They argue that PE-K has the consequence of reversing the order of explanation given by belief–desire psychology. They hold that if PE-K were true, we must first determine how to act, and then what to believe. Locke (2017: 654f.)
even seems to see reversing the order of explanation as the heart of PE-K:

Where proponents of pragmatic encroachment disagree with traditionalists is primarily over the order of explanation . . . According to proponents of pragmatic encroachment, however, the fact that it is rational for you to act as if p in certain choice situation [sic] is part of what makes it the case that you know that p . . . According to the reasoning-disposition implicature account, you know that p in part because you normally face certain kinds of choice situations.

Locke assumes that proponents of PE-K generally are committed to reversing the order of explanation. No detailed argument is given to support this assertion. But Locke thinks that his own preferred account, the reasoning-disposition implicature account, is a variant of PE-K because it reverses the order of explanation in the manner he deems characteristic of PE-K. I will not discuss Locke’s reasoning-disposition implicature account here. If it is, as Locke himself holds, committed to a reversal of the order of explanation, then I think that this is an objectionable feature of the account. However, if Locke’s claim that PE-K generally is committed to reversing the order of explanation is correct, then my own preferred account of TPE is committed to such a reversal. Since Locke provides no argument for the thesis that PE-K generally is committed to a reversal of the order of explanation, the easiest way to proceed is to explain why my preferred account of TPE has no such consequence.

According to the structure of the primacy of the practical developed in Chapter 6, Section 6.3, decision-making is a two-stage process. Given this structure, we do not need to determine an answer to the practical question of what to do in order to find an answer to the epistemic question of what to believe. Rather, how we go about answering the epistemic question is influenced by certain practical factors, though not by the answer to the practical question of what to do. The practical question only gets answered on the second layer of practical rationality, after the epistemic question of what one should believe has been answered on the layer of epistemic rationality. How one answers the epistemic question will in turn be shaped by practical factors, which determine which alternatives it is rational to take into account in one’s decision context; this is the first layer of practical rationality. But to stress the point: the practical factor that influences finding an answer to the what-to-believe question is not the answer to the what-to-do question but CFA. Therefore, my preferred account of TPE is not committed to an objectionable reversal of the order of explanation.

12 The quoted passage refers to their principle KJ (discussed in Chapter 5, Sections 5.3 and 5.4). However, as I have pointed out, KJ is equivalent to PAK for all our concerns.
The second objection to PE-K concerns a case in which one is simultaneously making more than one decision. Often a single proposition can be simultaneously used in reasoning about more than one action, as in the following case from Anderson (2015: 352).

Dinner
Alli tells her husband Tim that she is going to a coffee shop for the evening and won’t be home until late. On the basis of her testimony, Tim considers two actions he might take: first, make pizza for dinner – Alli doesn’t like pizza, so Tim only has pizza when she’s not home; and, second, invite his brother for dinner. Tim’s brother recently had a huge disagreement with Alli and Alli made it very clear that she didn’t want to see Tim’s brother for a while. Tim decides to make pizza but to not invite his brother over.

Anderson uses this case to spell trouble for Jason Stanley, who is committed to PE-K and the Reason–Knowledge Principle (RKP). The RKP holds that knowledge is necessary for it to be appropriate to treat a proposition as a reason. In Dinner, the proposition that Alli will be home late is used for two decisions, one a high stakes decision about whether to invite his brother over, the other a low stakes decision about whether to have pizza. It seems to be appropriate to rely on the relevant proposition in making the latter decision, but not in the former. So it seems as if Tim both knows and does not know that Alli will be home late. But this cannot be. So either PE-K or the RKP must be false. I have already rejected the RKP in the Prologue. But it is worthwhile spelling out how my alternative to the RKP, the Contextualist Justification Norm for Practical Reasoning (CJN) handles this case in conjunction with my version of TPE. In Chapter 3, I argued that permissibly treating a proposition as a reason requires a contextually determined degree of justification for believing that proposition. Given different actions associated with different COE, different degrees of justification are needed for it to be permissible to rely on a proposition in practical reasoning. This explains why Tim can treat the proposition that Alli will be home late as a reason to make pizza, as here COE do not exceed CFI, but not to invite his brother, as here COE do exceed CFI. My account of TPE can say that Tim fails to know because his reasons for believing the relevant proposition have been weakened because his practical situation includes the question whether to invite his brother over. This question turns this into a deliberative context in which CFA are high. So Tim’s practical situation makes it rational to consider more alternatives to “Alli will be home late.” The number of alternatives not ruled out thereby increases and renders Tim’s reasons for belief weaker. 
Therefore, we can say that Tim fails to know, and we can say that the degree of justification for Alli being home late is not sufficient for inviting his brother over. But it is still high enough to make pizza. So my views do not conflict with one another, and together they deliver all the right verdicts. This mechanism is also sufficient to account for all the other cases in Anderson (2015), but I cannot go through each one of them.

Cases involving multiple decisions/actions are also helpful for further refining the notion of CFA, which are relative to a particular decision. Brown (2014b: 182) considers what she calls the unity approach. According to this approach, as CFA rise, the epistemic standard for knowing any proposition rises. But its skeptical implications tell in favor of avoiding the unity approach. Dinner, and my treatment of it, favors
Brown’s other suggestion, the relevance approach (Brown 2014b: 186), according to which only those propositions on which one’s practical reasoning depends can be affected by one’s practical circumstances. When a proposition is relevant to more than one decision, it is always the decision with the highest CFA that determines how circumspect one must be in the consideration of alternatives.

Finally, Anderson and Hawthorne (2019a) offer case trios that are designed to show that the standard judgments that drive PE-K seem to involve a number of different factors and that pragmatic encroachers must get clear on which factor is the driving force behind PE-K:

A: The train is leaving and one has to decide whether to get on it. One wants to go to Foxboro. It would be a mild inconvenience to get on the wrong train. One has excellent evidence that it is the last Foxboro train of the day (and it is). One has no opportunity of double-checking.

B: The train is leaving and one has to decide whether to get on it. One wants to go to Foxboro. One has excellent evidence that it is the last Foxboro train of the day (and it is). One can with only minimal effort check the train timetable before getting on. It would be pretty disastrous to get on the wrong train.

C: The train is leaving and one has to decide whether to get on it. One has excellent evidence that it is the last Foxboro train of the day (and it is). It would be pretty disastrous to get on the wrong train. The only way to double-check is to pay the draconian person ten thousand dollars (payment would be even more of a disaster than getting on the wrong train). (Anderson and Hawthorne 2019a: 254)

I shall set aside what Anderson and Hawthorne assume that proponents of PE-K must say about each of these cases and jump straight to what the sum of my views says. For A, it seems plausible that one knows that the train goes to Foxboro; after all, one has excellent evidence. Since COE are low and CFI are high (as one cannot double-check), the consequent of PE-K* is not negated. For B, COE exceed CFI and thus the consequent of PE-K* is negated; hence one lacks knowledge-level justification and one fails to know. C is an interesting case because it provides another reason to steer clear of the genuine constituent reading of PE-K*. For in C, COE do not exceed CFI, and thus PE-K* would not support the verdict that one fails to know. Yet I have been explicit that the relevant factor in my explanation of PE-K is CFA. And CFA do not differ between B and C; hence I can maintain that one fails to know in C as well. C is one instance in which PE-K* fails to indicate a lack of knowledge. The reason is that C is a case in which CFI are artificially inflated. In
Chapter 3, I explained that usually, when one’s epistemic position is already strong, further inquiry comes with a cost, as it will be harder to strengthen one’s position. Thus one might assume that when CFI are high, one must have a strong epistemic standing. But, as C demonstrates, CFI can be high for quite different reasons. Therefore, one should avoid making an explanation of PE-K dependent on CFI, as I have done. In sum, then, the case trio Anderson and Hawthorne give poses no issue for the sum of views I have argued for.13

13 Anderson and Hawthorne (2019a: 255) provide another case trio, but I fail to see why pragmatic encroachers are committed to holding that one lacks knowledge in the second case of this trio. Hence, I omit a discussion of this trio because I do not see the inconsistency that Anderson and Hawthorne want to press encroachers on.

7.7 Summary

In this chapter, I dealt with a large number of cases and positions, so a final summary might be helpful for all those who, quite understandably, have lost track of where I was going. I argued that my preferred account of TPE offers a viable and attractive explanation of PE-K, or rather of pragmatic encroachment in epistemology more generally. It can handle cases like Forced Choice and it does not hold that running down the clock can be a way to gain knowledge, unlike the genuine constituent explanation or the practical adequacy version of STV. Unlike STV, it also avoids the problem of Conjunctive Ascriptions. TPE can also easily explain PE-J, unlike STV, which must commit to the Equivalence Thesis to do so. Since it is based on PE-R, TPE has a straightforward explanation of PE-R. Proponents of STV must deny PE-R, which also raises questions about why some epistemological notions are pure and others are not, and why epistemologists ought to care about the impure ones. Finally, I have argued that the instability of degrees of justification my account of TPE commits me to is not objectionable, that my account is not committed to a questionable reversal of the order of explanation, and that the sum of my views can handle more complex cases. While the final verdict is reserved for the reader, I believe I have at least provided reasons to take the underexplored view, TPE, much more seriously.


Chapter 8

Social Beings

Throughout this book, I have returned to the fact that we are beings of thought and action, and to how my views fit with this double nature and our needs as such beings. However, until now, I have entirely excluded a further important fact about us: Undeniably, we are also social beings. We live in groups. We often think and act on our own, yet these thoughts and actions are socially embedded. And we also often think and act together. This raises a number of issues. We may all be created equal, but, in most societies, there will be what one could call natural and unnatural inequality. Natural inequality consists in the fact that our situations differ and that we each care about and hold dear different things. For me, it might be entirely unimportant whether the bank is open on Saturday or not, while this question may be of the utmost importance to you. And then there is a less natural kind of inequality. In most societies, resources are not distributed equally among all members. While the consequences of failed action (CFA) may be massive for me, the same CFA might not even make you blink because, for example, you have the financial resources to absorb CFA that would be devastating to me.

In this chapter, I will spell out how, on my views, these differences affect how we think and act individually, but also together. I will consider both kinds of inequality, though I start with massive wealth inequality. In Section 8.1, I introduce the problem of wealth and explain how my account of pragmatic encroachment at least avoids certain instances of it. In Section 8.2, I introduce the problem of joint deliberation, which my account cannot easily avoid. In Section 8.3, I offer a view which I call Epistemic Communism to complement my account of pragmatic encroachment in order to deal with the problem of joint deliberation. In Section 8.4, I address how my account deals with the phenomenon of epistemic injustice.


8.1 The Problem of Wealth

Russell and Doris (2008) hold that indifference or simply being filthy rich should not be factors that can make it the case that a true belief amounts to knowledge. Their explicit target is Jason Stanley’s version of PE-K; however, their cases are a challenge for any version of PE-K. The following case is to be contrasted with the standard bank case for which it is said that the stakes are high, for example, a case like my HGR in Chapter 6.

Trust Fund Baby (TFB)
Richboy Richie, the trust fund baby, is wondering whether to brave the Friday afternoon lines or return to the bank Saturday (late) morning, and deposit a cheque he has just received from his parents. His roommate, Tad, lounging in the passenger seat of Richie’s Hummer, points out that banks sometimes do change their hours, and given that their rent is due, failure to make a deposit will likely result in yet another bounced cheque to their landlord, whose patience has already been strained to the breaking point. Richie responds, “Chill, dude, I know the bank will be open, I was there last week, and even if I bounce a cheque, my parents and I can buy that dump of an apartment building.” Inhaling deeply, Tad nods his agreement. (Russell and Doris 2008: 432)

To see the problem (and, later, its solution) clearly, remember that the debate about PE-K is usually framed in terms of stakes. I will adopt the term “stakes” to make the problem of wealth explicit. Hannah in HGR, which one could consider to be a paradigm high stakes case, and Richie in TFB resemble each other in many core features. Hannah and Richie both remember their last visit to the bank, and they both must make the deposit in time; otherwise, they will be late with an important payment. However, given his trust fund heritage, Richie has enormous financial resources that Hannah lacks. If Richie is threatened with eviction, he can just buy the building. But should we think that this difference in financial power could lead to a difference in epistemic status between Hannah and Richie? It seems that Richie, unlike Hannah and despite the similarity of their situations, is simply not in a high stakes situation. One’s financial power can decide whether something is a high stakes case. But if what is at stake can influence whether one knows, then one’s financial power can influence whether a situation is potentially knowledge undermining. If PE-K is right, then Richie would know, because he is like Hannah in a corresponding low stakes case (see case LGR in Chapter 6, Section 6.1), not like Hannah in HGR. But it seems highly implausible that one’s financial power has any influence on one’s epistemic status, knowledge in particular. Knowledge seems insensitive to such factors as the wealth of a putative knower. If PE-K
is committed to the knowledge ascriptions I just outlined, it must allow that some rather unusual factors, for example, one’s wealth, can be a knowledge-making factor. This seems rather absurd.

Jason Stanley, the target of Russell and Doris, does not even attempt to deny that financial power can influence one’s epistemic standing; in fact, he wholeheartedly endorses it. For Stanley, TFB does not unearth a bug, but a feature of the theory. In Stanley (2015: 306), he writes: “Understandably, they [Russell and Doris] present this as an objection, but to me it is a welcome consequence of the view, one that does needed explanatory work in the political realm.” I surmise that most epistemologists will endorse this suggested consequence less cheerfully. I consider myself to be among these epistemologists. Fortunately, I believe that my account of PE-K has the resources to deal with the problem of wealth that TFB raises. However, to be upfront, my account of total pragmatic encroachment (TPE) will not entirely prevent economic inequality from leading to epistemic inequality, as I will explain in Section 8.4. But for now, let’s stick with TFB.

First, we should switch back to the notions of costs of error (COE) and CFA that were central to my account of pragmatic encroachment. This will help explain why Richie in TFB is not like Hannah in what others may call a low stakes case. For Richie, the CFA are just as high as for Hannah in a case like HGR. It is true that Richie can easily offset these consequences through his wealth, although this will generate further massive costs for him (even though they may not be devastating due to his wealth). Richie’s capacity to offset his CFA is what makes it appear that the stakes for Richie are low and that he is like Hannah in LGR, and not like Hannah in HGR. But talk of stakes seriously leads us astray here.
When we think of TFB in terms of CFA, it becomes obvious that the CFA for Ritchie in TFB and for Hannah in HGR are the same. They both risk losing their home. Unlike Hannah, Ritchie can easily offset these costs by simply buying another house. But since Ritchie is, in terms of CFA, just like Hannah in HGR, and the significant notion for my version of pragmatic encroachment is CFA, they should both be treated alike. They both fail to know. My version of TPE can explain why Ritchie lacks knowledge-level justification. The explanation is actually the same as for Hannah in HGR. Given that Ritchie and Hannah both have the same CFA, it is incumbent on both to consider more alternatives than in a case in which CFA are low. The fact that Ritchie can absorb the CFA he could potentially occur does not make a difference. I often invoked the standard of the reasonable person. I believe that Ritchie falls short of this standard, as


I suppose a reasonable person would consider more alternatives than in a corresponding low CFA case, even if they could absorb the CFA by spending enough money.1 But once more alternatives need to be considered, there will be more alternatives left uneliminated, just like in HGR. Ritchie's reason for belief is therefore a weaker reason than in a low CFA scenario, and thus his degree of justification will fall below knowledge-level justification. Finally, TFB is also diagnosed as a case of nonknowledge by the principle PE-K* (Chapter 5, Section 5.2). Even for Ritchie, it is true that the costs of further inquiry (CFI) are not higher than the COE, the latter being driven up by high CFA. Surely, even for Ritchie, the costs of checking the hours are below the costs of losing his home. Since it is not the case that the CFI exceed the COE, it is not the case that his belief has knowledge-level justification, and therefore it cannot amount to knowledge. In sum, my account is not susceptible to the problem of wealth that TFB raises. However, as I will show in the next section, there are further cases of inequality among subjects that raise their own problems and will require a different treatment.
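The comparative structure of this PE-K* diagnosis can be displayed schematically. The shorthand is mine, not the book's official notation: KLJ(p) stands for knowledge-level justification for p, and K(p) for knowledge that p.

```latex
% Contrapositive reading of PE-K* as applied to Ritchie in TFB:
% if the costs of further inquiry (CFI) do not exceed the costs of
% error (COE, driven up by high CFA), knowledge-level justification
% fails, and with it knowledge.
\[
  \neg(\mathrm{CFI} > \mathrm{COE})
  \;\Longrightarrow\;
  \neg\,\mathrm{KLJ}(p)
  \;\Longrightarrow\;
  \neg\, K(p)
\]
```

For Ritchie, the cost of checking the bank's hours (CFI) is far below the cost of losing his home (COE), so the antecedent is satisfied and knowledge is blocked.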

8.2 The Problem of Joint Deliberation

So far we have only considered cases that involve a single subject. We will now look at cases in which there is more than one subject and where the consequences of an action affect subjects differently. This will force us to become clearer about for whom the consequences of failed action matter. This is often not explicitly discussed in the literature, although it is an important question that is relevant for so-called third-person cases. The early defenders of PE-K, Hawthorne (2004) and Stanley (2005), are committed to subject-sensitivity. They hold that it is the consequences for the subject of the knowledge ascription that influence whether the subject knows. More recently, Grimm (2015) and Hannon (2017) have argued that it is not just the consequences for the subject that influence knowledge ascriptions. To contrast their view with subject-sensitivity, we might call it social-sensitivity. Other advocates of PE-K have been less clear on where they stand; their views seem compatible with both subject- and social-sensitivity. In the following, I will argue that neither subject-sensitivity

1 We could bolster this assumption with the further assumption that even incredibly wealthy persons will probably want to avoid having to spend large sums of money on things they would not otherwise spend money on, if they could avoid doing so. This assumption may be open to dispute, and I will leave it to my critics to dispute it if they wish, but it does not seem to be an outlandish assumption.


nor current accounts of social-sensitivity are satisfying. I will propose a moderate version of social-sensitivity as their replacement. But first, let us get familiar with the troublesome cases for versions of PE-K that endorse subject-sensitivity. Consider the following case from Stanley (2005: 5) that concerns third-person knowledge ascriptions.

High Attributor–Low Subject Costs
Hannah and her wife Sarah are driving home on a Friday afternoon. They plan to stop at the bank on the way home to deposit their paychecks. Since they have an impending bill coming due, and very little in their account, it is very important that they deposit their paychecks by Saturday. Hannah calls up Bill on her cell phone, and asks Bill whether the bank will be open on Saturday. Bill replies by telling Hannah, "Well, I was there two weeks ago on a Saturday, and it was open." After reporting the discussion to Sarah, Hannah concludes that, since banks do occasionally change their hours, "Bill doesn't really know that the bank will be open on Saturday."

While Stanley (2005: 5) concedes that we might have the intuition that Hannah's utterance denying Bill knowledge is true, his own theory, committed to subject-sensitivity, prevents him from endorsing this judgment. According to Stanley (2005: 97), Bill knows that the bank is open. Stanley (2005: 102) explains the false intuition that Bill does not know by holding that Hannah asks herself not whether Bill knows given his actual situation, but rather whether Bill would count as knowing if he were in Hannah's situation – to which the answer is "no." I do not want to dispute Stanley's verdict that Bill knows, as I will come to the same verdict, albeit for different reasons. Yet Stanley's line of reasoning behind the verdict is insufficient to account for other cases. We can make this vivid if we put Hannah and another very wealthy person, Bill Gates, in the same situation, as in the following case.

Joint Deliberation
Hannah and Sarah are driving home on a Friday afternoon. They have just picked up their friend Bill Gates, who is sitting in the back, to go for a drink. All parties in the car agree to live by the proverb "Lend your money and lose your friend." Hannah and Sarah plan to deposit their paychecks on the way to the local bar. Since they have an impending bill coming due, and very little in their account, it is very important that they deposit their paychecks by Saturday. Arriving at the bank, they notice the long queues and all agree that it would be annoying to queue instead of spending time together having drinks. So they consider whether it would be an option to return on Saturday morning. Bill says, "I was at the bank two weeks ago on Saturday, so I/you/we know that the bank is open on Saturday. Let's just go, you can come back tomorrow." Hannah notes that she was at the bank two weeks ago, too. But Sarah reminds them both that


banks occasionally change their hours. Hannah concedes, “I guess I don’t know that the bank will be open on Saturday,” while Bill insists, “Well, I do know that the bank will be open on Saturday, trust me!”

Just as in High Attributor–Low Subject Costs, I think there is a strong intuition that Bill Gates does not know, despite his claiming otherwise. To me, this intuition persists no matter which pronoun we insert in Bill Gates's first utterance in the case. Just as it seems that Gates is wrong to claim that he knows, it seems wrong to claim that Hannah and Sarah know, or that they all know. Additionally, Gates's insistence on having knowledge strikes me as highly insensitive to the situation at hand. Yet, if subject-sensitivity is correct, then it follows that Bill Gates does know, as his CFA are low. The problem is exacerbated by plausible links between knowledge and rational action and by the usual assumptions about the transmission of knowledge by testimony. As I argued in Chapter 4, the following principle seems to hold.

Knowledge-Level Justification Sufficiency (KJS)
If one's degree of justification for p in a deliberative context (DC) suffices for knowledge-level justification, then it is rationally permissible to treat p as a reason for action in DC.

We can set aside complications about group knowledge2 and focus just on a single subject, Bill Gates, not the whole group in the car. Now if Bill indeed knows that the bank is open while Hannah does not, then it would seem fine if she simply relied on Bill Gates in joint deliberation about what to do. Or she could hand him their paychecks and ask him to make the deposit for them tomorrow, since he knows, while Hannah does not. But it seems highly irrational for Hannah to follow through on either of these options. And, likewise, it does not seem permissible for Bill Gates to offer to deposit the paycheck for Hannah the next day, because he knows that the bank will be open. So if it were true that Bill Gates knows, then KJS would be false. Given the centrality of KJS to my case for PE-K, this would be a devastating result. Here is another peculiar thing about this situation of joint deliberation. A common way to acquire knowledge is through the testimony of others

2 On most accounts of group knowledge, it is possible for a group to know that p even if not all members of the group know that p. On such accounts, it is therefore possible that the group of Hannah, Sarah, and Bill would count as knowing that p, even if only Bill knows. Even if the group knew, this would still point to a breakdown of KJS. It does not seem permissible for them to rely on the proposition that the bank will be open in reasoning and to simply go for drinks straightaway.


given that they possess knowledge. However, the transmission of testimonial knowledge must fail in this case: if Bill actually knows, Hannah still does not come to know despite Bill's repeated assertion that the bank will be open on Saturday. In sum, Joint Deliberation brings out three problems for a conception of PE-K that is based on subject-sensitivity. First, there is a clash with intuitions about the truth value of Bill Gates's knowledge claim. Second, if Gates knows, principles like KJS seem to break down in Joint Deliberation. Third, if Bill knows, then there is an odd breakdown of the usual transfer of knowledge through testimony. I shall call this set of related problems the problem of joint deliberation. Until now, I have been silent on whether my account of TPE is an account of pragmatic encroachment based on subject-sensitivity, that is, one on which it is in all circumstances the subject's own CFA that determine their epistemic standing. The problem of joint deliberation clearly demonstrates that I would be well advised to steer clear of subject-sensitivity. However, this naturally raises the question of what an alternative to subject-sensitivity could look like. I will develop one in the next section.

8.3 Epistemic Communism

In recent years, epistemologists have explored ideas from Edward Craig's (1990) book Knowledge and the State of Nature. The central idea of Craig's approach to epistemology is to investigate what role the concept of knowledge plays in our conceptual repertoire, that is, what use the concept of knowledge has for us. Craig identifies flagging reliable informants as the key function of the concept of knowledge. Let us imagine we are back in some original state of nature. When we consider whether certain berries are poisonous or edible, we value somebody with reliable information on the matter, and it is useful to flag this person as a knower. I shall set aside concerns with Craig's specific idea that the purpose of knowledge ascriptions is to flag reliable informants. Inspired by Craig, others have proposed further functions distinctive of knowledge, such as evaluating the blameworthiness of certain behavior (see Beebe 2012). I will not debate whether knowledge serves just one specific function or whether a specific function is actually performed by the concept of knowledge. I think the crucial insight of Craig and others is not that knowledge serves some function or other, but rather that they draw attention to the fact that knowledge is deeply embedded in social practices and that the concept of knowledge is highly useful for us as social beings. Whatever the precise


function is, if knowledge plays a role in social practices, then our theorizing about knowledge should take this into account. Individualist epistemology, to use a term coined by Goldman (1999), which is concerned with the epistemic standing of individuals, might still have important social components.3 In this section, I explore this idea, which I shall call Epistemic Communism.4 The following communist manifesto does not aspire to completeness, but it captures the key idea.

Epistemic Communism
The epistemic standing of a belief is sensitive not only to facts about the individual S who holds the belief but also to social aspects.

I will spell out how Epistemic Communism can help various accounts of PE-K to deal with the problem of joint deliberation. Views of PE-K that allow for social-sensitivity, for example, those of Grimm and Hannon, fall under my label Epistemic Communism, and they are motivated by the same Craigean considerations that I have put forward. Nonetheless, the label Epistemic Communism is wider than suggested by the views of Grimm or Hannon. It is wider in the sense that it does not specify which epistemic standing is sensitive to social aspects. In the works of Grimm and Hannon, knowledge seems to be the only epistemic standing that is sensitive to social aspects. I take the relevant epistemic standing to be the strength of one's reasons for believing, staying true to my explanation of PE-K in terms of TPE. However, this influence will then have further effects on all epistemic notions that are sensitive to reasons for belief, such as epistemic justification and knowledge. I will later spell out which social aspect one's epistemic standing is sensitive to. In order to see how Epistemic Communism can help with the problem of joint deliberation, it will be helpful to consider another classic case pair: Cohen's airport case (see, for example, Cohen (1999)).

Airport Case A
Mary and John are at Los Angeles airport contemplating taking a flight to New York. They want to know whether the flight has a layover in Chicago. They overhear someone ask if anyone knows whether the flight makes any stops. A passenger, Smith, replies, "I do. I just looked at my flight itinerary and there is a stop in Chicago."

I will spell out how Epistemic Communism can help various accounts of PE-K to deal with the problem of joint deliberation. Views of PE-K that allow for social-sensitivity, for example, those of Grimm and Hannon, fall under my label Epistemic Communism and they are motivated by the same Craigean considerations that I have put forward. Nonetheless, the label Epistemic Communism is wider than suggested in the views of Grimm or Hannon. It is wider in the sense that it does not specify which epistemic standing is sensitive to social aspects. In the works of Grimm and Hannon, knowledge seems to be the only epistemic standing that is sensitive to social aspects. I take the relevant epistemic standing to be the strength of one’s reasons for believing, staying true to my explanation of PE-K in terms of TPE. However, this influence will then have further effects on all epistemic notions that are sensitive to reasons for belief, such as epistemic justification and knowledge. I will later spell out which social aspect one’s epistemic standing is sensitive to. In order to do see how Epistemic Communism can help with the problem of joint deliberation, it will be helpful to consider another classic case pair - Cohen’s airport case (see, for example, Cohen (1999)). Airport Case A Mary and John are at Los Angeles airport contemplating taking a flight to New York. They want to know whether the flight has a layover in Chicago. They overhear someone ask if anyone knows whether the flight makes any stops. A passenger, Smith, replies, “I do. I just looked at my flight itinerary and there is a stop in Chicago.” 3

3 Goldman (1999) notices that the boundaries between individual and social epistemology might not be clear-cut.
4 I borrow the label "Epistemic Communism" from Dogramaci (2012) and (2015). However, this should not be seen as an endorsement of Dogramaci's views.


Airport Case B
The setup is the same as the aforementioned scenario except now Mary and John have a very important business contact they have to meet at Chicago airport. Mary says, "How reliable is that itinerary, anyway. It could contain a misprint. They could have changed the schedule since it was printed, etc." Mary and John agree that Smith doesn't really know that the plane will stop in Chicago on the basis of the itinerary. They decide to check with the airline agent.

Airport Case B is similar to High Attributor–Low Subject Costs, but there is no direct communication going on between Mary and Smith. Mary does not ask Smith directly; she and John simply overhear him talking to somebody else (and we do not know whether that person has high CFA or not, but we assume that they do not). We should ask ourselves whether Mary's claim that Smith does not know in Case B is true. There are at least three possible answers that bring with them various burdens:

(A) Smith does know that p. Subject-sensitivity is true. Mary's claim is wrong, so we must account for why an intuitively correct knowledge denial is false.

(B) Smith does not know that p. Social-sensitivity is true. Why did Mary's practical concerns lead to a loss of knowledge for Smith, who is not aware of Mary's concerns (and need not be aware of them)?

(C) It is both true that Smith knows that p and that Smith does not know that p. From Smith's perspective, it is true that Smith knows that p; however, from Mary's perspective, it is not true that Smith knows that p. This view is only tenable if knowledge ascriptions are either context-sensitive (for example, like utterances involving indexical expressions) or subject to some sort of relativism.

Opinions diverge on which option is preferable. I will set aside option (C), which represents the classical contextualist treatment of "knows." I do so not because I think that there is anything wrong with (C) per se. But as the astute reader might have noticed, my account so far has avoided any form of context-sensitivity in the sense of subtle shifts in the meaning of epistemic terms. The situation-sensitivity of reasons for belief introduced in Chapter 6 is not to be mistaken for context-sensitivity of "reasons." I want to explore whether the more complex cases I am considering in this chapter must really push us towards contextualism or whether an invariantist treatment of them is available.
If there is, then there will be considerably less pressure to adopt contextualism. Option (A), as pointed out in Section 8.2, is endorsed in Stanley (2005) where we are also offered an explanation of why our intuitions lead us


astray in this case. Option (B) is endorsed by Hannon (2013) and Grimm (2015). I think that both (A) and (B) are unsatisfying. Joint Deliberation has already highlighted the problem with option (A). If Smith can know, then it seems that Bill Gates, engaged in deliberation with Hannah, knows as well, which leads to the problem of joint deliberation. The problem with option (B) is that one can lose knowledge if a person with high stakes happens to be eavesdropping. Grimm (2015) is quite explicit about this consequence, although he does not seem to be bothered by it. He says the following about a case such as the airport case, in which Hannah is in a position like Smith's:

You might therefore innocently take Hannah to know because you think no one with elevated stakes might actually appeal to her belief, but someone else who is actually shouldering such stakes might be listening in all the while and take her not to know. On the view here, you would be mistaken in your judgment, and the eavesdropper would be right. (Grimm 2015: 134)

On option (B), then, not only does one lose knowledge in many circumstances, but one would lose and gain it in absurd ways. Imagine you and a friend want to catch a flight at the airport. You say you know that your flight leaves from gate three. It is a scenario in which CFA are low for you. Your reason for belief is strong enough to know given your practical situation. But as you utter the sentence, a subject with high CFA happens to walk by and overhears your utterance. At this moment, on Grimm's account, your knowledge claim is false. Odd enough. But now imagine that your friend, busy checking his luggage, did not hear what you said and asks you to repeat it. The subject with high CFA is now at the airline counter and no longer able to hear you. You repeat what you just said. All of a sudden, you would be speaking truly, because the high CFA subject is no longer eavesdropping. But this seems to be a bizarre way to lose or gain knowledge. While I have suggested that knowledge might be a lot less stable than we traditionally think, it should not be as unstable as Grimm's variant of option (B) suggests. The problems of options (A) and (B) give us an idea of what a better account should look like. The problem with option (A) and subject-sensitivity is that it is too narrow. It is not only the CFA of a subject that seem to influence their epistemic standing. The problem with option (B) is that it is too wide, making it too easy for other people's CFA to influence one's own epistemic standing. However, a case like Joint Deliberation indicates how we can plausibly restrict the range of subjects whose CFA can have a bearing on one's epistemic standing. Joint Deliberation indicates


that those subjects with whom we are engaged in joint deliberation (those with whom we intend, or have reason, to cooperate and coordinate) are the ones whose CFA can affect one's epistemic standing. In Joint Deliberation, the fact that Bill Gates is engaged in joint deliberation with Hannah, for whom CFA are high, makes it incumbent on Bill Gates to consider certain alternatives that his own CFA do not require him to consider. The by now familiar explanation of PE-K via TPE then works as before. Bill Gates's reason rules out fewer alternatives, and hence his reason in Joint Deliberation is weaker than it would be if he were alone, and so he fails to know what he would have known had he not been engaged in joint deliberation. In the airport case, the fact that Smith is not engaged in any form of joint deliberation with Mary means that Mary's high CFA have no effect on Smith. Therefore, he maintains his knowledge even if Mary is in his immediate vicinity and overhears his conversation. The same holds for High Attributor–Low Subject Costs, as Bill is not in joint deliberation with Hannah, although they are communicating directly. So on the proposed variant of social-sensitivity, the CFA relevant for determining which alternatives need to be considered need not be one's own CFA. Whether one is in a deliberative context in which the CFA are high depends not only on the consequences of the decision that one incurs oneself but also on the consequences for all those subjects with whom one is in joint deliberation and who are affected by the outcomes of the decision. This raises the question of what precisely counts as joint deliberation. I have no complete account to offer here, but I think that to be in joint deliberation with another person, one must at least have an inkling that one is deliberating together. For that to be the case, the action need not be done together and need not affect all parties.
To be part of joint deliberation, it is sufficient to function as an advisor in deliberation. But again, one must have an inkling that one is functioning as an advisor – which cannot be said about Bill in High Attributor–Low Subject Costs. I have intentionally used the vague term “inkling” to avoid a commitment about what it takes epistemically to be in joint deliberation. Let us turn to two open problems. First, one must explain why our intuitions might suggest that Smith in Case B fails to know. I think that a projectivist strategy similar to the one employed by Stanley is on the right track. What explains why it makes sense to Mary and to us to say that Smith does not know is that we evaluate Smith from Mary’s perspective. The reason for belief that Smith provides is insufficient for knowing given Mary’s practical situation. That this is merely projection would also


explain why we are less inclined to deny knowledge to Smith when the scenario is told from his perspective, as in Airport Case B*.

Airport Case B*
A fellow passenger asks Smith if he knows whether the plane will stop in Chicago. Smith replies, "I do. I just looked at my flight itinerary and there is a stop in Chicago." Nearby, Mary, for whom CFA are high, is eavesdropping on this conversation. Does Smith know that the plane will stop in Chicago?

I feel less inclined to deny Smith knowledge in Case B* than in Case B, although the situation is unchanged and merely described from a different perspective. Hence, a projectivist strategy promises to explain why we mistakenly tend to deny knowledge to Smith in Case B or to Bill in High Attributor–Low Subject Costs. The second problem is what I call knowledge laundering through epistemic closure.5 Mary, who might be an epistemologist and thus aware of the mechanisms in play, should not come to know that the plane will stop in Chicago after hearing Smith declare that he knows. As an epistemologist, Mary might recognize the pattern in play and thus come to know that Smith knows that p. But if Smith knows that p, then p is the case, so Mary can infer that p and thereby come to know that p. Let us call this process knowledge laundering through epistemic closure; it should not be a legitimate way to acquire knowledge. The idea that knowledge requires that one compute one's reason in a certain way, introduced in Chapter 6, Section 6.5, explains why my account does not license knowledge laundering through epistemic closure. Even when she is aware of all the relevant epistemic theories, Mary is in a situation in which she cannot come to know that Smith knows. Many more alternatives are in play in Mary's practical situation, alternatives that she knows Smith's reason cannot rule out. Therefore, when Mary computes the reason Smith has correctly, she cannot rationally come to hold that Smith knows. In fact, Mary's practical situation means that she has good reason to believe that Smith fails to know. This reason is misleading, but there is nothing unusual about misleading reasons for belief. Since Mary cannot count as knowing that Smith knows that p, she cannot come to know that p through knowledge laundering.
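The laundering inference that must be blocked can be displayed schematically. The notation is mine: K_m and K_s stand for Mary's and Smith's knowledge, respectively.

```latex
% Knowledge laundering through epistemic closure:
% Mary comes to know that Smith knows that p; knowledge is factive,
% so Smith's knowing that p entails p; by closure, Mary can infer p
% and thereby come to know that p.
\[
  K_m K_s p \;\wedge\; (K_s p \rightarrow p)
  \;\Longrightarrow\;
  K_m p
\]
```

On the account defended here, the inference never gets started: the first step fails, since Mary's practical situation prevents her from coming to know that Smith knows.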

5 MacFarlane (2005), Brueckner (2005), and Brown (2014a) all raise problems along these lines for PE-K.


Over the course of this section, we saw how Epistemic Communism can help PE-K, including my view, TPE, to deal with the problem of joint deliberation. Of course, the fate of my account is thereby tied to Epistemic Communism. I cannot further discuss and defend Epistemic Communism here. But the growing literature on Craig at least suggests that Epistemic Communism might not be an entirely unappealing idea. Hopefully, I have shown how it might be useful in enabling pragmatic encroachers to deal with the problem of joint deliberation.

8.4 PE-K and Epistemic Injustice

In this final section, I want to consider an objection to PE-K involving another phenomenon that has both an epistemic and a practical, that is, moral, dimension: epistemic injustice. Gerken (2019a) argues that PE-K cannot diagnose some paradigm cases of epistemic injustice. To make matters worse, PE-K even seems to vindicate certain epistemic judgments that one would characterize as epistemic injustices. I will first introduce the concept of epistemic injustice, then Gerken's cases that press this charge against PE-K, and finally I will explain how my account of PE-K handles these cases and Gerken's charge. In her seminal book, Epistemic Injustice: Power and the Ethics of Knowing, Fricker (2007) introduces and explores the concept of epistemic injustice. An epistemic injustice is characterized as a wrong done to someone in their capacity as a knower. Following Gerken, I will opt for a wider understanding of epistemic injustice, one that is not tied to knowledge but also applies to other epistemic standings, such as being justified in believing. The relevant class of epistemic injustice is that of testimonial injustice, which is defined as follows.

Testimonial Injustice
A prejudice causes a hearer to give a deflated level of credibility to a speaker's word.

By deflated credibility, Fricker (2007: 17) means that the hearer judges the speaker to be less credible than they actually are. This might manifest in different ways; we might for example say that the speaker does not know, or that they are not justified in believing, although they do in fact know, or are in fact justified in believing. What makes such judgments cases of epistemic injustice, as opposed to an innocent false epistemic evaluation, is that the judgment is due to a prejudice of the hearer. The prejudice might be based on gender, race, social group, class, and so on. While Fricker focuses on testimonial exchanges, the same form of epistemic injustice can


occur in nontestimonial cases in which somebody is given a deflated level of credibility by somebody with whom they are not in a testimonial exchange. So while the following cases do not involve testimony, they could still be cases of epistemic injustice in Fricker's sense. Let's consider the cases that Gerken holds PE-K cannot correctly diagnose as paradigm cases of epistemic injustice. I will go through them one by one, replying to each after it has been introduced. Gerken starts with a case pair that is somewhat similar to the TFB case I gave in Section 8.1.

Car Bet
Richie is extremely wealthy and has spent some of his money on a car park with more than 25 cars in it. One of his cars is old, rusty, and not worth much. Richie does not really need the car, but since there is ample space in his car park, he keeps it around. One day, Richie argues with a colleague over Peru's capital. Richie correctly remembers that it is Lima, but his colleague disagrees and offers to bet his much more expensive car against Richie's old car. Richie's memory is reliable, and he remains confident although he does not remember the source of his belief.
Brooke is extremely poor and has spent most of her money on a car because her work is more than 25 miles away. Her car is old, rusty, and not worth much. Brooke desperately needs the car since it is the only way for her to get to work and have an income. One day Brooke argues with a colleague over Peru's capital. Brooke correctly remembers that it is Lima, but her colleague disagrees and offers to bet his much more expensive car against Brooke's old car. Brooke's memory is reliable, and she remains confident although she does not remember the source of her belief. (Gerken 2019a: 5)

About this case, Gerken (2019a: 6) says that those who deny PE-K "may then assume that both Richie and Brooke know that Lima is the capital of Peru." And we are then supposed to imagine a subset of Brooke's case "in which Brooke is, because of her dire social circumstances, regarded as a non-knower" (Gerken 2019a: 6). Up to this point, Gerken does not say what he thinks pragmatic encroachment theories are committed to. So I will just explain how my account handles Car Bet. While Car Bet bears similarities to TFB, I do not think it is appropriate to revert to the same strategy in dealing with it. In this case, it really seems that Brooke has much higher CFA than Richie. And these higher CFA seem entirely due to her dire socioeconomic circumstances. Consequently, my theory commits me to saying that it is rational for Brooke to consider more alternatives than Richie, which leads to the familiar weakening of Brooke's reason for believing. As a consequence, Brooke's degree of justification will be lower than Richie's.

Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.008

Social Beings


This is what Gerken (2019a: 14) calls the hardline response. He thinks it is coherent but worries that pragmatic encroachers must then reclassify paradigm cases of epistemic injustice. But this worry is unfounded, because cases like Car Bet are not paradigm cases of epistemic injustice, at least not in Fricker's sense. If my account of TPE is true, then there is a difference in epistemic standing between Brooke and Richie. But it is not that this downgrading of Brooke is due to any form of prejudice, or that we judge Brooke to have a lower epistemic standing than she in fact has due to prejudice. My account assigns Brooke a different epistemic standing than Richie because of her CFA, which make it rational to consider more alternatives. But this rational demand does not count as prejudice. Neither Car Bet nor my treatment of it features prejudice, which is a necessary condition for epistemic injustice. Therefore, it is hard to see why one should classify Car Bet or my theory's verdict about it as a paradigm case of epistemic injustice, at least in Fricker's sense of the term.

To be clear: This does not mean that the situation in Car Bet is not deeply unjust. But what is unjust is the unequal distribution of resources between Brooke and Richie. And this unequal distribution of resources has further consequences regarding epistemic standing, at least according to the theory I endorse. Of course, we can object to the theory according to which such epistemic inequalities may arise. But I think the more fitting reaction is to object to the unequal distribution of resources between Brooke and Richie, which has many other unjust consequences, not just epistemic inequalities.

The second case Gerken offers is based on the first one.

Interview
Richie is extremely wealthy, but to practice his Spanish, he has applied for a job at a US company with a market in South America.
It is not particularly important to Richie that he gets the job since he just wants to practice his Spanish and can easily find another job opportunity. At the interview, Richie is asked what the capital of Peru is. Richie has a reliable memory and correctly remembers it is Lima, although he does not remember the source of his belief.

Brooke is extremely poor and has applied for a job at a US company with a market in South America in order to get some much-needed income. It is extremely important to Brooke that she get the job since she is in serious debt and cannot easily find another job opportunity. At the interview, Brooke is asked what the capital of Peru is. Brooke has a reliable memory and correctly remembers that it is Lima, although she does not remember the source of her belief. (Gerken 2019a: 6)

Gerken's concern is that PE-K will hold that Richie knows, while Brooke does not know; after all, using the common idea of stakes, the stakes are very low for Richie but extremely high for Brooke. Thus, Gerken assumes that pragmatic encroachment will fail at "diagnosing the relevant subset of Brooke cases as exemplifying discriminatory epistemic injustice" (Gerken 2019a: 6). The relevant subset here must mean those cases in which, due to prejudice, Brooke will be treated as having a lower epistemic status than she actually has.

But this charge will not stick, for reasons I have already given. My account of pragmatic encroachment assigns a lower epistemic standing to Brooke than to Richie. Of course, one can object to this, but then one will directly object to pragmatic encroachment, as opposed to objecting to pragmatic encroachment via epistemic injustice. And I have already granted that this difference between Richie and Brooke is in some sense deeply unjust, but the difference itself does not make for epistemic injustice, at least not in Fricker's sense. And I see no hurdle for pragmatic encroachers who want to diagnose epistemic injustice when it actually occurs to Brooke. Pragmatic encroachers can say that it is an epistemic injustice to assign Brooke a lower epistemic standing than she has according to encroachment theories, if this assignment is due to prejudice. To all those who assign Brooke a lower epistemic standing than Richie due solely to prejudice, pragmatic encroachers can still say that they got the verdict right, but for the wrong reason. And pragmatic encroachers can say that this is a kind of epistemic injustice, albeit in a different sense from Fricker's use of the term but, given the involvement of prejudice, in keeping with its spirit.

Gerken (2019a: 13f.) does consider this last reply, making the right judgment for the wrong reason, but assumes that pragmatic encroachers must then provide an explanation of why deeming Brooke to be a non-knower based on the stakes, but not on her poverty, is not unjust.
As the reader will know by now, I eschew the notion of "stakes," and I have clarified that where I have used it, it was meant to refer to the notion of CFA. Chapter 6 offers a detailed explanation of pragmatic encroachment in terms of CFA and how CFA can affect which alternatives rationally need to be considered. I will not rehash it here, but I take it that there was nothing inherently unjust in this explanation, as I do not think that the demands of rationality can be unjust. The explanation does leave room for the possibility that certain forms of inequality can also result in epistemic inequality. This means that the epistemic will not always be a level playing field for all, and I understand that this seems counterintuitive. But this is not a reason to reject a theory supported by a principled argument. It merely implies that wealth inequality, besides being the source of many other inequalities, can also be a source of epistemic inequality. This is certainly surprising, perhaps counterintuitive, but not so absurd as to call for outright rejection.

On my account, we are equals in the quest for knowledge as long as we are in joint deliberation with others. But we are not always in joint deliberation. I understand that many epistemologists will balk at the idea that we are not all equal when it comes to the pursuit of knowledge. But in other domains, it is not surprising that, given huge wealth differences, it is a myth that we all have the same opportunities. Perhaps it is time to acknowledge that the epistemic domain is not that different and that opportunities to know are not distributed equally.

Returning to Gerken, he still worries that oddities arise, at least when the assessment of Richie and Brooke is done by third parties. Here is what the interviewer, in light of the Interview case pair, might say, and what Gerken thinks about this.

"They have the same evidence. But she does not know since the stakes are high for her, and he does know since the stakes are low for him. So, I suggest we hire him." Here the interviewer neatly sets aside what pragmatic encroachers claim to be irrelevant (poverty) and focuses on what they claim to matter (the subject's stakes). So, the interviewer's reasoning should be perfectly fine by the lights of pragmatic encroachment. Yet it will strike many philosophers as epistemically unjust. I think this is something that pragmatic encroachers should acknowledge and address. (Gerken 2019a: 14)

I am happy to oblige. On my preferred version of TPE, if the very first sentence is true, then it is false that there can be sameness of evidence but a difference in knowledge (see also my comments on Conjunctive Ascription in Chapter 7, Section 7.2). Therefore, the interviewer is mistaken in their reasoning. If the interviewer were aware of my account of TPE and reasoned accordingly, they would have to acknowledge an epistemic difference between Brooke and Richie; but as I have pointed out, I do not believe that this amounts to an epistemic injustice, certainly not in Fricker's sense.

Let us consider Gerken's final test cases. These are the most dramatic, and every pragmatic encroacher should take them seriously.

Harassment (Base Case)
After years of pursuing employment and many rejected applications, Brooke has been hired as a secretary in a small company. Unfortunately, her supervisor has started to sexually harass her. Brooke considers reporting that she is being sexually harassed. However, she does not have hard evidence, and she suspects she might be laid off if she testifies against her supervisor.


Harassment Testimony
Although she desperately needs to keep the job, Brooke asserts that she has been sexually harassed by her supervisor.

Harassment Silence
Because she desperately needs to keep the job, Brooke remains silent about her supervisor's sexual harassment. (Gerken 2019a: 7)

Gerken generally contends that pragmatic encroachers will "have difficulty in accommodating this diagnosis [that Brooke knows she was sexually harassed] insofar as they regard Brooke as a non-knower due to the fact that the stakes are high for her" (Gerken 2019a: 7). First, I think behind this charge lies the mistaken assumption that pragmatic encroachers hold that one can never know when the stakes are high. But to the best of my knowledge, no pragmatic encroacher holds this position; they usually only say, colloquially, that it becomes harder to know when the stakes are high. On my account of TPE, even if CFA are high and Brooke must consider more alternatives, this will not necessarily mean that she fails to know; she may still have a reason strong enough to rule out the additional alternatives. Brooke's first-hand experience seems to me to be such a reason. If Brooke experienced harassment, this experience will suffice to rule out all relevant alternatives, so that her reason need not necessarily be weakened just by introducing further alternatives.6

Regarding Harassment Testimony, Gerken (2019a: 7) writes that Brooke might be seen as untrustworthy and as attempting to defraud her employer for financial gain to better her situation. Pragmatic encroachers can agree that this could happen, but it is unclear to me why that would be a problem for pragmatic encroachment. As I said, pragmatic encroachers can allow that Brooke knows she was harassed. And they can agree that everybody who discounts Brooke's testimony for the reasons Gerken mentions is indeed committing an epistemic injustice.

It is not entirely clear to me how Harassment Silence is a problem for PE-K, given that one can hold that Brooke knows. Her decision is complex, and there are clearly other reasons that have a bearing on it; her silence might be entirely rational given that Brooke knows her employer will not simply take her word and will require further evidence before sacking her supervisor, or that she will be seen as a liar.
This does not mean that Brooke's situation is not deeply unjust, including epistemically unjust. I just fail to see why a proponent of PE-K cannot say that it is.

6 I believe it can plausibly be said that in every context in which one knows that one was harassed, one is already in the highest kind of "high-stakes" context, so that one cannot construct a corresponding even "higher-stakes" case in which one then fails to know.


Let us sum up. For all the cases Gerken presents, I have pointed out a line available to proponents of PE-K that avoids the charge of epistemic injustice, at least in Fricker's sense. That does not mean that there isn't something deeply unjust happening in a case like Car Bet (or the others). According to my version (or maybe any version) of PE-K, Brooke has a lower epistemic standing than Richie. The economic inequality, which is unjust, leads to an epistemic inequality, which also seems unjust (though in a different sense than the standard conception of epistemic injustice). However, I do not think that this provides us with a reason to abandon PE-K, a theory that is supported by independent arguments. The problem here is not epistemic theory, but the unjust distribution of wealth. If anything, epistemic theory provides another reason to combat the underlying economic inequality that is at the root of the different epistemic standings of Brooke and Richie.


Epilogue

This book is probably too long and too short. In closing, I want to talk about four topics that have been largely absent. These are epistemic contextualism, the threshold problem, moral encroachment, and epistemic closure.

In the Prologue, I said that I would not directly compare my views on pragmatic encroachment to epistemic contextualism – the semantic thesis according to which the meaning, and hence the truth conditions, of the word "knows" vary with context. I admit that this may seem like a questionable decision. The bank cases, which I have discussed at length, were originally presented in the literature that argued for contextualism. If the bank cases can be handled by another epistemic theory, then one might surely think that this other view cannot be omitted from discussion.1 While I still feel some discomfort about this decision, I ultimately think this omission is defensible, for two reasons. First, my argument for pragmatic encroachment is not dependent on the bank cases that motivate contextualism. And, second, I believe that contextualism is not really in a position to avoid at least one form of pragmatic encroachment.

1 It has been pointed out that pragmatic encroachment and contextualism are not mutually exclusive views. Fantl and McGrath (2009: 55) assume that encroachers can help themselves to contextualism to handle certain third-person cases. McKenna (2013) argues for what he calls "interests contextualism." Given such hybrid views, it seems that there is no longer a strict opposition between them. Still, contextualism is often presented as a view that can avoid pragmatic encroachment, and hence, it is still a rival view.

Let's expand on the first reason. The argument I gave in Chapter 5 is a principle-based argument; the two central principles were argued for in Chapters 3 and 4. These entail the principle PE-K*, which suggests that pragmatic encroachment is true. The bank cases play no role in motivating PE-K*. While it follows from PE-K* and other plausible assumptions that the pattern of knowledge ascriptions found in the bank cases is true, this is not an argument in favor of contextualism. In later chapters, I relied on a variation of the bank cases to argue for pragmatic encroachment on reasons for belief. These cases might thus be said to motivate contextualism about "good reason for belief" instead of pragmatic encroachment. With this I concur, but I also pointed out data from experimental philosophy that suggest that what shifts is not the meaning, but the quality of the reasons themselves. I also believe that once a principle-based argument for pragmatic encroachment on knowledge is available, it would be surprising if similar intuitive shifts required a totally different response. When I used further variants of the bank cases, these cases did not carry any argumentative weight. They merely served as illustrations of how my account of TPE can handle more complex cases.

Let's expand on the second reason, and grant that the contextualist's thesis is on the table and must be considered. And let's further assume that even a technical term like "knowledge-level justification" is context-sensitive. The relevant principle PE-K* would then read as follows.

Contextualized PE-K*
If one's degree of justification for p in a deliberative context (DC) suffices for "knowledge-level justification," then the costs of error regarding p do not exceed the costs of further inquiry into whether p.

But such a contextualized principle does not seem to avoid pragmatic encroachment. According to Contextualized PE-K*, it is true that one cannot have "knowledge-level justification" when COE exceed CFI. That posits a practical condition on having "knowledge-level justification." A contextualist with intellectualist leanings might argue that what makes an ascription of knowledge-level justification true is the evidence that one possesses. But this does not suffice for saving intellectualism. For that, it would have to be the case that only the evidence makes it true that one has knowledge-level justification. But according to the previous principle, one can only have "knowledge-level justification" if practical factors are aligned in a certain way. In order to make the suggested defense work, one would have to argue that practical factors merely coincidentally covary with the evidence, but that ultimately practical factors are explanatorily inert. I admit that there is this theoretical possibility (as I already did in Chapter 5, Section 5.5). But I think it is hard to see how to make good on it, as it seems difficult to grasp how, in all the relevant cases, the evidence, or another overlooked truth-relevant factor, does all the explanatory work, while the practical factors merely coincide with that truth-relevant factor. Therefore, whether one has "knowledge-level justification" appears to depend at least in part on practical factors. This seems to be a result at least in the vein of pragmatic encroachment, and one that I am happy to accept should it turn out that contextualism is true.2

Several pragmatic encroachers, for example Fantl and McGrath (2009) and Hannon (2017), have used their version of pragmatic encroachment to solve the threshold problem for fallibilism. The threshold problem is that of determining, without arbitrariness, how high one's degree of justification must be in order to have knowledge-level justification. I have not argued that my account of pragmatic encroachment entails a solution to the threshold problem. Since I believe that no view is required to resolve all problems, it is no mark against my view if it turns out that it cannot solve the threshold problem.

There is now a burgeoning literature on moral encroachment, and I want to tentatively outline how my views relate to it. Moss (2018: 179) defines moral encroachment as the view that the epistemic status of a belief or credence can depend on its moral features. In a similar vein, Basu and Schroeder (2019: 201) write that "the epistemic norms governing belief must be sensitive to the moral requirements governing belief." Moral encroachment bears obvious similarities to the thesis of pragmatic encroachment, yet I believe it differs in important ways. Sticking just to belief, the aforementioned views assume that it is moral features of the belief itself that matter. At least on my account of pragmatic encroachment, it is not really features of the belief that influence its epistemic standing, but rather the consequences of a failed action that is based on the belief. This is not really a feature of the belief itself, which provides a contrast to the cases of racial profiling that feature prominently in the moral encroachment debate, such as believing that somebody is a waiter and not a guest at a party because of their skin color and statistical facts about waiters and their skin color.
While Fritz (2017) explores moral encroachment by exploiting parallels to standard pragmatic encroachment arguments and cases, the sort of racial profiling cases I have just hinted at seem quite different from the bank cases. Despite differences between pragmatic encroachment and moral encroachment, I believe that my account of TPE can be adapted to cover moral encroachment. In Chapter 6, footnote 8, I briefly mentioned that I am open to the possibility that one's practical situation is not restricted to matters of self-interest. Here, I had primarily in mind cases in which the consequences of the action could be considered moral, for example, children losing their home because of a missed mortgage payment.

2 See also Gerken (2017: 45) for a similar assessment of how certain semantic theories still fall into the category of pragmatic encroachment theories.


I think that my account can be tweaked to cover cases of racial profiling where the moral status of the belief matters, not any consequences of one's actions. In brief, we could hold that moral features of possible beliefs, not just elevated CFA, might oblige one to consider more alternatives in certain situations. In the racial profiling case, while statistical evidence may make it very likely that a person is a waiter, one must still consider that a person may not fit the statistical pattern, which is perfectly possible. Hence, one could say that one's reasons for believing that a person is a waiter will become weaker compared to nonmoral beliefs – for example, the belief that the champagne served is from Reims. Based on previous parties, I might have excellent statistical evidence that the champagne served is indeed from Reims. Hence, I may believe that it is. But even if it is statistically just as likely that a person is a waiter, my statistical evidence alone does not license believing, as this statistical reason for belief is too weak because there are further alternatives to be considered. This seems to dovetail nicely with ideas found in Moss (2018), Basu (2019), and Bolinger (2018).

However, I do not endorse this tweak to my account. I believe that once the consequences of believing, not just the consequences of actions, matter, one will not be able to avoid the sort of questionable Pascalian implications suggested in Worsnip (2020) that I would rather avoid. Certainly, more needs to be said about moral encroachment, but this is not the place to do it.

Finally, epistemic closure. By that I mean the principle that if one knows that p, and p entails q, then one is in a position to know that q (if one competently deduces q from p). Anderson and Hawthorne (2019b) argue that certain versions of pragmatic encroachment violate closure.
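In schematic terms, writing Kp for "one knows that p" and PKq for "one is in a position to know that q" (a notational gloss introduced here only for illustration; it is not used elsewhere in this book), the principle and its contrapositive read:

```latex
% Epistemic closure, with competent deduction assumed:
% knowing p, where p entails q, puts one in a position to know q.
\[ \bigl(Kp \wedge (p \vDash q)\bigr) \rightarrow PKq \]
% Contrapositive: not being in a position to know the entailed
% proposition q means failing to know the entailing proposition p.
\[ \bigl(\neg PKq \wedge (p \vDash q)\bigr) \rightarrow \neg Kp \]
```

The contrapositive form is the one at work in the cases that follow: if high CFA block knowledge of an entailed proposition, closure threatens to block knowledge of the entailing proposition as well.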
Because epistemic closure, despite its intuitive appeal, is a contested principle,3 its incompatibility with pragmatic encroachment need not necessarily be a strike against pragmatic encroachment. But it certainly would be preferable if pragmatic encroachment were compatible with closure. Anderson and Hawthorne give a number of different cases. They either assume that PE-K is committed to PAK, which my version of PE-K is not, or they rely on a certain reading of the notion of stakes which is different from the notion of CFA I rely upon. I will translate their two most pressing cases (those that do not presuppose PAK) into my discussion and then show how my view can handle them.

3 For recent denials of epistemic closure, see Yablo (2014), Sharon and Spectre (2017), and Alspector-Kelly (2019).


Bank or Aliens
It is Friday and you consider entering a bank. CFA are low and you know that the bank is open, as you remember that it was open last Saturday. Now consider the disjunction the bank is open or it is not the case that if you enter the bank, you will be tortured by aliens. The second disjunct is entailed by the proposition that the bank is open. Hence you know that it is not the case that if you enter the bank, you will be tortured by aliens. But this is a proposition for which CFA seem high, and hence you do not know. Therefore, pragmatic encroachment violates epistemic closure. (Based on Anderson and Hawthorne 2019b: 112)

My account of TPE can easily avoid this case. The notion of CFA, as introduced in Chapter 5, concerns what a reasonable person can expect to be the consequences of their actions. If we can really reasonably expect you to see torture by aliens as a possible consequence of your action, then your CFA are high. It is simply not possible that, at the same time, CFA are low when considering whether the bank is open. Therefore, Bank or Aliens does not outline a genuine possibility on my account. Here is another case, with a less far-fetched possibility, to further clarify my reply.

Puppy or Object
You have a high degree of justification that a box contains a puppy. You don't particularly care about puppies, so whether this is true doesn't really matter to you. If, however, there is an inanimate object in the box, that would be disastrous for you. According to PE-K, it would seem that you know there is a puppy in the box, and that you don't know there is not an inanimate object in the box, although the former entails the latter. Therefore, pragmatic encroachment violates epistemic closure. (Based on Anderson and Hawthorne 2019b: 112)

The notion of CFA I employed is indexed to a deliberative context, not just to propositions. If you are in a deliberative context in which CFA are high when you are wrong about the box not containing an inanimate object, then CFA are high for any other proposition that entails it, for example, that there is a dog in the box. Hence, if you fail to know that there is no inanimate object in the box due to elevated CFA, then you must also fail to know anything that entails this, such as that there is a puppy in the box. Therefore, whether epistemic closure is true or not, my account of pragmatic encroachment is compatible with it.

There are further interesting topics that I have not touched upon, not even in footnotes. For example, pragmatic encroachment on scientific knowledge, discussed in Miller (2014) and Gerken (2019b), or the experimental philosophy studies on evidence gathering (see Pinillos 2012). I am sure you could think of more. As I said, this book is probably too long and too short. But this is where it ends. Hopefully, this is also where the conversation starts.


Glossary

Consequences of Failed Action (CFA): The consequences of a failed action due to a false belief. For any deliberative context (DC) in which p speaks in favor of φ-ing, CFA are the outcome that a reasonable person would assign to φ-ing given not-p.

Costs of Further Inquiry (CFI): The costs of further inquiry that would provide new evidence for p that either rules out a previously uneliminated not-p possibility, several not-p possibilities, or at least reduces the probability of a not-p possibility; mainly sensitive to the availability of further evidence and the evidence that one already has.

Contextualist Justification Norm for Practical Reasoning (CJN): In the deliberative context (DC), it is rationally permissible for S to treat the proposition that p as a reason for action iff S's degree of justification for believing that p is adequate relative to DC.

Costs of Error (COE): The costs of error about p, sensitive to the consequences of failed action (CFA), but also to the availability of alternative courses of action.

Intellectualism (INT): The thesis that whether a true belief amounts to knowledge depends exclusively on truth-conducive factors.

Knowledge-Level Justification Sufficiency (KJS): If one's degree of justification for p in a deliberative context (DC) suffices for knowledge-level justification, then it is rationally permissible to treat p as a reason for action in DC.

Low Costs of Error (LCOE): A case in which the subject knows the relevant proposition and in which COE are low (my version of the standard low stakes case).

Low Stakes Good Reason (LGR): A variation of the bank case for which CFA are low and in which one has a good reason for believing that the bank is open on Saturday.



High Costs of Error (HCOE): A case in which the subject fails to know the relevant proposition and in which COE are high (my version of the standard high stakes case).

High Stakes Good Reason (HGR): A variation of the bank case for which CFA are high and in which one lacks a good reason for believing.

Practical Adequacy Condition on Knowledge (PAK): If one knows that p, then one's strength of epistemic position is practically adequate. One's epistemic position toward p is practically adequate only if the act that maximizes expected utility is the same as the act that maximizes expected utility conditional on p.

Pragmatic Encroachment on Justified Belief (PE-J): The thesis that whether a belief is justified can depend on practical factors.

Pragmatic Encroachment on Knowledge (PE-K): The thesis that whether a true belief amounts to knowledge does not only depend on truth-conducive factors but also on practical factors.

Pragmatic Encroachment about Knowledge-Level Justification (PE-K*): If one's degree of justification for p in a deliberative context (DC) suffices for knowledge-level justification, then, in DC, the costs of error regarding p do not exceed the costs of further inquiry into whether p.

Pragmatic Encroachment about Reasons for Believing (PE-R): The thesis that the strength of one's epistemic reasons for believing partly depends on practical factors.

The Reason–Knowledge Principle (RKP): It is appropriate to treat the proposition that p as a reason for acting iff you know that p.

Rational Permissibility (RP): If it is rationally permissible to treat p as a reason for action in a deliberative context (DC), then, in DC, the costs of error regarding p do not exceed the costs of further inquiry into whether p.

Shifting Thresholds View (STV): The thesis that the threshold for having knowledge-level justification is not constant but can shift relative to practical circumstances.

Total Pragmatic Encroachment (TPE): The thesis that one's degree of justification is sensitive to practical factors.


References

Ahmed, Arif, and Salow, Bernhard (2019). "Don't Look Now," British Journal for the Philosophy of Science, Vol. 70, No. 2, 327–50.
Alspector-Kelly, Marc (2019). Against Knowledge Closure (Cambridge, UK: Cambridge University Press).
Anderson, Charity (2015). "On the Intimate Relationship of Knowledge and Action," Episteme, Vol. 12, No. 3, 343–53.
Anderson, Charity, and Hawthorne, John (2019a). "Knowledge, Practical Adequacy, and Stakes," Oxford Studies in Epistemology, Vol. 6, 234–57.
Anderson, Charity, and Hawthorne, John (2019b). "Pragmatic Encroachment and Closure," in Brian Kim and Matthew McGrath (eds.), Pragmatic Encroachment in Epistemology (New York: Routledge), 107–15.
Audi, Robert (2001). "An Internalist Theory of Normative Grounds," Philosophical Topics, Vol. 29, 19–46.
Audi, Robert (2004). "Theoretical Rationality: Its Sources, Structure, and Scope," in Alfred Mele and Piers Rawling (eds.), The Oxford Handbook of Rationality (Oxford: Oxford University Press), 17–44.
Baril, Anne (2019). "Pragmatic Encroachment and Practical Reasons," in Brian Kim and Matthew McGrath (eds.), Pragmatic Encroachment in Epistemology (New York: Routledge), 56–68.
Basu, Rima (2019). "What We Owe to Each Other Epistemically," Philosophical Studies, Vol. 176, No. 4, 915–31.
Basu, Rima, and Schroeder, Mark (2019). "Doxastic Wronging," in Brian Kim and Matthew McGrath (eds.), Pragmatic Encroachment in Epistemology (New York: Routledge), 181–205.
Beddor, Bob (2020). "Certainty in Action," Philosophical Quarterly, Vol. 70, No. 281, 711–37.
Beebe, James (2012). "Social Functions of Knowledge Attributions," in Jessica Brown and Mikkel Gerken (eds.), Knowledge Ascriptions (Oxford: Oxford University Press), 220–42.
Benton, Matthew (2018). "Knowledge, Hope, and Fallibilism," Synthese [online version]. https://doi.org/10.1007/s11229-018-1794-8.
Berker, Selim (2008). "Luminosity Regained," Philosophers' Imprint, Vol. 8, No. 2, 1–22.


Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.011


Blome-Tillmann, Michael (2009). "Contextualism, Subject-Sensitive Invariantism, and the Interaction of 'Knowledge'-Ascriptions with Modal and Temporal Operators," Philosophy and Phenomenological Research, Vol. 79, No. 2, 315–31.
Bobier, Christopher (2017). "Hope and Practical Deliberation," Analysis, Vol. 77, No. 3, 495–97.
Bolinger, Renée (2018). "The Rational Impermissibility of Accepting (Some) Racial Generalizations," Synthese, Vol. 197, 2415–31.
Bovens, Luc (1999). "The Value of Hope," Philosophy and Phenomenological Research, Vol. 59, No. 3, 667–81.
Boyd, Kenneth (2015). "Assertion, Practical Reasoning, and Epistemic Separabilism," Philosophical Studies, Vol. 172, No. 7, 1907–27.
Bratman, Michael (1992). "Practical Reasoning and Acceptance in a Context," Mind, Vol. 101, No. 401, 1–16.
Brown, Jessica (2006). "Contextualism and Warranted Assertability Manoeuvres," Philosophical Studies, Vol. 130, No. 3, 407–35.
Brown, Jessica (2008). "Subject-Sensitive Invariantism and the Knowledge Norm for Practical Reasoning," Nous, Vol. 42, No. 2, 167–89.
Brown, Jessica (2012). "Practical Reasoning, Decision Theory, and Anti-Intellectualism," Episteme, Vol. 9, No. 1, 43–62.
Brown, Jessica (2014a). "Shifty Talk: Knowledge and Causation," Philosophical Studies, Vol. 167, No. 2, 183–99.
Brown, Jessica (2014b). "Impurism, Practical Reasoning, and the Threshold Problem," Nous, Vol. 47, No. 1, 179–92.
Brown, Jessica (2016). "Contextualism about Evidential Support," Philosophy and Phenomenological Research, Vol. 92, No. 2, 329–54.
Brown, Jessica (2018). Fallibilism: Evidence and Knowledge (Oxford: Oxford University Press).
Brueckner, Anthony (2005). "Contextualism, Hawthorne's Invariantism and Third Person Cases," Philosophical Quarterly, Vol. 55, No. 219, 315–18.
Buchak, Lara (2012). "Can It Be Rational to Have Faith?," in Jake Chandler and Victoria S. Harrison (eds.), Probability in the Philosophy of Religion (Oxford: Oxford University Press), 225–48.
Buckwalter, Wesley (2010). "Knowledge Isn't Closed on Saturdays," Review of Philosophy and Psychology, Vol. 1, No. 3, 395–406.
Buckwalter, Wesley, and Schaffer, Jonathan (2015). "Knowledge, Stakes, Mistakes," Nous, Vol. 49, No. 2, 201–34.
Burge, Tyler (2003). "Perceptual Entitlement," Philosophy and Phenomenological Research, Vol. 67, No. 3, 503–48.
Chignell, Andrew (2013). "Rational Hope, Moral Order, and the Revolution of the Will," in Eric Watkins (ed.), Divine Order, Human Order, and the Order of Nature (Oxford: Oxford University Press), 197–218.
Clarke, Roger (2013). "Belief Is Credence One (in Context)," Philosophers' Imprint, Vol. 13, No. 11, 1–18.
Cohen, Stewart (1999). "Contextualism, Skepticism, and the Structure of Reasons," Philosophical Perspectives, Vol. 13, 57–89.


Cohen, Stewart (2012). "Does Practical Rationality Constrain Epistemic Rationality?," Philosophy and Phenomenological Research, Vol. 85, No. 2, 447–55.
Comesaña, Juan (2013). "Epistemic Pragmatism: An Argument against Moderation," Res Philosophica, Vol. 90, No. 2, 237–60.
Craig, Edward (1990). Knowledge and the State of Nature (Oxford: Oxford University Press).
Dancy, Jonathan (2000). Practical Reality (Oxford: Oxford University Press).
Dancy, Jonathan (2004). Ethics without Principles (Oxford: Oxford University Press).
Das, Nilanjan (2020). "The Value of Biased Information," The British Journal for the Philosophy of Science [online version]. https://doi.org/10.1093/bjps/axaa003.
Davies, Wayne (2007). "Knowledge Claims and Context: Loose Use," Philosophical Studies, Vol. 132, No. 3, 395–438.
Day, J. P. (1969). "Hope," American Philosophical Quarterly, Vol. 6, No. 2, 89–102.
DeRose, Keith (1991). "Epistemic Possibilities," The Philosophical Review, Vol. 100, No. 4, 581–605.
DeRose, Keith (1992). "Contextualism and Knowledge Attributions," Philosophy and Phenomenological Research, Vol. 52, No. 4, 913–29.
DeRose, Keith (2009). The Case for Contextualism (Oxford: Oxford University Press).
Dogramaci, Sinan (2012). "Reverse Engineering Epistemic Evaluations," Philosophy and Phenomenological Research, Vol. 84, No. 3, 513–30.
Dogramaci, Sinan (2015). "Communist Conventions for Deductive Reasoning," Nous, Vol. 49, No. 4, 776–99.
Dougherty, Trent, and Rysiew, Patrick (2009). "Fallibilism, Epistemic Possibility, and Concessive Knowledge Attributions," Philosophy and Phenomenological Research, Vol. 78, No. 1, 123–32.
Dretske, Fred (1970). "Epistemic Operators," Journal of Philosophy, Vol. 67, No. 24, 1007–23.
Eaton, Daniel, and Pickavance, Timothy (2015). "Evidence against Pragmatic Encroachment," Philosophical Studies, Vol. 172, No. 12, 3135–43.
Egan, Andy, and Weatherson, Brian (2011). Epistemic Modality (Oxford: Oxford University Press).
Fantl, Jeremy, and McGrath, Matthew (2002). "Evidence, Pragmatics, and Justification," The Philosophical Review, Vol. 111, No. 1, 67–94.
Fantl, Jeremy, and McGrath, Matthew (2007). "On Pragmatic Encroachment in Epistemology," Philosophy and Phenomenological Research, Vol. 75, No. 3, 558–89.
Fantl, Jeremy, and McGrath, Matthew (2009). Knowledge in an Uncertain World (Oxford: Oxford University Press).
Fantl, Jeremy, and McGrath, Matthew (2012a). "Arguing for Shifty Epistemology," in Jessica Brown and Mikkel Gerken (eds.), Knowledge Ascriptions (Oxford: Oxford University Press), 55–74.
Fantl, Jeremy, and McGrath, Matthew (2012b). "Replies to Cohen, Neta, and Reed," Philosophy and Phenomenological Research, Vol. 85, No. 2, 473–90.


Fassio, Davide (2017). "Is There an Epistemic Norm for Practical Reasoning?," Philosophical Studies, Vol. 174, No. 9, 2137–66.
Feltz, Adam, and Zarpentine, Chris (2010). "Do You Know More When It Matters Less?," Philosophical Psychology, Vol. 23, No. 5, 683–706.
Fine, Kit (2001). "The Question of Realism," Philosophers' Imprint, Vol. 1, No. 1, 1–30.
Fricker, Miranda (2007). Epistemic Injustice: Power and the Ethics of Knowing (Oxford: Oxford University Press).
Friedman, Jane (2013). "Rational Agnosticism and Degrees of Belief," Oxford Studies in Epistemology, Vol. 4, 57–81.
Fritz, James (2017). "Pragmatic Encroachment and Moral Encroachment," Pacific Philosophical Quarterly, Vol. 98, No. S1, 643–61.
Ganson, Dorit (2008). "Evidentialism and Pragmatic Constraints on Outright Belief," Philosophical Studies, Vol. 139, No. 3, 441–58.
Gao, Jie (2019). "Credal Pragmatism," Philosophical Studies, Vol. 176, No. 6, 1595–1617.
Gerken, Mikkel (2011). "Warrant and Action," Synthese, Vol. 178, No. 3, 529–47.
Gerken, Mikkel (2012a). "Discursive Justification and Skepticism," Synthese, Vol. 189, No. 2, 373–94.
Gerken, Mikkel (2012b). "On the Cognitive Bases for Knowledge Ascriptions," in Jessica Brown and Mikkel Gerken (eds.), Knowledge Ascriptions (Oxford: Oxford University Press), 140–70.
Gerken, Mikkel (2013). "Epistemic Focal Bias," Australasian Journal of Philosophy, Vol. 91, No. 1, 41–61.
Gerken, Mikkel (2015). "The Roles of Knowledge Ascriptions in Epistemic Assessment," European Journal of Philosophy, Vol. 23, No. 1, 141–61.
Gerken, Mikkel (2017). On Folk Epistemology (Oxford: Oxford University Press).
Gerken, Mikkel (2019a). "Pragmatic Encroachment and the Challenge from Epistemic Injustice," Philosophers' Imprint, Vol. 19, No. 15, 1–19.
Gerken, Mikkel (2019b). "Pragmatic Encroachment on Scientific Knowledge?," in Brian Kim and Matthew McGrath (eds.), Pragmatic Encroachment in Epistemology (New York: Routledge), 116–40.
Gibbons, John (2006). "Access Externalism," Mind, Vol. 115, No. 457, 19–39.
Goldman, Alvin (1999). Knowledge in a Social World (Oxford: Oxford University Press).
Good, Irving John (1967). "On the Principle of Total Evidence," British Journal for the Philosophy of Science, Vol. 17, No. 4, 319–21.
Greco, Daniel (2013). "Probability and Prodigality," Oxford Studies in Epistemology, Vol. 4, 82–107.
Greco, Daniel (2015). "How I Learned to Stop Worrying and Love Probability 1," Philosophical Perspectives, Vol. 29, 179–201.
Greco, Daniel, and Hedden, Brian (2016). "Uniqueness and Metaepistemology," Journal of Philosophy, Vol. 113, No. 8, 365–95.
Grimm, Stephen (2011). "On Intellectualism in Epistemology," Mind, Vol. 120, No. 479, 705–33.


Grimm, Stephen (2015). "Knowledge, Practical Interests, and Rising Tides," in David K. Henderson and John Greco (eds.), Epistemic Evaluation (Oxford: Oxford University Press), 117–37.
Hamblin, C. L. (1958). "Questions," Australasian Journal of Philosophy, Vol. 36, No. 3, 159–68.
Hannon, Michael (2013). "The Practical Origins of Epistemic Contextualism," Erkenntnis, Vol. 78, No. 4, 899–919.
Hannon, Michael (2017). "A Solution to Knowledge's Threshold Problem," Philosophical Studies, Vol. 174, No. 3, 607–29.
Harman, Gilbert (1999). Reasoning, Meaning, and Mind (Oxford: Oxford University Press).
Harris, Adam, Corner, Adam, and Hahn, Ulrike (2009). "Estimating the Probability of Negative Events," Cognition, Vol. 110, No. 1, 51–64.
Harsanyi, John (1985). "Acceptance of Empirical Statements: A Bayesian Theory without Cognitive Utilities," Theory and Decision, Vol. 18, No. 1, 1–30.
Hawthorne, John (2004). Knowledge and Lotteries (Oxford: Oxford University Press).
Hawthorne, John, and Stanley, Jason (2008). "Knowledge and Action," Journal of Philosophy, Vol. 105, No. 10, 571–90.
Hieronymi, Pamela (2005). "The Wrong Kind of Reason," Journal of Philosophy, Vol. 102, No. 9, 437–57.
Holton, Richard (1997). "Some Telling Examples: A Reply to Tsohatzidis," Journal of Pragmatics, Vol. 28, No. 5, 625–28.
Holton, Richard (2008). "Partial Belief, Partial Intention," Mind, Vol. 117, No. 465, 27–58.
Holton, Richard (2014). "Intention As a Model for Belief," in Manuel Vargas and Gideon Yaffe (eds.), Rational and Social Agency: The Philosophy of Michael Bratman (Oxford: Oxford University Press), 12–37.
Howell, Robert (2005). "A Puzzle for Pragmatism," American Philosophical Quarterly, Vol. 42, No. 2, 131–36.
Ichikawa, Jonathan Jenkins (2012). "Knowledge Norms and Acting Well," Thought, Vol. 1, No. 1, 49–55.
Ichikawa, Jonathan Jenkins, Jarvis, Benjamin, and Rubin, Katherine (2012). "Pragmatic Encroachment and Belief-Desire Psychology," Analytic Philosophy, Vol. 53, No. 4, 327–43.
Jackson, Alexander (2012). "Two Ways to Put Knowledge First," Australasian Journal of Philosophy, Vol. 90, No. 2, 353–69.
Jackson, Elizabeth (2019). "How Belief-Credence Dualism Explains Away Pragmatic Encroachment," The Philosophical Quarterly, Vol. 69, No. 276, 511–33.
James, William (1897). The Will to Believe, and Other Essays in Popular Philosophy (New York: Longmans, Green, and Co.).
Kelly, Thomas (2014). "Evidence," Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.). https://plato.stanford.edu/archives/win2016/entries/evidence.
Knobe, Joshua, and Schaffer, Jonathan (2012). "Contrastive Knowledge Surveyed," Nous, Vol. 46, No. 4, 675–708.


Kolodny, Niko, and Brunero, John (2018). "Instrumental Rationality," Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.). http://plato.stanford.edu/archives/sum2015/entries/rationality-instrumental/.
Kruglanski, Arie, and Mayseless, Ofra (1987). "Motivational Effects in the Social Comparison of Opinions," Journal of Personality and Social Psychology, Vol. 53, No. 5, 834–42.
Lackey, Jennifer (2010). "Acting on Knowledge," Philosophical Perspectives, Vol. 24, 361–82.
Lackey, Jennifer (2016). "Assertion and Expertise," Philosophy and Phenomenological Research, Vol. 89, No. 1, 509–17.
Lawlor, Krista (2020). "Knowledge and Reasonableness," Synthese [online version]. https://doi.org/10.1007/s11229-020-02803-z.
Levy, Neil (2016). "Have I Turned the Stove Off? Explaining Everyday Anxiety," Philosophers' Imprint, Vol. 16, No. 2, 1–10.
Littlejohn, Clayton (2009). "Must We Act Only on What We Know?," Journal of Philosophy, Vol. 106, No. 8, 463–73.
Littlejohn, Clayton (2012). Justification and the Truth-Connection (Cambridge, UK: Cambridge University Press).
Littlejohn, Clayton (2014). "The Unity of Reason," in Clayton Littlejohn and John Turri (eds.), Epistemic Norms: New Essays on Belief, Action, and Assertion (Oxford: Oxford University Press), 135–54.
Locke, Dustin (2014). "Knowledge Norms and Assessing Them Well," Thought, Vol. 3, No. 1, 80–89.
Locke, Dustin (2015). "Practical Certainty," Philosophy and Phenomenological Research, Vol. 90, No. 1, 72–95.
Locke, Dustin (2017). "Implicature and Non-Local Pragmatic Encroachment," Synthese, Vol. 194, No. 2, 631–54.
Lutz, Matt (2013). "The Pragmatics of Pragmatic Encroachment," Synthese, Vol. 191, No. 8, 1717–40.
MacFarlane, John (2005). "Knowledge Laundering: Testimony and Sensitive Invariantism," Analysis, Vol. 62, No. 2, 132–38.
Martin, Adrienne (2011). "Hopes and Dreams," Philosophy and Phenomenological Research, Vol. 83, No. 1, 148–73.
Martin, Adrienne (2014). How We Hope: A Moral Psychology (Princeton: Princeton University Press).
May, Joshua, Sinnott-Armstrong, Walter, Hull, Jay G., and Zimmerman, Aaron (2010). "Practical Interests, Relevant Alternatives, and Knowledge Attributions: An Empirical Study," Review of Philosophy and Psychology, Vol. 1, No. 2, 265–73.
McCormick, Miriam Schleifer (2017). "Rational Hope," Philosophical Explorations, Vol. 20, No. 1, 127–41.
McGlynn, Aidan (2013). "Believing Things Unknown," Nous, Vol. 47, No. 2, 385–407.
McGrath, Matthew (2017). "Pragmatic Encroachment – Its Problems Are Your Problems!," in Conor McHugh, Jonathan Way, and Daniel Whiting (eds.), Normativity: Epistemic and Practical (Oxford: Oxford University Press), 162–78.


McGrath, Matthew (2018). "Defeating Pragmatic Encroachment," Synthese, Vol. 195, No. 7, 3051–64.
McKenna, Robin (2013). "Epistemic Contextualism: A Normative Approach," Pacific Philosophical Quarterly, Vol. 94, No. 1, 101–23.
Meirav, Ariel (2009). "The Nature of Hope," Ratio, Vol. 22, No. 2, 216–33.
Miller, Boaz (2014). "Science, Values, and Pragmatic Encroachment on Knowledge," European Journal of Philosophy of Science, Vol. 4, No. 2, 253–70.
Moss, Sarah (2018). "Moral Encroachment," Proceedings of the Aristotelian Society, Vol. 118, No. 2, 177–205.
Mueller, Andy (2017a). "How Does Epistemic Rationality Constrain Practical Rationality?," Analytic Philosophy, Vol. 58, No. 2, 139–55.
Mueller, Andy (2017b). "Pragmatic or Pascalian Encroachment? A Problem for Schroeder's Explanation of Pragmatic Encroachment," Logos & Episteme, Vol. 8, No. 2, 235–41.
Mueller, Andy (2019). "Hopeless Practical Deliberation – Reply to Bobier," Analysis, Vol. 79, No. 4, 629–63.
Mueller, Andy (2021). "The Knowledge Norm of Apt Practical Reasoning," Synthese [online version]. https://doi.org/10.1007/s11229-021-03030-w.
Mueller, Andy, and Ross, Jacob (2017). "Knowledge Dethroned," Analytic Philosophy, Vol. 58, No. 4, 283–96.
Nagel, Jennifer (2008). "Knowledge Ascriptions and the Psychological Consequences of Changing Stakes," Australasian Journal of Philosophy, Vol. 86, No. 2, 279–94.
Nagel, Jennifer (2010). "Knowledge Ascriptions and the Psychological Consequences of Thinking about Error," The Philosophical Quarterly, Vol. 60, No. 239, 286–306.
Neta, Ram (2003). "Contextualism and the Problem of the External World," Philosophy and Phenomenological Research, Vol. 66, No. 1, 1–31.
Neta, Ram (2009). "Treating Something As a Reason for Action," Nous, Vol. 43, No. 4, 684–99.
Neta, Ram (2012). "The Case against Purity," Philosophy and Phenomenological Research, Vol. 85, No. 2, 456–64.
Norby, Aaron (2015). "Uncertainty without All the Doubt," Mind & Language, Vol. 30, No. 1, 70–94.
Pace, Michael (2011). "The Epistemic Value of Moral Considerations: Justification, Moral Encroachment, and James' 'Will to Believe'," Nous, Vol. 45, No. 2, 239–68.
Parfit, Derek (2011). On What Matters (Oxford: Oxford University Press).
Pinillos, N. Angel (2012). "Knowledge, Experiments, and Practical Interests," in Jessica Brown and Mikkel Gerken (eds.), Knowledge Ascriptions (Oxford: Oxford University Press), 192–219.
Raz, Joseph (2011). From Normativity to Responsibility (Oxford: Oxford University Press).
Reed, Baron (2010). "A Defense of Stable Invariantism," Nous, Vol. 44, No. 4, 224–44.


Reed, Baron (2012). "Resisting Encroachment," Philosophy and Phenomenological Research, Vol. 85, No. 2, 465–72.
Reed, Baron (2013). "Fallibilism, Epistemic Possibility, and Epistemic Agency," Philosophical Issues, Vol. 23, 40–69.
Roeber, Blake (2014). "Minimalism and the Limits of Warranted Assertability Maneuvers," Episteme, Vol. 11, No. 3, 245–60.
Roeber, Blake (2018). "The Pragmatic Encroachment Debate," Nous, Vol. 52, No. 1, 171–95.
Roeber, Blake (2020). "How to Argue for Pragmatic Encroachment," Synthese, Vol. 197, 2649–64.
Ross, Jacob, and Schroeder, Mark (2014). "Belief, Credence, and Pragmatic Encroachment," Philosophy and Phenomenological Research, Vol. 88, No. 2, 259–88.
Rubin, Katherine (2015). "Total Pragmatic Encroachment and Epistemic Permissiveness," Pacific Philosophical Quarterly, Vol. 96, No. 1, 12–38.
Russell, Gillian, and Doris, John (2008). "Knowledge by Indifference," Australasian Journal of Philosophy, Vol. 86, No. 3, 429–37.
Rysiew, Patrick (2001). "The Context-Sensitivity of Knowledge Attributions," Nous, Vol. 35, No. 4, 477–514.
Rysiew, Patrick (2007). "Speaking of Knowing," Nous, Vol. 41, No. 4, 627–62.
Savage, Leonard (1954). The Foundations of Statistics (New York: John Wiley & Sons).
Schaffer, Jonathan (2006). "The Irrelevance of the Subject: Against Subject-Sensitive Invariantism," Philosophical Studies, Vol. 127, No. 1, 87–107.
Schroeder, Mark (2007). Slaves of the Passions (Oxford: Oxford University Press).
Schroeder, Mark (2012). "Stakes, Withholding and Pragmatic Encroachment on Knowledge," Philosophical Studies, Vol. 160, No. 2, 265–85.
Schroeder, Mark (2017). "The Epistemic Consequences of Forced Choice," Logos & Episteme, Vol. 8, No. 3, 365–74.
Schroeder, Mark (2018). "Rational Stability under Pragmatic Encroachment," Episteme, Vol. 15, No. 3, 297–312.
Sharon, Assaf, and Spectre, Levi (2017). "Evidence and the Openness of Knowledge," Philosophical Studies, Vol. 174, No. 4, 1001–37.
Shin, Joseph (2014). "Time Constraints and Pragmatic Encroachment on Knowledge," Episteme, Vol. 11, No. 2, 157–80.
Simion, Mona (2018). "No Epistemic Norm for Action," American Philosophical Quarterly, Vol. 55, No. 3, 231–38.
Slovic, Paul, Fischhoff, B., and Lichtenstein, Sarah (1982). "Facts vs Fears," in Daniel Kahneman, Paul Slovic, and Amos Tversky (eds.), Judgment under Uncertainty: Heuristics and Biases (Cambridge, UK: Cambridge University Press), 463–89.
Smithies, Declan (2012). "The Normative Role of Knowledge," Nous, Vol. 46, No. 2, 265–88.
Snedegar, Justin (2013). "Reason Claims and Contrastivism about Reasons," Philosophical Studies, Vol. 166, No. 2, 231–42.


Sripada, Chandra, and Stanley, Jason (2012). "Empirical Tests of Interest-Relative Invariantism," Episteme, Vol. 9, No. 1, 3–26.
Staffel, Julia (2019). "How Do Beliefs Simplify Reasoning?," Nous, Vol. 53, No. 4, 937–62.
Stanley, Jason (2005). Knowledge and Practical Interests (Oxford: Oxford University Press).
Stanley, Jason (2015). How Propaganda Works (Princeton: Princeton University Press).
Tang, Weng Hong (2015). "Belief and Cognitive Limitations," Philosophical Studies, Vol. 172, No. 1, 249–60.
Way, Jonathan, and Whiting, Daniel (2016). "If You Justifiably Believe That You Ought to Φ, Then You Ought to Φ," Philosophical Studies, Vol. 173, No. 7, 1873–95.
Weatherson, Brian (2005). "Can We Do without Pragmatic Encroachment?," Philosophical Perspectives, Vol. 19, 417–43.
Weatherson, Brian (2011). "Defending Interest-Relative Invariantism," Logos & Episteme, Vol. 2, No. 4, 591–609.
Weatherson, Brian (2012). "Knowledge, Bets, and Interests," in Jessica Brown and Mikkel Gerken (eds.), Knowledge Ascriptions (Oxford: Oxford University Press), 75–103.
Wedgwood, Ralph (2008). "Contextualism about Justified Belief," Philosophers' Imprint, Vol. 8, No. 9, 1–20.
Weisberg, Jonathan (2013). "Knowledge in Action," Philosophers' Imprint, Vol. 13, No. 22, 1–23.
Whiting, Daniel (2014a). "Keep Things in Perspective: Reasons, Rationality and the A Priori," Journal of Ethics & Social Philosophy, Vol. 8, No. 1, 1–22.
Whiting, Daniel (2014b). "Reasons for Belief, Reasons for Action, the Aim of Belief, and the Aim of Action," in Clayton Littlejohn and John Turri (eds.), Epistemic Norms (Oxford: Oxford University Press), 219–38.
Williamson, Timothy (2000). Knowledge and Its Limits (Oxford: Oxford University Press).
Williamson, Timothy (2005). "Contextualism, Subject-Sensitive Invariantism and Knowledge of Knowledge," The Philosophical Quarterly, Vol. 55, No. 219, 213–35.
Williamson, Timothy (2014). "Very Improbable Knowing," Erkenntnis, Vol. 79, No. 5, 971–99.
Worsnip, Alex (2015a). "Two Kinds of Stakes," Pacific Philosophical Quarterly, Vol. 96, No. 3, 307–24.
Worsnip, Alex (2015b). "Possibly False Knowledge," Journal of Philosophy, Vol. 115, No. 2, 225–46.
Worsnip, Alex (2020). "Can Pragmatists Be Moderate?," Philosophy and Phenomenological Research [online version]. https://doi.org/10.1111/phpr.12673.
Yablo, Stephen (2014). Aboutness (Princeton: Princeton University Press).
Yalcin, Seth (2007). "Epistemic Modals," Mind, Vol. 116, No. 464, 983–1026.


Index

acceptance, 7 access, 64, 93 actionable, 122, See permissibility, rational aggregation, 71, 133 Ahmed, Arif and Salow, Bernhard, 131 Airport case, 112, 200, 204 Alspector-Kelly, Marc, 215 alternatives, 144–48, 152, 153–55, 160–65 fine-grained, 161 in play, 148 Anderson, Charity and Hawthorne, John, 78, 83, 126, 189, 191, 215 answer, possible, 144, See also alternatives anti-intellectualism. See pragmatic encroachment anxiety, 100 appraisals of actions, xiv, 25–26, 58, 84 appropriate, epistemically, xiv, 28, See permissibility, rational argument-from-cases strategy, 114, 177 argument-from-principles strategy, 114, 120, 177 assertion, 35, 37, 47 knowledge norm, 47 assessments, evaluative, 147 Asymmetric Reward/Punishment case, 88 Audi, Robert, 5, 17, 93 availability, 43 of alternative actions, 72 of further evidence, 71 availability heuristic, 99 awareness constraint, 17 bad habit view, 100 balancing, 69, 73 bank cases, 74, 112, 141 Bank Hours case, 65 Bank or Aliens case, 216 Baril, Anne, xix barn façade county, 174 Basic Idea, 145 basis of reasoning, 7, 20 Basu, Rima and Schroeder, Mark, 214, 215

Bayesian reasoning, 150, See also rationality Beddor, Bob, 82 Beebe, James, 199 beings of thought and action, x, 60, 109, 156, 193 belief credence 1, 154 degree. See credence function, 149–51, See also closing off, uncertainty higher-order, 46, 52 normative, 5, 16, 53 rationality. See rationality, justification, epistemic belief–desire psychology, 188 beliefs, content, 6 Benton, Matthew, 37, 46 Bergson, Henri, x Berker, Selim, 93 bias, 113 blame, xiii Blome-Tillmann, Michael, 173 Bobier, Christopher, 34 Bolinger, Renée, 215 Bovens, Luc, 36 Boyd, Kenneth, xvi Bratman, Michael, 7 Brown, Jessica, 56, 61–62, 83–85, 113, 120, 121, 165, 190, 204 Brueckner, Anthony, 204 Buchak, Lara, 131 Buckwalter, Wesley, 112 Buckwalter, Wesley and Schaffer, Jonathan, 152 Burge, Tyler, 62 Car Bet case, 206 certainty, xvii, 57, 92, 122 practical. See Practical Certainty Chignell, Andrew, 40 circularity, 68 Clarke, Roger, 96 Clifford, W.K., 126


Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.012


closing off, uncertainty, 150 cognitive limitations, 149–50 cognitive reach, 16–20 Cohen, Stewart, 112, 120, 200 Coin Flip Stop case, 162 Comesaña, Juan, 162 common ground, conversational, 149 Communism, Epistemic, 200 concessive knowledge attributions (CKA), 86 conditional orders, xiii consequences of failed action (CFA), 130, 152, 153–55, 218 ignorant/apparent, 168 relevance approach, 191 unity approach, 190 context deliberative context, 62, 117 nondeliberative context, 117, 159 contextualism about “good reasons for belief,” 142 about “knows,” 56, 112, 212–14 about epistemic norms for practical reasoning, 55–61, 77–80 about evidential support, 165 contextualist justification norm for practical reasoning (CJN), 63, 77, 218 adequate justification (AJ), 74, 77 contrastivism, 165 costs of error (COE), 69, 72–77, 131–34, 218 costs of further inquiry (CFI), 69, 70–72, 73–77, 131–34, 191 artificial inflation, 191 Craig, Edward, 199 credence, xvi, 149, 150, 155, 181–82, See also justification, epistemic function of credence, 151 Dancy, Jonathan, xvi, 7, 10, 143 Das, Nilanjan, 131 Davies, Wayne, 113 Day, J.P., 36 defeater, 167 degree of confidence. See credence degree of justification. See justification,epistemic maximal, 66, 70 deliberative context. See context dependence claim conditional, 14 grounding, 14, 110, See also grounding DeRose, Keith, 86, 112 desire, 5 destabilizing trios, 191 Dinner case, 189 dogmatism, 151

Dogramaci, Sinan, 204 double-checking, 71, 94 Dougherty, Trent and Rysiew, Patrick, 86 doxastic attitudes belief. See belief credence. See credence shiftiness, 181 withholding. See withholding Dretske, Fred, 145 Dutch Book case, 182–86 E=K, 175 Eaton, Daniel and Pickavance, Timothy, 126 eavesdropping, 202 Egan, Andy and Weatherson, Brian, 41 encroachment. See epistemic encroachment, pragmatic encroachment, moral encroachment ends, 31 environment, 9, 152, 159 epistemic closure, 204, 215–16 epistemic encroachment (EE), xviii, 4, 14, 20–24 epistemic filter, xvii epistemic injustice, 205–11 testimonial injustice, 205 epistemic modals, 39, 41, 87 epistemic norm for ends, 31, 33 for practical reasoning, xii, xiv–xv, 3, 20–24 for rational hope, 35, 42–47 epistemic vocabulary, 57 equivalence thesis (ET), 52, 178 evidence, xx, 70, 142, 156 as practical reason, 19 quality, 143, See also strength of reason for belief sameness, 174, 209 statistical, 46, 215 sufficient for knowledge, 175 excuse-maneuver. See knowledge norm for practical reasoning expected utility theory (EUT). See rationality experimental philosophy, 112, 119, 143, 217 explanatory gap, 135 factivity question, 61 fallibilism, 37, 84, See also fallibilism versus infallibilism fallibilism versus infallibilism, xix, 38–39, 46 falsehoods, xv, 29, 44 Fantl, Jeremy and McGrath, Matthew, xii, 51–52, 59, 120–23, 176 Fassio, Davide, 24–30 features question, 60 Feltz, Adam and Zarpentine, Chris, 112


Ferris Wheel case (FW), 183 Fine, Kit, 15 first person view. See perspective Slovic, Paul, 167 Forced Choice case (FC), 170 Fricker, Miranda, 205 Friedman, Jane, 97 Fritz, James, 214 fundamental, 73, 131, 142 Ganson, Dorit, 113 Gao, Jie, 181 genuine constituent explanation. See pragmatic encroachment Gerken, Mikkel, xvi, 52, 57, 62–65, 113, 116, 205–11 Gettier case, xv, 44 Gibbons, John, 17 Goldman, Alvin, 200 Good, Irving John, 131 Good's theorem, 131 Greco, Daniel, 149, 182 Greco, Daniel and Hedden, Brian, 94 Grimm, Stephen, 136, 137, 186, 202 grounding, 15, 110, 115 group knowledge, 198 Hamblin, C.L., 144 Hannon, Michael, 136, 137, 160, 196 Harassment cases, 209 Harman, Gilbert, 5 Harris, Adam, 99 Harsanyi, John, 150, 152 Hawthorne, John, xii, 99, 196, See also Anderson, Charity and Hawthorne, John Hawthorne, John and Stanley, Jason, xii–xiv, 59, 96 Hieronymi, Pamela, 143 High Attributor–Low Subject Costs case, 197 High Costs of Error (HCOE), 111, 219 High Stakes Good Reason (HGR), 141, 219 Holton, Richard, 100, 149, 152 hope and knowledge, 37 and practical reasoning, 33–34, 50 epistemic norm. See epistemic norm motivational force, 48 rationality, 35 standard account of, 35–37 Hope–Action Link (HAL), 33 horse/cart objection, 187 Howell, Robert, 134 Hume, David, 31 Humean account of reasons for action, xix


hypotheticals, 27 Ichikawa, Jonathan Jenkins, 82, 188 impermissibility intuition, 84, 98, 105, 125 implicature. See pragmatics improbable knowledge. See knowledge impurism. See pragmatic encroachment incoherence, 39, 88, 91 incommensurability, 71, 133 indicator property, 135, 169 individualist epistemology, 200 inequality, 193, 207, 211 infallibilism, 45, 84, See also fallibilism versus infallibilism informant function infelicity, 86–87 innocent until proven guilty approach, 83 inquiry, 143 Insufficient Means Principle (IMP), 9 intellectualism (INT), 110, 218 intention, 33, 186 internalism versus externalism about justification, xix, 17, 44, 166–68 about reasons for action, xix Interview case, 207 introspection, 93 invariantism about epistemic norms traditional, 56, 113 irrationality epistemic, 3, 12–14, 30 practical, 7, 12–14, 20, 30 Jackson, Alexander, xvi, 47 Jackson, Elizabeth, 97, 177 James, William, 69 joint deliberation, 203 Joint Deliberation case, 197 justification, adequate. See contextualist justification norm for practical reasoning (CJN) justification, epistemic, 63, 70 belief, 70, 182 credence, 70, 182 degree of, 70, 155, 160 doxastic, 52 equivalence thesis. See equivalence thesis (ET) knowledge-level justification, 102 propositional, 45 shiftiness, 182 threshold for justified belief, 178 Kant, Immanuel, 31 Kelly, Thomas, 142 Kickoff case, 53, 59, 75, 78 KK principle, 85

Downloaded from https://www.cambridge.org/core. , on , subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108992985.012


Knobe, Joshua and Schaffer, Jonathan, 112
know, being in a position to, 43
knowing the answer, 143
knowledge
  and credence 1, 96–98
  concept, 48, 104
  conjunctive ascriptions, 174–76
  defeat, 167
  function, 199
  higher-order, 57, 81, 92
  improbable, 91
  loose use, 113
  metaphysics of knowledge, 110
  modal embedding, 173
  practical adequacy condition. See practical adequacy condition (PAK)
  pragmatic encroachment. See pragmatic encroachment
  safety condition, 90, 98, 175
  shifts, 117
  stability, 186–87
  temporal embedding, 173
knowledge ascriptions. See knowledge
knowledge intuition, 83, 105
knowledge laundering, 204
knowledge norm for practical reasoning, xiv
  alternative norms, 52
  excuse-maneuver, xvi
  knowledge-of-probabilities maneuver, xvi, 78
knowledge+, 92
knowledge-first program, xix, 47, 48, 50, 104
  determination thesis, 47
  knowledge versus justification, 47
Knowledge–Hope Account (KHA), 45, 49
knowledge-level justification sufficiency (KJS), 101, 114, 218
Knowledge–Reason Sufficiency (KRS), 82
Kolodny, Niko and Brunero, John, 9
Kruglanski, Arie and Mayseless, Ofra, 152
Kvanvig, Jonathan, 109
Lackey, Jennifer, 56, 82
Lawlor, Krista, 63
laziness, 76
Levy, Neil, 100
Linking Principle (LP), 11
Littlejohn, Clayton, 53
Locke, Dustin, xvi, 57, 95, 160, 163–65, 188
logical omniscience, 97
logical truths, 97
loose use. See knowledge
lottery cases, xii, 36–40, 42–46
  infinite, 40
Low Costs of Error (LCOE), 111, 218
Low Stakes Good Reason (LGR), 141, 218

luminosity, 45, 91–94, 103
Lutz, Matt, 113
MacFarlane, John, 204
Martin, Adrienne, 35–36, 48
maxims, 69
May, Joshua, Sinnott-Armstrong, Walter, Hull, Jay G., and Zimmerman, Aaron,
McCormick, Miriam Schleifer, 35, 36
McGlynn, Aidan, 47
McGrath, Matthew, 109, See also Fantl, Jeremy and McGrath, Matthew
McKenna, Robin, 212
means–end reasoning, 31, See instrumental rationality
Meirav, Ariel, 36
memory, 74, 153, 172, 206
mental process, 7
mental states, 39, 64, 70, 150
Miller, Boaz, 217
Moore’s paradox, 8, 87
moral encroachment, 155, 214–15
Moss, Sarah, 214
Mueller, Andy, xiv, 16, 34, 136
Mueller, Andy and Ross, Jacob, xvi, 60
multiple decision contexts, 189
mundane cases, 52, See Kickoff case
Nagel, Jennifer, 99, 113, 135, 152
negation, 42
negligence, xiii
Neta, Ram, xvi, 52, 83, 120, 165
nondeliberative context. See context
nonluminosity. See luminosity
Norby, Aaron, 181
normative claim, 34, 47, 153
normative realm, 27
order of explanation, 67, 187–89
ordinal scale, 133
ordinary language, xiii, 6, See also vernacular language
ought from is, 153
outcome, action, 72, 131
Pace, Michael, 155
Parfit, Derek, 5–6, 15–18, 29
Parfit’s challenge, 6
parsimony, argument from, 22
partial belief. See credence
Pascal, Blaise, 157
perceptual experience, 71
permissibility, rational, xiv
perspective
  first-person, 13, 22, 44


Index third-person, 85, 88, 196, 209 Pinillos, N. Angel, 113, 217 possibility, alternative. See alternatives metaphysical, 40–41 spaces, 154 practical adequacy condition (PAK), 126–28, 172 Practical Certainty, 66 practical reasoning, 7 practical relevance, xiv, 68, 82 pragmatic encroachment about knowledge-level justification (PE-K*), 115, 191, 219 argument/explanation distinction, 140, 171 combinatorial view, 138 contextualized (PE-K*), 213 explanation of, 134–39, 169–72 genuine constituent explanation, 135, 169 on justified belief (PE-J), 176, 219 on knowledge (PE-K), 110, 219 on knowledge-level justification (PE-K*), 114–16 on reasons (PE-R), 141–43, 179, 181–82, 219 social-sensitivity, 196–205 subject-sensitivity, 196–99 total pragmatic encroachment (TPE), 180, 182 pragmatics, 86 preferences, 27, 76 primacy of the practical, xix, 157–60, 189 probability, epistemic, xvi, 37 projectivist strategy, 203 proposition, practically relevant. See practical relevance psychological disorder, 43 Puppy or Object case, 216 purism. See intellectualism questions semantics of, 144 settling, 145 racial profiling, 215 Rational End Pursuit (REP), 49 Rational Permissibility (RP), 79, 114, 125, 219 rationality epistemic, 4, See also justification, epistemic expected utility theory, xii, xx, 68, 95–98, 124 factivity, xv, 28, 62 instrumental, 14, 32–33 practical, xiv, xx, 4 reasons-based approach, xx, 69, 97 Raz, Joseph, xvii reason apparent, 29 epiphenomenon, 68 factualism, xx, 28–29


  for action, xix, 127
  for belief, xx, 142, 151
  most, 67
  particularism. See situation sensitivity
  practical reason for belief, xx, 4, 156
  prima facie, 10
  pro tanto, 127
  probabilistic, 78
  situation-sensitivity, 143
  strength of reason for belief, 144–49, 152
  weighing, 23
reasonable person standard, 63–64, 76, 132, 133, 148, 159
reasoning-disposition implicature account, 189
Reason–Knowledge Principle (RKP), xiv, 190, 219, See also knowledge norm for practical reasoning
Reed, Baron, 39, 87, 88, 120, 187
reflection, 45
reflection principle, 184
regulation conditions of norms, 25
relativism, 201
relevant alternatives theory, 145, See also alternatives
reward cases, 5, See also practical reasons for belief, Pascalian considerations
risk averseness, 90
Roeber, Blake, 82, 113, 116, 123–25
Ross, Jacob and Schroeder, Mark, 97, 126, 150, 187
Rubin, Katherine, 182
running down the clock, 171
Russell, Gillian and Doris, John, 194
Rysiew, Patrick, 86, 113
Savage, Leonard, 150
Schaffer, Jonathan, 170
Schroeder, Mark, 19, 134, 136, 166, 167, 170, 183
Sharon, Assaf and Spectre, Levi, 215
shifting thresholds view (STV), 136–38, 178, 219
  communal threshold, 137
  practical adequacy threshold, 137
  problem of pragmatic encroachment on justified belief, 177–79
  problem of pragmatic encroachment on reasons for belief, 179
  problem of pure epistemology, 180
Shin, Joseph, 111
Ship Owner case, 126
Simion, Mona, xii
simplifying function. See cognitive limitations
situation, 149, 152
skepticism, 119, 148, 158



Smithies, Declan, xvi, 52, 149
Snedegar, Justin, 165
Sorites paradox, 116
Sripada, Chandra and Stanley, Jason, 113, 143, 180
Staffel, Julia, 149, 186
stakes, 65, 72, 132, 194
  psychological effect, 153
Stanley, Jason, xii, 142, 173, 190, 195, 196, See also Sripada, Chandra and Stanley, Jason, Hawthorne, John and Stanley, Jason
stopping point, 81
storage hypothesis, 181
subjective–objective question, 61
subtraction argument, 51, 102
Surgeon case, 61, 83
Tang, Weng Hong, 149
testimony, 10, 75, 137, 198, 206
threshold problem for fallibilism, 214
total pragmatic encroachment (TPE), 219, See also pragmatic encroachment
  instability problem, 182–86
Train case, 75, 176
treating propositions as reasons, xv
Trust Fund Baby case (TFB), 194
truth, 9, 51, 69, 102, 135
truth-conducive factors, 110

truth-tracking, 145
uncertainty. See closing off, uncertainty
uniqueness thesis, 94
urgency, 65, 71, 78, See also Forced Choice case (FC)
vernacular language, 57, See also ordinary language
voluntariness, 43
warrant, 63, See also justification, epistemic
warranted assertability maneuver, 113
Way, Jonathan and Whiting, Daniel, 177
wealth, 194
Weatherson, Brian, 113
Wedgwood, Ralph, 141
Weisberg, Jonathan, xx, 97
Whiting, Daniel, 5, 160
Williamson, Timothy, xii, 37, 45, 47, 56, 91, 93, 96, 99, 167
withholding, x, 12, 97, 128, 136
Wittgenstein, Ludwig, 87
Worsnip, Alex, 86, 128–29, 156, 215
Wounded Hiker case, 78
Yablo, Stephen, 215
Yalcin, Seth, 39, 87
