Illuminating Errors (Routledge Studies in Epistemology) [1 ed.] 0367630427, 9780367630423



English Pages 342 [343] Year 2023



Table of contents :
Cover
Endorsements
Half Title
Series Page
Title Page
Copyright Page
Contents
List of Contributors
Acknowledgments
Introduction
PART I: The Possibility of Knowledge from Non-Knowledge
Section I: Justification and Essential Falsehoods
1. Norms of Belief and Knowledge from Non-Knowledge
2. We Are Justified in Believing that KFK Is Fundamentally Wrong
3. No Knowledge from Falsity
4. Harmless Falsehoods
5. Knowledge from Blindspots
Section II: Gettier, Safety, and Defeasibility
6. Knowledge from Error and Anti-Risk Virtue Epistemology
7. Epistemic Alchemy?
8. The Benign/Malignant Distinction for False Premises
9. Knowledge, Falsehood, and Defeat
PART II: Beyond the Possibility of Knowledge from Non-Knowledge
Section III: Reasoning, Hinges, and Cornerstones
10. The Developmental Psychology of Sherlock Holmes: Counter-Closure Precedes Closure
11. Inferential Knowledge, Counter Closure, and Cognition
12. Knowledge from Non-Knowledge in Wittgenstein's On Certainty: A Dialogue
13. Vaults across Reasoning
14. Entitlement, Leaching, and Counter-Closure
Section IV: Knowledge: From Falsehoods and of Falsehoods
15. Why Is Knowledge from Falsehood Possible?: An Explanation
16. The Assertion Norm of Knowing
17. Knowledge without Factivity
18. Knowing the Facts, Alternative and Otherwise
Index


Routledge Studies in Epistemology

ILLUMINATING ERRORS
NEW ESSAYS ON KNOWLEDGE FROM NON-KNOWLEDGE
Edited by Rodrigo Borges and Ian Schnee

“This book is unique in that it takes a highly focused set of questions that revolve around knowledge from non-knowledge and advances discussion of these questions from the perspectives of hinge-epistemology, anti-luck approaches to knowledge involving both safety and sensitivity constraints, virtue-theoretic epistemology, and knowledge-first approaches that emphasize the roles knowledge plays in licensing both theoretical and practical inferences.”
Aaron Rizzieri, Coconino Community College

Illuminating Errors

This is the first collection of essays exclusively devoted to knowledge from non-knowledge and related issues. It features original contributions from some of the most prominent and up-and-coming scholars working in contemporary epistemology. There is a nascent literature in epistemology about the possibility of inferential knowledge based on premises that are, for one reason or another, not known. The essays in this book explore if and how epistemology can accommodate cases where knowledge is generated from something other than knowledge. Can reasoning from false beliefs generate knowledge? Can reasoning from unjustified beliefs generate knowledge? Can reasoning from gettiered beliefs generate knowledge? Can reasoning from propositions one does not even believe generate knowledge? The contributors to this book tackle these and other questions head-on. Together, they advance the debate about knowledge from non-knowledge in novel and interesting directions. Illuminating Errors will be of interest to researchers and advanced students working in epistemology and philosophy of mind.

Rodrigo Borges is a Lecturer at the University of Florida. He works mainly in epistemology. He is currently working on a monograph about the Gettier Problem and knowledge.

Ian Schnee is an Associate Teaching Professor at the University of Washington. He is the author of The Logic Course Adventure, an interactive textbook for formal logic courses. Besides epistemology, his research interests include philosophy of film, philosophy of video games, and pedagogy.

Routledge Studies in Epistemology Edited by Kevin McCain, University of Alabama at Birmingham, USA and Scott Stapleford, St. Thomas University, Canada

Epistemic Dilemmas: New Arguments, New Angles
Edited by Kevin McCain, Scott Stapleford and Matthias Steup

Propositional and Doxastic Justification: New Essays on Their Nature and Significance
Edited by Paul Silva Jr. and Luis R.G. Oliveira

Epistemic Instrumentalism Explained
Nathaniel Sharadin

New Perspectives on Epistemic Closure
Edited by Duncan Pritchard and Matthew Jope

Epistemic Care: Vulnerability, Inquiry, and Social Epistemology
Casey Rebecca Johnson

The Epistemology of Modality and Philosophical Methodology
Edited by Anand Jayprakash Vaidya and Duško Prelević

Rational Understanding: From Explanation to Knowledge
Miloud Belkoniene

Illuminating Errors: New Essays on Knowledge from Non-Knowledge
Edited by Rodrigo Borges and Ian Schnee

For more information about this series, please visit: https://www.routledge.com/Routledge-Studies-in-Epistemology/book-series/RSIE

Illuminating Errors
New Essays on Knowledge from Non-Knowledge

Edited by Rodrigo Borges and Ian Schnee

First published 2024 by Routledge
605 Third Avenue, New York, NY 10158
and by Routledge
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2024 selection and editorial matter, Rodrigo Borges and Ian Schnee; individual chapters, the contributors

The right of Rodrigo Borges and Ian Schnee to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

ISBN: 978-0-367-63042-3 (hbk)
ISBN: 978-0-367-63303-5 (pbk)
ISBN: 978-1-003-11870-1 (ebk)

DOI: 10.4324/9781003118701

Typeset in Sabon LT Std by KnowledgeWorks Global Ltd.

Contents

List of Contributors x
Acknowledgments xi

Introduction 1
Rodrigo Borges and Ian Schnee

PART I: The Possibility of Knowledge from Non-Knowledge 9

Section I: Justification and Essential Falsehoods

1. Norms of Belief and Knowledge from Non-Knowledge 11
E. J. Coffman

2. We Are Justified in Believing that KFK Is Fundamentally Wrong 28
Peter D. Klein

3. No Knowledge from Falsity 44
Fred Adams

4. Harmless Falsehoods 59
Martin Montminy

5. Knowledge from Blindspots 76
Rhys Borchert, Juan Comesaña, and Timothy Kearl

Section II: Gettier, Safety, and Defeasibility

6. Knowledge from Error and Anti-Risk Virtue Epistemology 93
Duncan Pritchard

7. Epistemic Alchemy? 104
Stephen Hetherington

8. The Benign/Malignant Distinction for False Premises 120
Claudio de Almeida

9. Knowledge, Falsehood, and Defeat 139
Sven Bernecker

PART II: Beyond the Possibility of Knowledge from Non-Knowledge 159

Section III: Reasoning, Hinges, and Cornerstones

10. The Developmental Psychology of Sherlock Holmes: Counter-Closure Precedes Closure 161
Roy Sorensen

11. Inferential Knowledge, Counter Closure, and Cognition 183
Michael Blome-Tillmann and Brian Ball

12. Knowledge from Non-Knowledge in Wittgenstein’s On Certainty: A Dialogue 196
Michael Veber

13. Vaults across Reasoning 215
Peter Murphy

14. Entitlement, Leaching, and Counter-Closure 231
Federico Luzzi

Section IV: Knowledge: From Falsehoods and of Falsehoods

15. Why Is Knowledge from Falsehood Possible?: An Explanation 258
John Turri

16. The Assertion Norm of Knowing 273
John Biro

17. Knowledge without Factivity 286
Kate Nolfi

18. Knowing the Facts, Alternative and Otherwise 312
Clayton Littlejohn

Index 329

List of Contributors

Fred Adams, University of Delaware, USA
Brian Ball, New College of the Humanities, UK
Sven Bernecker, University of California Irvine, USA
John Biro, University of Florida, USA
Michael Blome-Tillmann, McGill University, Canada
Rhys Borchert, The University of Arizona, USA
E. J. Coffman, The University of Tennessee Knoxville, USA
Juan Comesaña, Rutgers University, USA
Claudio de Almeida, Pontifical Catholic University of Rio Grande do Sul, Brazil
Stephen Hetherington, University of New South Wales, Australia
Timothy Kearl, The University of Arizona, USA
Peter D. Klein, Rutgers University, USA
Clayton Littlejohn, King’s College London, UK
Federico Luzzi, University of Aberdeen, UK
Martin Montminy, The University of Oklahoma, USA
Peter Murphy, University of Indianapolis, USA
Kate Nolfi, The University of Vermont, USA
Duncan Pritchard, University of California Irvine, USA
Roy Sorensen, The University of Texas at Austin, USA
John Turri, University of Waterloo, Canada
Michael Veber, East Carolina University, USA

Acknowledgments

We would like to thank a few people for their support, help, and patience as we worked to put this volume together. First and foremost, we would like to thank the contributors to this volume. We also thank the series editors for Routledge, Kevin McCain and Scott Stapleford, and all those at Routledge who were involved in the production of this volume. Finally, we thank Branden Fitelson for suggesting, some time ago, that we should work together, given our common interest in the issues in this volume.

Rodrigo Borges, Gainesville, FL, and Ian Schnee, Seattle, WA
November 2022

Introduction

Rodrigo Borges and Ian Schnee

I.1 Introduction

This volume contributes to the nascent literature in epistemology about the possibility of knowledge from error or ignorance. If this possibility holds, we might call these errors ‘illuminating’, since they (somehow) lead us to knowledge. We believe this volume makes a substantial contribution to this literature because it not only moves forward existing debates surrounding the possibility of knowledge from error, but also enlarges the debate by introducing fresh perspectives on the issues.

I.2 A Brief History of Knowledge from Non-Knowledge

The recent debate1 surrounding the possibility of knowledge from error took shape in the 2000s, after the publication of agenda-setting papers by Ted Warfield and Peter Klein.2 Warfield and Klein wondered out loud about the possibility of inferential knowledge (i.e., knowledge of a conclusion) via a single false premise. This is the type of example Warfield and Klein had in mind.

Party Hats
Liz counts 35 children at her son’s birthday party and concludes that the 100 party hats she bought for the party are enough. However, there are 36 children at the party – one child ran to the bathroom after Liz started counting heads.

Liz’s false premise that there are 35 children at the birthday party seems justified, and she seems to know the conclusion she deduces from this premise, that she bought enough party hats for the children. The intuition seems to be that, although strictly speaking false, her premise is close enough to the truth to grant her knowledge of the conclusion. Warfield and Klein took cases like Party Hats to illustrate the possibility of inferential knowledge from a single false premise. They also noted

that cases like Party Hats are structurally similar to Gettier cases. To see that, take one of Gettier’s own cases.

Ten Coins
Smith overhears the boss saying that Jones will get the job. He also saw Jones count ten coins he found in his pocket. Smith concludes that the man who will get the job has ten coins in his pocket. Unbeknownst to Smith, he (not Jones) will get the job and Smith himself has ten coins in his pocket.

Smith, like Liz, has a justified but false belief from which he deduces a true and justified conclusion. However, Warfield and Klein noted, Liz seems to know her conclusion, while Smith doesn’t. They also noted that the divergent responses to these structurally similar cases put pressure on the common idea3 that what goes awry in many Gettier cases has something to do with the fact that the protagonist in them infers a justified true belief from a justified but false belief. This common interpretation of inferential Gettier cases seems too optimistic, given cases like Party Hats, in which the justified but false premise seems responsible for knowledge and not ignorance, as in Ten Coins. What is more, if the Gettier Problem is in fact the litmus test for theories of knowledge that epistemologists claim it is, then this difference in epistemic assessment needs to be explained (or explained away). Both Warfield and Klein endeavored to do just that. Whether or not they succeeded, the question of what to say about cases like Ten Coins in light of structurally similar cases like Party Hats is important, and many theorists tackled it after the contributions by Warfield and Klein. On the heels of Warfield’s and Klein’s work, others argued that knowledge may come not only from false premises4 but also from premises that are gettiered, unjustified, or not even believed.
Federico Luzzi introduced the possibility of inferential knowledge from a gettiered premise.5 Peter Murphy then argued that knowledge might also come from premises that are not justified6 or even believed.7 Assuming one does not know that p if one’s belief that p is gettiered or unjustified, the work of Luzzi and Murphy seems to show that knowledge from falsehood is not an isolated phenomenon but an instance of the wider phenomenon of knowledge from non-knowledge. In other words, according to Luzzi and Murphy, the nature of inferential knowledge is incompatible with the venerable principle which states that knowledge of the premises is necessary for knowledge of the conclusion. This principle is known as the knowledge from knowledge principle, or counter-closure, and it is a principle of inferential knowledge virtually everyone from Aristotle onward has accepted.8 Luzzi stated the principle thus9:

(CC) Necessarily, if (i) S knows that p entails q, (ii) S comes to believe q solely on the basis of competently deducing it from p, and (iii) S knows q, then S knows p.

The principle involves a single premise or basis (i.e., ‘p’), which it requires to be known. A multi-premise or multi-basis version is readily available, however, and it was introduced by Brian Ball and Michael Blome-Tillmann10:

(MCC) If (i) S knows q, and (ii) S believes q solely on the basis of a competent deduction from some premises including p, then (iii) S knows p.

In their discussion of (MCC), Ball and Blome-Tillmann advanced an argument against the possibility of knowledge from falsehood. According to them, cases like Party Hats overemphasize the subject’s explicit false premise at the expense of the implicit knowledge at play in the subject’s reasoning. The overemphasis on the subject’s false premise creates the illusion of knowledge from falsehood, while the fuller picture that includes what the subject implicitly knows depicts those cases for what they are – cases of knowledge despite falsehood.11 For example, Liz’s explicit inference in Party Hats takes 1 as the sole basis for her inference to 2:

1. There are 35 children at the birthday party.
2. My 100 party hats are sufficient.

On its face, the fact that Liz knows 2 on the basis of her false belief in 1 refutes (MCC), since this principle requires S’s inferential bases to be known. This is too hasty, argue Ball and Blome-Tillmann, since we seem to silently assume that Liz also knows the following:

n. The result of her count is ‘35’.
n1. There are approximately 35 children at the birthday party.
n2. 35 is far less than 100.

But if our assumption about Liz’s situation is correct, suggest Ball and Blome-Tillmann, then Party Hats does not refute (MCC), since her belief in 2 is also based on her tacit knowledge of n, n1, and n2.
In other words, in cases like Party Hats, S’s implicit reliance on relevant background knowledge has priority over S’s explicit reliance on a falsehood when it comes to explaining why we believe there is knowledge in those cases.12 Proponents of knowledge from falsehood have replied to this type of response.13 The debate over whether it blocks the possibility of knowledge from non-knowledge is still ongoing. The viability of principles such as (CC) and (MCC) has received increasing attention since the work of Warfield and Klein. As we have seen, some argue that cases of knowledge from non-knowledge show that (MCC) is false, while others claim that those cases show no such

thing. This is a debate of vital importance, since it has consequences not only for the adjudication of cases of knowledge from non-knowledge (whether they in fact show that this is possible), but also for our understanding of reasoning itself.14 Many others joined the discussion. Often, they explored and applied the consequences of knowledge from non-knowledge to central questions in epistemology: epistemic luck,15 the nature of evidence,16 epistemic defeat,17 and more. Although recent, the literature on the possibility of knowledge from non-knowledge is vibrant, and the epistemological ramifications of the debate are deep. As we will see in the next section, the contributions in this volume bear this out.

I.3 The Volume’s Contribution to Knowledge from Non-Knowledge

This volume brings together philosophers who have been thinking about knowledge from non-knowledge since the early 2000s and philosophers who are trying their hand at the issue for the first time. Contributions also explore new angles on the issue of knowledge from non-knowledge. The volume is divided into two parts, with two sections per part. The chapters in the first part discuss whether knowledge from non-knowledge is possible, while the chapters in the second part of the book discuss issues that go beyond this possibility. The chapters in Section I focus on epistemic justification, essential dependence on falsehood, and how these issues affect the possibility of knowledge from non-knowledge. The chapters in Section II look at inferential Gettier cases in order to explore whether knowledge from non-knowledge is in fact possible. Section III looks at how knowledge from non-knowledge relates to the epistemological and psychological nature of explicit (and implicit) reasoning, and the epistemic status of tacit assumptions (hinges and cornerstones). Finally, Section IV, the most provocative section in the book, looks at knowledge from non-knowledge in order to shed light on the common assumption that knowledge requires the truth of what is known (factivity). We now turn to a brief description of each contribution in the order in which they appear.

In ‘Norms of Belief and Knowledge from Non-Knowledge’, E. J. Coffman explores the plausibility of the following principle governing justified belief:

(OJ) You ought to believe that p iff you have justification to believe that p.

According to Coffman, the fact that this principle might entail the possibility of knowledge from non-knowledge should not stop epistemologists from endorsing it. Peter D.
Klein, in his chapter ‘We Are Justified in Believing that KFK Is Fundamentally Wrong’, seeks to show that the knowledge from knowledge principle contains a fatal problem. According to Klein, the principle mistakenly requires that the beliefs containing the basis for what we inferentially know be

doxastically justified. This, he argues, is an unwelcome commitment of the principle, since not all our beliefs have to be doxastically justified in order to serve as reasons for other beliefs – some need only be propositionally justified in order to play this role. In his ‘No Knowledge from Falsity’, Fred Adams argues that a particular account of knowledge as truth-tracking (information-actuated belief) excludes the possibility of knowledge from non-knowledge while avoiding some of the pitfalls of other views in the literature. In particular, Adams argues that false beliefs are sometimes merely causally relevant to knowing, and that positing non-conscious beliefs in order to explain away knowledge from non-knowledge is explanatorily idle. For Martin Montminy, however, non-conscious beliefs are anything but explanatorily idle. In his ‘Harmless Falsehoods’ chapter, tacit knowledge plays an important role, and for Montminy, this is the key to understanding why false beliefs lead to knowledge only if they are ‘harmless’. In ‘Knowledge from Blindspots’, Rhys Borchert, Juan Comesaña, and Timothy Kearl present a new type of case that seems to show that knowledge from non-knowledge is possible. According to them, knowledge may be based on an unknowable but essential false premise (a blindspot). Their argument adds new ammunition against the knowledge from knowledge principle and in support of the possibility of knowledge from non-knowledge. Although Duncan Pritchard recognizes that in general one does not acquire knowledge from a falsehood (a fact the original Gettier cases illustrate), his ‘Knowledge from Error and Anti-Risk Virtue Epistemology’ argues that knowledge from non-knowledge is compatible with the account of knowledge in anti-risk virtue epistemology.
According to his view, the difference between knowledge-preventing gettierization and knowledge-yielding error is that the true belief in the latter kind of case is not only safe but also significantly attributable to the subject’s manifestation of cognitive ability. Stephen Hetherington’s ‘Epistemic Alchemy?’, on the other hand, argues against the idea that a modal condition on knowledge such as safety can account for the difference between Gettier cases and cases of knowledge from non-knowledge. His interpretative key is a particular view of knowledge he calls ‘knowledge minimalism’, according to which some knowledge is mere true belief. Claudio de Almeida takes up the question of what distinguishes genuine cases of knowledge from falsehood from inferential Gettier cases in his ‘The Benign/Malignant Distinction for False Premises’. He argues that, when formulated in the right way, the defeasibility account of knowledge has the resources it needs to draw the distinction between knowledge-preventing (malignant) and knowledge-yielding (benign) falsehoods. Sven Bernecker’s ‘Knowledge, Falsehood, and Defeat’ disagrees with de Almeida’s assessment. He argues that defeasibility accounts of

knowledge such as de Almeida’s and Klein’s cannot explain the distinction between those types of cases. In a tour de force through the history of epistemology, developmental psychology, and the history of logic, Roy Sorensen’s ‘The Developmental Psychology of Sherlock Holmes: Counter-Closure Precedes Closure’ argues, with verve, for the conclusion that counter-closure is an untenable principle of reasoning. According to him, some instances of the general argument form ‘P, therefore, P’ can generate knowledge even though the reasoner does not know their premise. In their chapter ‘Inferential Knowledge, Counter Closure, and Cognition’, Brian Ball and Michael Blome-Tillmann argue that, despite appearances to the contrary, Sorensen’s case does not show that knowledge from non-knowledge is possible. They claim that careful consideration of the psychological literature on implicit reasoning shows that every alleged case of knowledge from non-knowledge (including Sorensen’s) is, in reality, a case of knowledge despite non-knowledge. Michael Veber’s ‘Knowledge from Non-Knowledge in Wittgenstein’s On Certainty: A Dialogue’ investigates Ludwig Wittgenstein’s suggestion that our knowledge rests on commitments to things we do not know. If this is right, then these ‘hinge propositions’ are responsible for quite a lot of knowledge from non-knowledge. Veber’s humor-infused dialogue investigates what exactly Wittgenstein might have meant by his suggestion, and what lessons we may draw for epistemology as a whole. Federico Luzzi’s chapter, ‘Entitlement, Leaching, and Counter-Closure’, applies the principle of counter-closure to a discussion concerning Crispin Wright’s approach to skeptical arguments. Wright and his critics argue over the compatibility of ‘cornerstone propositions’ (akin to Wittgenstein’s hinge propositions) and closure principles. According to Luzzi, however, the real issue at stake concerns counter-closure, not closure.
We can therefore make philosophical progress by giving up counter-closure, which Luzzi thinks we have independent grounds to do. In ‘Vaults across Reasoning’, Peter Murphy creates a taxonomy of arguments that can generate knowledge from non-knowledge. He is interested in three main categories of such arguments: vaults-to-knowledge, vaults-to-sub-knowledge, and extended vaults. His goal is to broaden the current understanding of knowledge from non-knowledge by providing a systematic taxonomy epistemologists can use in their discussions. The last section contains four chapters discussing the prospect of knowing what is false. These chapters discuss the factivity condition on knowledge and whether rejecting this condition might inform the discussion of knowledge from non-knowledge (and vice versa). Prima facie, knowledge from knowledge principles derive some plausibility from the assumption that knowledge requires truth (especially in alleged cases of knowledge from falsehood). John Turri’s ‘Why Is Knowledge from Falsehood Possible?: An Explanation’ argues that knowledge from

non-knowledge is not all that surprising given that it is possible for one to know things that are strictly false. John Biro’s ‘The Assertion Norm of Knowing’ pushes the discussion a few steps further and argues that knowledge requires being entitled to assert, not belief or truth. Kate Nolfi, in turn, argues in ‘Knowledge without Factivity’ that an action-oriented virtue-theoretic epistemology will lead us to regard the possibility of knowledge of falsehoods with less puzzled eyes. Finally, Clayton Littlejohn’s ‘Knowing the Facts, Alternative and Otherwise’ argues in support of the factivity condition on knowledge. According to him, the case against factivity generalizes in problematic ways.

Notes

1. A less brief history of the literature on knowledge from non-knowledge would spend some time praising the groundbreaking work done by John T. Saunders and Narayan Champawat, and Risto Hilpinen. Their work discussed cases similar to Warfield’s and Klein’s. Our brief history is limited to the more recent history of this literature. See John T. Saunders and Narayan Champawat (1967). ‘Mr. Clark’s Definition of Knowledge’ in: Analysis 25, pp. 8–9; Risto Hilpinen (1988). ‘Knowledge and Conditionals’ in: J. Tomberlin (ed.) Philosophical Perspectives 2: Epistemology. Ridgeview, pp. 157–182.
2. Ted Warfield (2005). ‘Knowledge from Falsehood’ in: Philosophical Perspectives 19:1, pp. 405–416, and Peter D. Klein (2008). ‘Useful False Beliefs’ in: Quentin Smith (ed.) New Essays in Epistemology. Blackwell and Oxford University Press, pp. 25–63.
3. For example, Michael Clark (1963). ‘Knowledge and Grounds: A Comment on Mr. Gettier’s Paper’ in: Analysis 24:2, pp. 46–48; Robert Shope (1983). The Analysis of Knowing. Princeton University Press; Richard Feldman (2003). Epistemology. Prentice Hall, pp. 27–28; and Rodrigo Borges (2017). ‘The Gettier Conjecture’ in: R. Borges, C. de Almeida, P. Klein (eds.) Explaining Knowledge: New Essays on the Gettier Problem. Oxford University Press, pp. 273–291.
4. Some of the work that sought to strengthen the case for the possibility of knowledge from falsehood includes E. J. Coffman (2008). ‘Warrant Without Truth?’ in: Synthese 162, pp. 173–194; Branden Fitelson (2010). ‘Strengthening the Case for Knowledge from Falsehood’ in: Analysis 70, pp. 666–669; and Alexander Arnold (2013). ‘Some Evidence is False’ in: Australasian Journal of Philosophy 91, pp. 165–172.
5. See Federico Luzzi (2010). ‘Counter-Closure’ in: Australasian Journal of Philosophy 88, pp. 673–683.
6. Peter Murphy (2013). ‘Another Blow to Knowledge from Knowledge’ in: Logos & Episteme IV, pp. 311–317.
7. Peter Murphy (2015).
‘Justified Belief from Unjustified Belief’ in: Pacific Philosophical Quarterly 98:4, pp. 602–617.
8. Aristotle, Posterior Analytics. Trans. Jonathan Barnes, 1994. Clarendon Press. Also see Federico Luzzi (2019). Knowledge from Non-Knowledge. Cambridge University Press, pp. 1–2.
9. Federico Luzzi (2010). ‘Counter-Closure’ in: Australasian Journal of Philosophy 88, p. 674.
10. Brian Ball and Michael Blome-Tillmann (2014). ‘Counter Closure and Knowledge Despite Falsehood’ in: Philosophical Quarterly 64:257, p. 552.

11. The ‘knowledge despite falsehood’ camp also includes, among others, Martin Montminy (2014). ‘Knowledge Despite Falsehood’ in: Canadian Journal of Philosophy 44:3–4, pp. 463–475; and Ian Schnee (2015). ‘There Is No Knowledge from Falsehood’ in: Episteme 12:1, pp. 53–74.
12. Others have proposed similar strategies to counteract cases of knowledge from falsehood. For some examples, see the references in the previous footnote.
13. For example, Branden Fitelson (2017). ‘Counterclosure and Inferential Knowledge’ in: R. Borges, C. de Almeida, P. Klein (eds.) Explaining Knowledge: New Essays on the Gettier Problem. Oxford University Press; Christopher Buford and Christopher Michael Cloos (2016). ‘A Dilemma for the Knowledge Despite Falsehood Strategy’ in: Episteme 15:2, pp. 166–182; and Federico Luzzi (2019). Knowledge from Non-Knowledge. Cambridge University Press, pp. 10–31.
14. See, among others, Claudio de Almeida (2017). ‘Knowledge, Benign Falsehoods, and the Gettier Problem’ in: R. Borges, C. de Almeida, P. Klein (eds.) Explaining Knowledge: New Essays on the Gettier Problem. Oxford University Press, pp. 292–311; Rodrigo Borges (2020). ‘Knowledge from Knowledge’ in: American Philosophical Quarterly 57:3, pp. 283–298.
15. For example, Neil Feit and Andrew Cullison (2011). ‘When Does Falsehood Preclude Knowledge?’ in: Pacific Philosophical Quarterly 92, pp. 283–304.
16. For example, Clayton Littlejohn (2013). ‘No Evidence is False’ in: Acta Analytica 28:2, pp. 145–159; Alexander Arnold (2013). ‘Some Evidence is False’ in: Australasian Journal of Philosophy 91:1, pp. 165–172.
17. Cf. Robert K. Shope (2017). ‘Chained to the Gettier Problem: A Useful Falsehood?’ in: R. Borges, C. de Almeida, P. Klein (eds.) Explaining Knowledge: New Essays on the Gettier Problem. Oxford University Press, pp. 96–116.

Part I

The Possibility of Knowledge from Non-Knowledge

Section I

Justification and Essential Falsehoods

1

Norms of Belief and Knowledge from Non-Knowledge E. J. Coffman

I

Suppose you’re entertaining a proposition, P, whose truth-value matters to you. Under what conditions is it the case that you epistemically ought to believe that P?1 I’ll consider how some prominent answers to that question bear on the possibility of an instance of knowledge that depends for its status as such on a belief that doesn’t constitute knowledge. I’ll focus on this thesis:

(OJ) You ought to believe that P iff you have justification to believe that P.2

John Gibbons (2013, pp. 276–288)—a leading defender of (OJ)—develops, but then attempts to defeat, the following argument against (OJ):

(1) If (OJ) is true, then there could be an instance of knowledge that depends for its status as such on a false belief.
(2) There couldn’t be an instance of knowledge that depends for its status as such on a false belief.
So, (3) (OJ) is false.

Gibbons grants (2) but attempts to undermine (1). Now, if (1) is true, then an otherwise appealing “traditionalist” epistemological position that combines (OJ) with the thesis that inferential knowledge of a conclusion requires known relevant premises is in fact inconsistent (cf. Luzzi, 2019, pp. 97–98).3 So (1)’s truth would be at least somewhat surprising. I aim to strengthen the connection between (OJ) and the questions about the nature of inferential knowledge that this volume explores. After some stage-setting, I reconstruct a promising argument for (1) that Gibbons suggests. On my reconstruction, Gibbons’ suggested argument for (1) has two premises. Gibbons concedes one of these premises but

rejects the other. After defeating Gibbons' objection to the premise of his suggested argument for (1) that he rejects, I present and defend an argument for the premise that he concedes. I close by reflecting on the prospects of some arguments against certain of (OJ)'s competitors that parallel the above argument against (OJ).
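Readers who want the bare logical skeleton can check the validity of the anti-(OJ) argument mechanically. The following sketch (in Lean, purely illustrative; the propositional labels `OJ` and `KFF` are my shorthand, not the author's) renders premises (1) and (2) as hypotheses and derives (3) by modus tollens:

```lean
-- Propositional skeleton of the argument against (OJ).
-- OJ  : "(OJ) is true"
-- KFF : "there could be an instance of knowledge that depends
--        for its status as such on a false belief"
variable (OJ KFF : Prop)

-- Premise (1): h1 : OJ → KFF
-- Premise (2): h2 : ¬KFF
-- Conclusion (3): ¬OJ, by modus tollens.
example (h1 : OJ → KFF) (h2 : ¬KFF) : ¬OJ :=
  fun hOJ => h2 (h1 hOJ)
```

The substantive dispute in the chapter is of course over the premises, not the validity of this inference.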

II

To ensure proper understanding of the above argument against (OJ), we must distinguish between a belief's depending on X for its status as knowledge and a belief's depending on X for its overall epistemic status. If a belief depends on X for its status as knowledge, then it depends on X for its overall epistemic status. But a belief might depend on X for its overall epistemic status yet not depend on X for its status as knowledge. Examples like this one loom large in the literature on the possibility of knowledge from non-knowledge:

Extra Reasons
Smith has two independent sets of reasons for thinking that someone in his office owns a Ford. One set has to do with Nogot. Nogot says he owns a Ford, and so on. As usual, Nogot is merely pretending. But Smith also has equally strong reasons having to do with Havit. And Havit is not pretending. Havit does own a Ford, and Smith knows that he owns a Ford.
(Feldman, 2003, p. 33)

We'll return to Extra Reasons shortly. For now, imagine a variant of Extra Reasons wherein Smith knows not only that Havit owns a Ford but also that Nogot owns a Ford. Smith's belief that someone in his office owns a Ford would still constitute knowledge even if it were based wholly on his belief about Havit. So, Smith's belief that someone in his office owns a Ford doesn't depend for its status as knowledge on his belief about Nogot. But Smith's belief that someone in his office owns a Ford would be less justified were it based wholly on his belief about Havit. So, Smith's belief that someone in his office owns a Ford depends for its overall justificatory status on his belief about Nogot. Accordingly, dependence on X for status as knowledge is the stronger of the two notions highlighted in the last paragraph.

In light of cases like Extra Reasons, it's clear that there could be a knowledge-constituting belief that depends for its overall epistemic status on a "non-known" belief.
In Extra Reasons (as in the more realistic variant), Smith’s belief that someone in his office owns a Ford would be less justified were it not based on his belief about Nogot. What’s less clear is whether there could be a knowledge-constituting belief

that depends for its status as knowledge on a non-known belief. Extra Reasons doesn't establish the latter possibility, since (as in the more realistic variant) Smith's belief that someone in his office owns a Ford doesn't depend for its status as knowledge on his belief about Nogot.

III

Recall premise (1) of the argument against (OJ) that Gibbons engages:

(1) If (OJ) is true, then there could be an instance of knowledge that depends for its status as such on a false belief.

Gibbons suggests, and then attempts to defeat, a promising argument for (1). Gibbons' suggested argument for (1) unfolds against the backdrop of a case like this:

You're looking at a counterfeit $20 bill on the ground. Through your visual experience as of a $20 bill on the ground, you acquire a justified false belief that there's a $20 bill on the ground. You then reason as follows: "(a) There's a $20 bill on the ground. (b) If there's a $20 bill on the ground, then I ought to believe that there's a $20 bill on the ground. So, (c) I ought to believe that there's a $20 bill on the ground."
(Counterfeit Bills 1)

Gibbons' suggested argument for (1) runs as follows:

(4) If (OJ) is true, then you could come to know (c) through inference from (a) and (b).
(5) If you could come to know (c) through inference from (a) and (b), then there could be an instance of knowledge that depends for its status as such on a false belief.
So,
(1) If (OJ) is true, then there could be an instance of knowledge that depends for its status as such on a false belief.

Gibbons grants (5) but rejects (4). According to Gibbons, (OJ) entails that while you could acquire a non-accidentally true belief of (c) through inference from (a) and (b), you couldn't acquire a justified belief of (c) through inference from (a) and (b). By Gibbons' lights, the fact that explains how (given OJ) you could acquire a non-accidentally true belief of (c) through inference from (a) and (b) also explains why you couldn't acquire a justified belief of (c) through inference from (a) and (b). But if you couldn't acquire a justified belief of (c) through inference from

(a) and (b), then you couldn't come to know (c) through inference from (a) and (b). Therefore, even if (OJ) is true, you couldn't come to know (c) through inference from (a) and (b).

How, given (OJ), could you acquire a non-accidentally true belief of (c) through inference from (a) and (b)? Assume that (OJ) is true. Now imagine that, while having justification to believe (a) (= There's a $20 bill on the ground), you acquire a belief of (c) (= I ought to believe that there's a $20 bill on the ground) through inference from (a) and (b) (= If there's a $20 bill on the ground, then I ought to believe that there's a $20 bill on the ground). By (OJ)'s right-to-left conditional, your source of justification to believe (a) makes it true that you ought to believe (a). So, your source of justification to believe (a) makes (c) true. Moreover, your source of justification to believe (a) also gives you justification to believe (c).4 So, your source of justification to believe (c) (= your source of justification to believe (a)) makes (c) true. If a belief is made true by its subject's source of justification for its content, then that belief is non-accidentally true.5 So, your belief of (c) is non-accidentally true. Hence, if (OJ) is true, you could acquire a non-accidentally true belief of (c) through inference from (a) and (b). What Gibbons claims is that, even if (OJ) is true, you couldn't acquire a justified belief of (c) through inference from (a) and (b).

Before examining Gibbons' argument against (4), let's consider a different (from Gibbons') worry about (4). A theorist might deny (4) on the ground that, in a case like Counterfeit Bills 1, you can't justifiedly believe the (admittedly odd-seeming) material conditional (b). But I follow Gibbons (2013, p. 259ff.) in thinking that in a case like Counterfeit Bills 1, you can indeed justifiedly believe (b).
Notice first that the denial of (b) is equivalent to the following "Moore-paradoxical" proposition:

(MP1) There is a $20 bill on the ground, but it's false that I ought to believe that there's a $20 bill on the ground.

Since you can't justifiedly believe (MP1), you can't justifiedly disbelieve (b). Moreover, since (by stipulation) you justifiedly believe (a), you can't justifiedly suspend judgment on (b). Justifiedly suspending judgment on (b) while justifiedly believing (a) would be tantamount to justifiedly believing the following different (from MP1) "Moore-paradoxical" proposition (cf. Gibbons, 2013, p. 213):

(MP2) There is a $20 bill on the ground, but I don't know whether I ought to believe that there's a $20 bill on the ground.

You can't justifiedly believe (MP2) either. So, since (by stipulation) you justifiedly believe (a), you can't justifiedly suspend judgment on (b).

So, you can't justifiedly reject (that is, disbelieve or suspend judgment on) (b). But surely you can, in a case like Counterfeit Bills 1, justifiedly take some or other doxastic attitude toward (b). So, in a case like Counterfeit Bills 1, you can justifiedly believe (b).

Back to the main thread. Here's my reconstruction of Gibbons' argument against (4):

(6) Even if (OJ) is true, your source of justification to believe (a) also gives you justification to believe (c).
(7) If your source of justification to believe P also gives you justification to believe Q, then you couldn't come to justifiedly believe Q on the basis of your belief of P.6
So,
(8) Even if (OJ) is true, you couldn't come to justifiedly believe (c) through inference from (a) and (b). [6, 7]
(9) If you couldn't come to justifiedly believe (c) through inference from (a) and (b), then you couldn't come to know (c) through inference from (a) and (b).
So,
(~4) Even if (OJ) is true, you couldn't come to know (c) through inference from (a) and (b). [8, 9]

What should we make of this argument? I'm happy to concede (6) and (9), but not (7). I maintain that (7) over-ascribes failure of transmission of doxastic justification. The following case strikes me as a clear counterexample to (7)7:

You see that there's a rabbit in the yard. Your seeing that there's a rabbit in the yard is your source of justification to believe both (i) that you see that there's a rabbit in the yard and (ii) that there's a visible animal in the yard.8 On the basis of your seeing that there's a rabbit in the yard, you come to justifiedly believe that you see that there's a rabbit in the yard. You then come to justifiedly believe that there's a visible animal in the yard, on the basis of your justified belief that you see that there's a rabbit in the yard.
So, you’ve come to justifiedly believe that there’s a visible animal in the yard, through inference from the proposition that you see that there’s a rabbit in the yard, despite the fact that your source of justification to believe that you see that there’s a rabbit in the yard—namely, your seeing that there’s a rabbit in the yard—also gives you justification to believe that there’s a visible animal in the yard.

Granted, the way in which you've come to believe that there's a visible animal in the yard is somewhat odd. You could have come to believe the indicated proposition directly in response to the pertinent perceptual state, rather than by inferring it from the proposition that you see that there's a rabbit in the yard. It's plausible to think that this "inefficient" way in which you've come to believe that there's a visible animal in the yard engenders whatever sense theorists may have that your belief of that proposition is infelicitous. But the relevant kind of "inefficiency" needn't prevent an inference from providing you with a new justified belief of its conclusion (cf. Tucker, 2010, p. 511ff.). Reflection on the above case reveals that (7) is false, thus defeating Gibbons' argument against (4). Moreover, the foregoing discussion provides reason to think that (4) is true, since it provides reason to think that (given (OJ)) you could indeed acquire a justified and non-accidentally true belief of (c) through inference from (a) and (b).

IV

Recall the other premise of Gibbons' suggested argument for (1):

(5) If you could (in Counterfeit Bills 1) come to know (c) through inference from (a) and (b), then there could be an instance of knowledge that depends for its status as such on a false belief.

As noted above, Gibbons simply concedes the truth of (5). But we can argue for (5) as follows:

(10) If you could (in Counterfeit Bills 1) come to know (c) through inference from (a) and (b), then you could form a knowledge-constituting belief of (c) wholly on the basis of your false belief of (a) by competently deducing (c) from (a).
(11) Necessarily, if (at t) you form a knowledge-constituting belief of (c) wholly on the basis of your belief of (a) by competently deducing (c) from (a), then (at t) your belief of (c) depends for its status as knowledge on your belief of (a).
So,
(5) If you could (in Counterfeit Bills 1) come to know (c) through inference from (a) and (b), then there could be an instance of knowledge that depends for its status as such on a false belief. [10, 11]

What should we make of this argument?

As for (10), I assume that if you could (in a case like Counterfeit Bills 1) come to know a conclusion through tokening of a certain Modus Ponens argument, then you could form a knowledge-constituting belief of the relevant argument's conclusion wholly on the basis of your distinct belief of the argument's minor premise (cf. Tucker, 2010, p. 512). So far as I can see, then, the question whether the above argument for (5) succeeds boils down to the question of how reasonable (11) is. In the balance of this section, I'll assess various considerations that bear on (11)'s truth-value.

Consider the following thesis:

(12) Necessarily, if (at t) you form a knowledge-constituting belief B1 wholly on the basis of a distinct belief B2 by competently deducing B1's content from B2's content, then (at t) B1 depends for its status as knowledge on B2.

(12) obviously entails (11). So, if we should accept (12), then we should accept (11) as well. But should we accept (12)? Martin Montminy (2014, p. 467) endorses the following thesis:

(MM) If your belief B2 contributes to your distinct belief B1's constituting knowledge, then B1 wouldn't constitute knowledge were you to cease holding B2.

According to Montminy, we should accept (MM) because it constitutes the best explanation of the putative fact that (in Extra Reasons) Smith's false belief about Nogot doesn't contribute to Smith's knowing that someone in the office owns a Ford. But (MM) entails (~12). Assume that you form your knowledge-constituting belief B1 wholly on the basis of your distinct belief B2, by competently deducing B1's content from B2's content. It's compatible with this assumption that B1 might still constitute knowledge even if you were to cease holding B2 (perhaps, for example, your source of justification for B2 is also a source of justification for B1).
By (MM), if B1 might still constitute knowledge even if you were to cease holding B2, then B2 doesn't contribute to B1's constituting knowledge. And if B2 doesn't contribute to B1's constituting knowledge, then B1 doesn't depend for its status as knowledge on B2. Hence, it's possible that your knowledge-constituting belief B1 be formed wholly on the basis of your distinct belief B2, by competently deducing B1's content from B2's content, yet not depend for its status as knowledge on B2 (~12). I'm happy to concede the datum that Montminy offers in support of (MM), namely, the thesis that (in Extra Reasons) Smith's false belief about Nogot doesn't contribute to his knowing that someone in the

office owns a Ford.9 But we should scrutinize Montminy's claim that (MM) is the best explanation of the indicated datum. Federico Luzzi (2019, pp. 17–18) attempts to undermine (MM) by suggesting the following alternative explanation of Montminy's datum:

Necessarily, S's false belief B1 contributes to a distinct belief B2's constituting knowledge only if there's no truth, T, that meets the following conditions: (i) S has B1-independent justification to believe T; (ii) B1's content doesn't support T; and (iii) T supports B2's content.

In Extra Reasons, [Havit owns a Ford] is true and meets the following conditions (here and elsewhere, "[P]" abbreviates "the proposition that P"): (i) Smith has Nogot-independent justification to believe [Havit owns a Ford], (ii) [Nogot owns a Ford] doesn't support [Havit owns a Ford], and (iii) [Havit owns a Ford] supports [Someone in the office owns a Ford]. So, Smith's false belief about Nogot doesn't contribute to Smith's knowing that someone in the office owns a Ford.

Luzzi's suggested requirement for a false belief's contributing to another belief's constituting knowledge entails (~12) and thus doesn't yield a successful defense of (12) from the (MM)-based argument against (12). To see the conflict, recall (12):

Necessarily, if (at t) you form a knowledge-constituting belief B1 wholly on the basis of a distinct belief B2 by competently deducing B1's content from B2's content, then (at t) B1 depends for its status as knowledge on B2.

As even prominent defenders of (2) concede (cf. Audi, 2011, p. 220; Klein, 2008, p. 36ff.; Montminy, 2014, p. 463ff.), there could be a knowledge-constituting belief that's (psychologically) based wholly on a false belief. Consider the inferential belief in the following case, which also looms large in the literature on the possibility of knowledge from non-knowledge:

Fancy Watch
Ted has a 7 pm meeting and extreme confidence in the accuracy of his fancy watch.
Having lost track of the time and wanting to arrive on time for the meeting, Ted looks carefully at his watch. He reasons: “It is exactly 2:58 pm; therefore, I am not late for my 7 pm meeting” … [A]s it happens it is exactly 2:56 pm, not 2:58 pm. (Warfield, 2005, p. 408)

Imagine an amplified version of Fancy Watch wherein a friend whom Ted knows to be reliable tells Ted that he's not yet late for his meeting. In this version of Fancy Watch, Ted's knowledge-constituting belief of the testimony's content is still based wholly on his false belief that it's exactly 2:58 pm. Notice that [Ted's reliable friend asserted that Ted isn't yet late for his meeting] is true and meets the following conditions: (i) Ted has watch-independent justification to believe [Ted's reliable friend asserted that Ted isn't yet late for his meeting]; (ii) [Ted's reliable friend asserted that Ted isn't yet late for his meeting] doesn't support [It's exactly 2:58 pm]; and (iii) [Ted's reliable friend asserted that Ted isn't yet late for his meeting] supports [Ted isn't yet late for his meeting]. Luzzi's suggested requirement for a false belief's contributing to another belief's constituting knowledge entails that, in this amplified version of Fancy Watch, Ted's belief of [It's exactly 2:58 pm] doesn't contribute to Ted's knowing that he's not late for his meeting. Luzzi's suggested requirement thus entails that the amplified version of Fancy Watch is a counterexample to (12)—that is, a case wherein a knowledge-constituting belief B1 is formed wholly on the basis of a distinct belief B2, via competent deduction of B1's content from B2's content, yet doesn't depend for its status as knowledge on B2.

Luzzi's attempt to undermine (MM) doesn't yield a successful defense of (12) from the (MM)-based argument against (12). Still, the (MM)-based argument against (12) fails, for (MM) doesn't well explain the fact that (in Extra Reasons) Smith's false belief about Nogot doesn't contribute to Smith's knowing that someone in the office owns a Ford. (MM) doesn't well explain the indicated fact because (MM) is false.
(MM) is falsified by the variant of Extra Reasons described above wherein Smith knows not only that Havit owns a Ford but also that Nogot owns a Ford. In that variant of Extra Reasons, Smith's belief about Havit contributes (albeit redundantly) to Smith's knowing that someone in his office owns a Ford, notwithstanding the fact that Smith might still know that someone in his office owns a Ford were he to lack his belief about Havit (given his knowledge that Nogot owns a Ford).

In addition to the (MM)-based argument against (12), Montminy (2014, pp. 468–469) also suggests an attempted counterexample to (12). Consider a variant of Fancy Watch wherein Ted's knowledge-constituting belief that he isn't late for his meeting is (psychologically) based wholly on his false belief that it's exactly 2:58 pm, yet depends for its status as knowledge (not on that false belief but instead) on his "virtual knowledge" that it's approximately 2:58 pm.10 This case allegedly illustrates the possibility of a knowledge-constituting belief B1 formed wholly on the basis of a distinct belief B2—via competent deduction of B1's content from B2's content—that doesn't depend for its status as knowledge on B2.11

According to Luzzi (2019, pp. 23–24), the proponent of this attempted counterexample to (12) is committed to the following implausible thesis:

In the attempted counterexample to (12), Ted's belief of his conclusion is based wholly on his belief of the true proposition that it's approximately 2:58 pm; however, in a variant of Fancy Watch wherein Ted accurately believes that it's exactly 2:58 pm, Ted's belief of his conclusion is instead based wholly on his belief of the true proposition that it's exactly 2:58 pm.

Writes Luzzi (2019, p. 23): "…[I]t is odd that a fact about the accuracy of the subject's watch on this occasion could be allowed to determine what premise her conclusion is based on, regardless of what she takes her basis for belief to be."

Luzzi's reply misunderstands the attempted counterexample to (12). Contrary to what Luzzi suggests, the proponent of the attempted counterexample will agree that Ted's belief of his conclusion is—in each of the two cases considered above—psychologically based wholly on his belief about the exact time (cf. Fitelson, 2017, p. 319ff.). That Ted's belief of his conclusion is so based is a stipulated detail of each case. Granted, the proponent of the attempted counterexample is committed to the thesis that how (if at all) a subject knows a particular proposition can differ across cases that are indistinguishable from the subject's perspective. But that thesis is obviously true.12

Luzzi's reply to Montminy's attempted counterexample to (12) fails. To see that Montminy's attempted counterexample fails as well, consider the following case (which blends Pritchard's (2012, p. 260) "Temp" with Luzzi's (2019, pp. 20–21) "One Short"):

Gazing upon field F, S has a visual experience as of a sheep that is in fact caused by a fake sheep. In response to his visual experience as of a sheep, S comes to justifiedly but mistakenly believe that the object he's looking at is a sheep (call this belief "B1").
In response to B1, S comes to justifiedly and accurately believe that there is at least one sheep in F. Finally, S knows—albeit merely dispositionally— that an infallible informant once asserted the following proposition: If S believes that there’s a sheep in F, then S has a safely held belief that there’s a sheep in F.13 Suppose that, in Montminy’s attempted counterexample to (12), Ted’s virtual knowledge that it’s approximately 2:58 pm contributes to Ted’s knowing that he’s not late for his meeting. If so, then—in the “fake sheep” case just described—S’s virtual knowledge that S now believes that there’s a sheep in F contributes to S’s knowing that there is now at

least one sheep in F. Intuitively, though, S doesn't (yet) know that there is at least one sheep in F. Hence, in Montminy's attempted counterexample to (12), Ted's virtual knowledge about the approximate time doesn't contribute to his knowing that he's not late. So, Montminy's attempted counterexample to (12) fails.

Having defeated a couple of instructive objections to (12), let's consider a couple of things that might be said in support of (12). Branden Fitelson (2017, pp. 320–323) endorses this thesis:

(Reasons) Necessarily, if (at t) you form a belief B1 on the basis of a distinct belief B2 by competently deducing B1's content from B2's content, then (at t) B1 owes its overall epistemic status (at least in part) to B2.14

According to Fitelson, if (Reasons) is false, then so is the following thesis:

(T) Necessarily, if S comes to believe Q by competently deducing Q from P while maintaining her knowledge that P, then S (thereby) comes to know that Q.15

But (T) is true. So, (Reasons) must be true as well. Reflection on Fitelson's (T)-based argument for (Reasons) suggests an argument for (12):

If you deny (12), then you're committed to the following possibility: S's knowledge-constituting belief B1 is formed wholly on the basis of S's distinct belief B2, via competent deduction of B1's content from B2's content, yet there's a full explanation of B1's status as knowledge that doesn't cite B2. But if you're committed to that possibility, then it seems you're also committed to this possibility: S's knowledge-constituting belief B1 is formed wholly on the basis of S's distinct knowledge-constituting belief B2, via competent deduction of B1's content from B2's content, yet there's a full explanation of B1's status as knowledge that doesn't cite B2. Anyone committed to the latter possibility must deny (T). But (T) is true. So, (12) must be true as well.

I accept both (T) and (12).
But I suspect that anyone who doubts (12) will also doubt (T). Hence, I worry that the above (T)-based argument for (12) is dialectically inappropriate. In light of this worry, I offer the following argument specifically for (11) that’s neutral with respect to (T)/(12): If (11) is false, then it’s possible that you have a belief of (c) (= I ought to believe that there’s a $20 bill on the ground) such that there’s a full explanation of its status as knowledge that doesn’t cite

anything it's based on. If it's possible that you have a belief of (c) such that there's a full explanation of its status as knowledge that doesn't cite anything it's based on, then it's possible that you have a belief of (c) such that there's a complete explanation of its status as knowledge that doesn't cite anything it's based on.16 If it's possible that you have a belief of (c) such that there's a complete explanation of its status as knowledge that doesn't cite anything it's based on, then it's possible that you have "baseless" knowledge of (c)—that is, a knowledge-constituting belief of (c) that isn't based on anything. But you couldn't have baseless knowledge of (c). So, (11) is true.

I hasten to add that this argument for (11) leaves open both (i) whether there could be baseless knowledge of other propositions, and (ii) whether there could be "direct" knowledge of (c) (that is, knowledge of (c) that doesn't depend for its status as such on any other beliefs or dispositions to believe).

Let's pause briefly to take stock. I've now explained, defended, and bolstered a Gibbons-suggested argument for the thesis that (OJ) entails the possibility of an instance of knowledge that depends for its status as such on a false belief. I submit that this argument constitutes good reason to believe that (OJ) is indeed intimately related to issues concerning the nature of inferential knowledge that this volume explores. I'll now wrap things up by (a) contrasting the Gibbons-inspired argument for (1) with an argument for (1) that may be attributed to Luzzi (2019, p. 85ff.), and (b) reflecting on the prospects of some arguments against certain of (OJ)'s competitors that parallel the argument against (OJ) that Gibbons engages.

V

Luzzi (2019, p. 85ff.) may be interpreted as arguing for (something like) (1). He writes (2019, p. 85): "…[E]pistemologists who accept that it is sometimes permissible for a subject to believe p even though p is false should [endorse 1's consequent] in the light of [cases like Fancy Watch]." Since one can have justification to believe a false proposition, (OJ)'s right-to-left conditional entails that one can be obligated to believe a false proposition. Since one is permitted to believe what one is obligated to believe, (OJ) entails that one can be permitted to believe a false proposition. Luzzi's remark thus suggests the following argument for (1):

(L1) If (OJ) is true, then (in Fancy Watch) Ted knows that he's not late for his 7 pm meeting through inference from the false proposition that it's exactly 2:58 pm.
(L2) If (in Fancy Watch) Ted knows that he's not late for his meeting through inference from the false proposition that it's exactly

2:58 pm, then there could be an instance of knowledge that depends for its status as such on a false belief.
So,
(1) If (OJ) is true, then there could be an instance of knowledge that depends for its status as such on a false belief. [L1, L2]

How does this argument for (1) relate to the argument Gibbons suggests? (5) and (L2) seem equally plausible. Therefore, if one of the arguments is dialectically stronger than the other, this will presumably be due to a relevant difference between (4) and (L1). And there is indeed a relevant difference between (4) and (L1): (4) depends on logically weaker, and thus more plausible, assumptions about non-accidental truth than does (L1).

To see the indicated difference between (4) and (L1), recall that (given (OJ)) your belief of (c) (in Counterfeit Bills 1) is made true by your source of justification for (c)—namely, your visual experience as of a $20 bill on the ground. As we saw above, the thesis that your belief of (c) is non-accidentally true follows from the conjunction of the above claim about the belief's truth-maker with the extremely plausible thesis that any belief made true by its "propositional justifier" is non-accidentally true. By contrast, in Fancy Watch, Ted's belief that he's not late for his meeting isn't justified by its truth-maker (Ted's belief about his meeting is justified by certain watch-related experiences but made true by facts about time). So, to derive the thesis that Ted's belief about his meeting is non-accidentally true, Luzzi must invoke something stronger than the thesis that any belief made true by its propositional justifier is non-accidentally true. Luzzi (2019, pp.
30–31) invokes (what’s often called) “basis-relative safety”: Ted’s belief that he’s not late for his meeting qualifies as non-accidentally true because “there is no nearby possibility where the false premise leads [Ted] to infer the same conclusion but this conclusion is false.” Recent literature on the Gettier Problem contains apparent counterexamples to the thesis that basis-relative safety suffices for non-accidental truth (cf. Coffman, 2017); (4) therefore depends on logically weaker, and thus more plausible, assumptions about non-accidental truth than does (L1). Other things equal, then, the Gibbons-suggested argument for (1) is dialectically stronger than is the Luzzi-suggested argument.

VI

The last item on our agenda concerns two of (OJ)'s competitors:

(OJT) You ought to believe that P iff (i) P is true and (ii) you have justification to believe that P.

(OWK) You ought to believe that P iff you're in a position to know that P.17

Interestingly, arguments against (OJT) and (OWK) that parallel Gibbons' suggested argument against (OJ) clearly fail. Let's see why. A parallel argument against (OJT) will involve the following premise:

(4*) If (OJT) is true, then (in Counterfeit Bills 1) you could come to know (c) through inference from (a) and (b).

(4*) is false. To see this, simply recall the details of Counterfeit Bills 1. Since (a) is false, (OJT)'s left-to-right conditional entails that (c) is false—and so, that you could not (in the circumstances) come to know (c) through inference from (a) and (b). Moreover, since (OWK)'s right-hand side entails that P is true, the same criticism applies to the proposition that results from replacing (OJT) with (OWK) in (4*)'s antecedent.

We can avoid this problem with the envisaged parallel arguments against (OJT) and (OWK) by modifying Counterfeit Bills 1 as follows:

You're looking at a genuine $20 bill on the ground which is (unbeknownst to you) surrounded by several counterfeits. Through your visual experience of the genuine $20 bill on the ground, you acquire a justified true belief that there's a $20 bill on the ground. You then reason as follows: "(a) There's a $20 bill on the ground. (b) If there's a $20 bill on the ground, then I ought to believe that there's a $20 bill on the ground. So, (c) I ought to believe that there's a $20 bill on the ground."
(Counterfeit Bills 2)

Having presented Counterfeit Bills 2, one could argue against (OJT) as follows:

(4**) If (OJT) is true, then (in Counterfeit Bills 2) you could come to know (c) through inference from (a) and (b).
(5*) If you could come to know (c) through inference from (a) and (b), then there could be an instance of knowledge that depends for its status as such on a belief that doesn't constitute knowledge.
So,

(1*) If (OJT) is true, then there could be an instance of knowledge that depends for its status as such on a belief that doesn't constitute knowledge.

(~OJT) follows from the conjunction of (1*) with this proposition:

Norms of Belief and Knowledge from Non-Knowledge 25

(2*) There couldn't be an instance of knowledge that depends for its status as such on a belief that doesn't constitute knowledge.

What should we make of this argument? The argument fails because (4**) is false. (4**) is false because, given (OJT), your belief of (c) is accidentally true and therefore can't constitute knowledge. Your belief of (c) is accidentally true (given (OJT)) because your belief of (a) is accidentally true (keep in mind that, given (OJT), (c) entails (a)).

An argument against (OWK) that parallels Gibbons' suggested argument against (OJ) but stems from Counterfeit Bills 2 fails as well. In Counterfeit Bills 2, your belief that there's a $20 bill on the ground is accidentally true and thus doesn't constitute knowledge. So, it's false that you would know that there's a $20 bill on the ground if you were to so believe. So, it's false that you're in a position to know that there's a $20 bill on the ground. So, (OWK) entails that it's false that you ought to believe that there's a $20 bill on the ground. So, (OWK) entails that you couldn't (in the circumstances) come to know (c) through inference from (a) and (b). The proposition that results from replacing (OJT) with (OWK) in (4**)'s antecedent is therefore false.

So far as I can currently see, then, while (OJ) arguably entails the possibility of inferential knowledge of a conclusion from a non-known relevant premise, neither (OJT) nor (OWK) does. Of course, each of (OJT) and (OWK) has a problem that (OJ) lacks: each entails, counterintuitively, that one is epistemically permitted to unjustifiedly suspend judgment in a case where one has justification to believe a false proposition (cf. Feldman, 2000; Gibbons, 2013).
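The propositional skeleton of the envisaged argument ((1*) from (4**) and (5*) by hypothetical syllogism, then (~OJT) from (1*) and (2*) by modus tollens) is valid and can be checked mechanically. Here is a minimal sketch in Lean; the formalization and the proposition names are mine, not the chapter's:

```lean
-- ojt:  (OJT) is true
-- know: you could come to know (c) through inference from (a) and (b)
-- dep:  there could be knowledge depending for its status as such
--       on a belief that doesn't constitute knowledge
example (ojt know dep : Prop)
    (p4 : ojt → know)   -- premise (4**)
    (p5 : know → dep)   -- premise (5*)
    (p2 : ¬dep) :       -- premise (2*)
    ¬ojt :=             -- conclusion (~OJT)
  fun h => p2 (p5 (p4 h))
```

Of course, the text's point is precisely that this valid skeleton is unsound for (OJT) and (OWK), since (4**) and its (OWK) analogue are false.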
And so, it may well turn out that—bracketing the issues concerning the nature of inferential knowledge that this volume explores—(OJ) is on balance the most plausible of our three considered answers to the question of under what conditions you should believe an important (to you) proposition that you're currently entertaining. If it turns out that—bracketing the issues that this volume explores—(OJ) is on balance the most plausible answer to that question (period), and independent investigation of the indicated issues concerning the nature of inferential knowledge doesn't yield strong reason to reject the possibility of inferential knowledge of a conclusion from a false relevant premise, then there may well be a promising (OJ)-based argument available for that possibility.

Notes

1. See Feldman (2000) and Gibbons (2013) for clarification and defense of the thesis that you're bound by some epistemic doxastic obligations.
2. As the chapter's opening line indicates, (OJ) should be read as restricted to propositions that you're currently entertaining and are important to you (cf. Gibbons, 2013, p. 5).

3. For expressions of commitment to (OJ), see Audi (2015, p. 120ff.), Feldman (2000, p. 677ff.), and Gibbons (2013, p. 10ff.). For expressions of commitment to the thesis that inferential knowledge of a conclusion requires known relevant premises, see Audi (2011, p. 213ff.), Feldman (2003, p. 36ff.), and Gibbons (2013, p. 276ff.).
4. Writes Gibbons (2013, p. 284): "…the things that justify you in believing that P also justify you in believing that you ought to believe that P."
5. Writes Gibbons (2013, p. 286): "…when justifiers are also truth makers, it's not just an accident that my belief is true."
6. Writes Gibbons (2013, p. 286): "If one and the same thing justifies two different states, the justification of neither is derived from that of the other." Multiple instances of this general thesis appear throughout pp. 285–289, including the following (p. 289): "If one and the same thing justifies your belief that P and your belief that you ought to believe that P, then we can't think of the justification of these beliefs on the model of inference from one to the other." Notably, (7) is highly similar to the thesis that Tucker (2010, p. 512) labels "TFP1" and attributes to Wright (2002).
7. This case is inspired by one that Pryor (2012, pp. 299–300) describes. For complementary discussion of several similar cases, see Tucker (2010, pp. 512–514).
8. Writes Gibbons (2013, p. 287; cf. Audi 2020, p. 75): "…I don't see why the belief that you see that P can't be justified on the basis of the fact it's about. Once you see that P, how much more do you need?"
9. According to Luzzi (2019, p. 17), a theorist "might think, quite plausibly, that the basis for Smith's [knowledge that someone in the office owns a Ford] is the combination of the false [belief about Nogot] and the true [belief about Havit] and that [the true belief about Havit] suffices to counterbalance the epistemic badness of [the false belief about Nogot] to a degree sufficient to secure knowledge of the conclusion" (my emphasis). Luzzi (2019, pp. 16–17) suggests that the above line of thought supports the thesis that Smith's false belief about Nogot contributes to Smith's knowing that someone in the office owns a Ford. So far as I see, though, all that the indicated line of thought supports is the (quite different) thesis that Smith's false belief about Nogot contributes toward, yet doesn't contribute to, Smith's not knowing that someone in the office owns a Ford. (Unlike "X contributes to Y," "X contributes toward Y" doesn't entail that Y obtains.)
10. S "virtually knows" that P iff S is disposed to form a knowledge-constituting belief of P (cf. Audi 2020, p. 187).
11. Notably, in countenancing the possibility of a case of "indirect" knowledge (that is, knowledge that depends for its status as such on some other beliefs or dispositions to believe) that doesn't depend for its status as such on anything it's based on, the proponent of this attempted counterexample joins the Holistic Coherentist (cf. Audi 2011, p. 219).
12. Notably, Luzzi (2019, pp. 112–113) elsewhere distinguishes between (a) the psychological basis of a knowledge-constituting belief and (b) the factors in virtue of which the belief constitutes knowledge. The reply just discussed in the text seems to elide this distinction.
13. A belief, B, is "safely held" iff B is true in every close possibility where B is held as it actually is.
14. Writes Fitelson (2017, p. 320): "Whenever an agent S competently deduces Q from [the proposition] that P (while maintaining her belief that P), thereby coming to believe Q, [S's belief that] P is (an epistemologically explanatorily essential) part of S's epistemic basis for her belief that Q."

15. Fitelson (2017, p. 312) calls (T) a "closure" principle. But (T) is more accurately called a "transmission" principle, since (T) concerns the acquisition (as opposed to the mere possession) of knowledge (cf. Tucker 2010, pp. 498–499).
16. For the distinction between full and complete explanation, see Audi (1993, p. 264).
17. Like (OJ), both (OJT) and (OWK) should be read as restricted to propositions that you're currently entertaining and are important to you.

References

Audi, R. (1993). The structure of justification. Cambridge University Press.
Audi, R. (2011). Epistemology: A contemporary introduction to the theory of knowledge (3rd ed.). Routledge.
Audi, R. (2015). Rational belief: Structure, grounds, and intellectual virtue. Oxford University Press.
Audi, R. (2020). Seeing, knowing, and doing: A perceptualist account. Oxford University Press.
Coffman, E. J. (2017). Gettiered belief. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 15–34). Oxford University Press.
Feldman, R. (2000). The ethics of belief. Philosophy and Phenomenological Research, 60(3), 667–695.
Feldman, R. (2003). Epistemology. Prentice Hall.
Fitelson, B. (2017). Closure, counter-closure, and inferential knowledge. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 312–324). Oxford University Press.
Gibbons, J. (2013). The norm of belief. Oxford University Press.
Klein, P. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–63). Oxford University Press.
Luzzi, F. (2019). Knowledge from non-knowledge: Inference, testimony and memory. Cambridge University Press.
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44(3–4), 463–475.
Pryor, J. (2012). When warrant transmits. In A. Coliva (Ed.), Mind, meaning, and knowledge: Themes from the philosophy of Crispin Wright (pp. 269–303). Oxford University Press.
Tucker, C. (2010). When transmission fails. Philosophical Review, 119(4), 497–529.
Warfield, T. (2005). Knowledge from falsehood. Philosophical Perspectives, 19(1), 405–416.
Wright, C. (2002). (Anti-)sceptics simple and subtle: G.E. Moore and John McDowell. Philosophy and Phenomenological Research, 65(2), 330–348.

2

We Are Justified in Believing that KFK Is Fundamentally Wrong

Peter D. Klein

2.1  Stage Setting: The "K's" in KFK

In this paper, I will show why I think we are justified in believing that KFK is fundamentally wrong. But before considering KFK, it is important to make clear what I think is the intended meaning of "knowledge" as used by both the defenders and opponents of KFK.

Let me begin by noting that "knowledge" is notoriously used in all sorts of ways. A quiz show host can appropriately say to a contestant "if you know the answer, you will win sixty-four thousand dollars" (Radford, 1966). Neither belief, nor justification, nor even truth is required for the contestant to "know" the answer – only the (alleged) truth is required. In such a case, it is clear that "knowledge" is not being used in the way that is appropriate in the context of most philosophical discussions about the nature and scope of human knowledge.

Here, I take it that our subject is the kind of knowledge that Plato characterized as the most highly prized form of true belief in the Meno (Plato), the kind of demonstrative and non-demonstrative knowledge that Aristotle was considering in the Posterior Analytics (Aristotle, 2009), the kind that Hume thought we could not have regarding cause and effect, the kind that was called "scientia" by many medieval scholars, the kind that Descartes thought we could have once we replied to the skeptic and proved the existence of an epistemically beneficent god, and the kind that the Gettier literature showed is not mere true, justified belief.1

I have called that kind of knowledge "real knowledge," using "real" in the same way that a horse auctioneer uses it when, after having sold some mediocre mounts, a stately steed is brought to the platform and she says, "Ladies and gentlemen, now this is a real horse" (Klein, 2017). The auctioneer is not implying that the other animals were fake horses. She is saying of this horse that it is an exemplar of the species.
Similarly, real knowledge is the most highly prized form of what we seek when we seek true, fully justified beliefs. That's not sufficient for real knowledge, but it is necessary. The relevant question we will be asking is this: What is required for a belief to be fully justified? This paper attempts to provide an important part of the answer.

DOI: 10.4324/9781003118701-5

But even more than true, fully justified belief is needed to handle the cases generated by the Gettier literature. This paper only briefly discusses what that "more" is.2 Our focus will be on what is required to satisfy the justification condition in knowledge. I will argue that it is doxastic justification (not mere propositional justification) that is required for knowledge, and that doxastic justification arises when the knower employs a proposition that need not itself be doxastically justified. Thus, what is fundamentally wrong with KFK is that it, at least implicitly, appeals to a transfer view of justification – a view which holds that if a person, S, acquires knowledge via inference, S does so by transferring justification from the reasons employed to the inferred proposition. My claim is that this conception of justification misconstrues how doxastic justification arises; once it is recognized what doxastic justification requires and what it does not require, it should be clear that KFK is fundamentally wrong.

2.2 Propositional and Doxastic Justification and Requirements for Knowledge

"Belief" is ambiguous. It can be used to refer to a proposition, as in "her belief is true," "her belief has the form of a subjunctive conditional," or "her belief that p follows from her belief that q." Beliefs, qua propositions, are typically the relata in inferences. That is, if you infer y from x, both x and y are propositions.3 However, "belief" can also refer to a mental state, as in "she acquired the belief while visiting her friend," "she had that belief for many years," or "because she believed she had so much to get done today, she drove to work an hour earlier than usual."

The expression "justified belief" is ambiguous because of the ambiguity of "belief." It can refer to a justified proposition or to a justified belief-state. I will adopt the terminology that Roderick Firth proposed: the former is called "propositional justification" and the latter is called "doxastic justification" (Firth, 1978). The distinction between propositional and doxastic justification is crucial in understanding any analysis of knowledge that contains a justification condition.

Restricting the scope of x to propositions known inferentially, we can define propositional justification this way:

A proposition, x, is justified for a person, S, if and only if S has all the evidence, e, sufficient to entitle her to believe that x.

And let us define doxastic justification this way:

A belief-state with x as its content is justified for S if and only if x is propositionally justified for S by some e, and e is the basis of S's believing that x.
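In schematic notation (mine, not Klein's; the predicates Has, Suff, and Basis abbreviate the prose conditions just given), the two definitions, and the one-way entailment between them that Klein goes on to stress, can be rendered:

```latex
% PJ(S,x): x is propositionally justified for S
% DJ(S,x): S's belief-state with content x is doxastically justified
% Has(S,e): S has evidence e;  Suff(e,x): e suffices to entitle S to believe x
% Basis(e,S,x): e is the basis of S's believing x
\begin{align*}
\mathrm{PJ}(S,x) &\iff \exists e\,[\mathrm{Has}(S,e) \wedge \mathrm{Suff}(e,x)]\\
\mathrm{DJ}(S,x) &\iff \exists e\,[\mathrm{Has}(S,e) \wedge \mathrm{Suff}(e,x) \wedge \mathrm{Basis}(e,S,x)]\\
\mathrm{DJ}(S,x) &\Rightarrow \mathrm{PJ}(S,x)\quad\text{(but not conversely)}
\end{align*}
```

The extra Basis conjunct is what Watson lacks in the case below: he satisfies the first biconditional's right-hand side without satisfying the second's.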

There are at least three questions one might ask: (1) What makes evidence sufficient to justify a proposition? (2) What does it mean to say that S "has" evidence? (3) What is required of S to use e as a basis for believing that x?

Answering those three questions would be a task beyond the scope of this paper. For our purposes, all that needs to be said about (1) and (2) is that any plausible account of sufficient evidence, and any plausible account of evidence available to S, will be compatible with the claims in this paper. Answering (3), however, is the primary task of Section 2.4.

At this point, what is crucial is the distinction between propositional and doxastic justification. To see that distinction more clearly, consider a significant difference between Holmes and Watson. They could have the same evidence, e, in a murder case. But, unlike Holmes, Watson does not use e to justify his believing that Moriarty is the murderer. He might not think Moriarty is the murderer, or he might think that Moriarty is the murderer for some completely foolish reason, e.g., because Moriarty's name begins with "M." In the first case, he simply doesn't have the belief that Moriarty is the murderer; in the latter case, we would say that he believes the truth but for the "wrong reasons." In both cases, Watson has propositional justification but not doxastic justification for believing the proposition Moriarty is the murderer.4

Consider this analogy: Sally is a well-trained sprinter and has all it takes to run a fast 100-meter race. That is, she is in great shape, she has practiced appropriately, she is familiar with her competition, etc. She has the potential to run an excellent race. But, of course, in order to run a great race, it is not enough that she has the potential to do so. She has to actually run the race, follow the rules (don't move before the gun goes off, stay in your lane, don't interfere with any other runner, etc.),
and get to the finish line without any other runner getting there before she does. The difference between merely having the potential to run a good race and running one is analogous to the distinction between propositional justification and doxastic justification. In order to actually run a good race, Sally must have the potential to do so. In order for S to have a justified belief with content x, S must, in general, have some evidence, e, that justifies x and must follow some “rules of inference” when deploying that evidence. (I say “in general” to allow for non-inferential knowledge briefly discussed later.) In short, doxastic justification of the belief that x entails propositional justification of x. Importantly, the converse is not true. Propositional justification does not entail doxastic justification. To sum up: In propositional justification, what becomes justified is a proposition, say x; for doxastic justification, what gets justified is a belief-state with x as its content. To see why that is important, consider this question: Which kind of justification is required in the analysis of knowledge? Should the

justification condition be understood as merely propositional justification, or should it be understood as the stronger doxastic justification? It might seem that it need not be the stronger form of justification because the tripartite analysis (insufficient though it is) already contains the belief condition; it would thus seem redundant to require doxastic justification. But note that there are two requirements for S being doxastically justified in believing that x: (1) S must believe that x, and (2) if the knowledge that x is inferential knowledge, then in order for S's belief to be justified, the evidence S employs must actually contain enough of the "right reasons" – i.e., the reasons that make the proposition, x, justified. Compare Sally again: she must employ her skills in winning – she doesn't win, even if she crosses the finish line first, if, e.g., she were discovered to have put muscle-relaxant drugs in her competitors' food!

We need not decide what in general makes a reason a "right reason" in order to see that it is not enough simply to require that S has propositional justification and that S believes the target proposition. For example, if S believes that there will be a storm tomorrow and has good reasons to believe that there will be a storm tomorrow, but does not employ those reasons to justify her belief and instead employs her belief that tea leaves predict a storm tomorrow, she lacks knowledge, because her belief-state that there will be a storm tomorrow is not justified even though its propositional content is justified.

The moral here is that the justification condition in knowledge must refer to doxastic justification, not mere propositional justification. Or more simply: knowledge that x requires doxastic justification for believing that x. That moral will play a crucial role in my argument that the KFK principle is fundamentally wrong.

2.3 Three Important but Relatively Peripheral Problems with KFK

Let me begin by stating a canonical version of the KFK Principle5:

KFK: If S knows that x via inference or reasoning from e, then S knows that e.

Using KFK and the moral from the previous section, it is clear that if KFK were true, then if S knows that x via inference from e, both S's belief-state with content x and S's belief-state with content e would be doxastically justified, and what makes believing x justified is that S employs e. "Employing" is often given a causal interpretation: the belief-state containing x is caused, at least in part, by the belief-state containing e. I will argue against that later, but at this point we should leave the interpretation of "employs" open to both causal and non-causal accounts, because in this section I will consider three challenges to KFK that do not depend upon which account is the correct one.

Of the three challenges, the first is in the literature; to my knowledge, neither the second nor the third is. I will argue that even if any of these challenges succeeds, none of them addresses what I think is fundamentally wrong with KFK. Nevertheless, they do chip away at the KFK principle, and they might loosen some of our initial intuitions about its plausibility and make its rejection a bit easier to accept.

2.3.1  Knowledge from Useful Falsehoods

Knowledge can arise from beliefs that are false and, hence, not known. There are many such examples.6 Here's one I have employed before:

The Santa Claus Case: Mom and Dad tell young Virginia that Santa will put some presents under the tree on Christmas Eve. Believing what her parents told her, she infers that there will be presents under the tree on Christmas morning. She knows that there will be presents under the tree on Christmas morning. (Klein, 2008, p. 37) [Slightly modified]

There are two strategies on offer for reacting to this as a counterexample to KFK (Borges, 2020, pp. 286–287): (1) deny the intuition that this is a case of knowledge, or (2) grant the intuition that this is a case of knowledge but claim that what makes the target proposition (t: There will be presents under the tree Christmas morning) justified is a true and justified proposition that Virginia believes (even if only dispositionally).

There is no way to defend the intuition that this is a case of knowledge except to point to the many other examples presented in the relevant literature. Maybe one of them would prompt the intuition that this is a case of knowledge in which useful falsehoods appear to provide the evidential base. The second strategy for rejecting these cases seems more promising.
What could be claimed, and was claimed by Borges in defending KFK against other proposed examples of useful falsehoods, is that there is a true, justified proposition believed by Virginia upon which her knowledge of t depends. That proposition would be:

s: Someone will put presents under the tree Christmas morning.

It is claimed that Virginia's justification of t depends upon s because if she didn't believe s, she would not be justified in believing that Santa will put presents under the tree. So, although she actually employed a false proposition in her reasoning, there is a known proposition on which her justification depends.

I don't think that strategy succeeds, because it depends upon conflating propositional and doxastic justification. For the sake of the argument here, I grant that t is propositionally justified for Virginia.7 But her belief

with that propositional content is not doxastically justified, because she did not employ any reasons whatsoever for believing t. She could employ her reasons to believe the existential generalization in order to justify that belief, but a merely potentially doxastically justified belief isn't doxastically justified (yet); and in order for her to know that someone will put presents under the tree, that belief-state must be doxastically justified. Later, we will consider two on-offer accounts of doxastic justification, and neither of them would classify her belief as justified. So, I think the useful falsehood case presents a counterexample to KFK.

Of course, the tricky problem here is to be able to distinguish cases in which useful falsehoods lead to knowledge from cases in which falsehoods preempt knowledge, as in the original Gettier Cases. I have tried to provide a way to make that distinction properly (Klein, 2008), but laying that out here is not necessary, because in all of the purported cases of useful falsehoods leading to knowledge the false proposition is, as Risto Hilpinen said, "close to truth" (Hilpinen, 1988, pp. 163–164). I suggested two criteria for closeness to the truth: (1) the false proposition entails the true one, and (2) the evidence for the false proposition is equally good evidence for the true one (Klein, 2008).8 Maybe those requirements are too stringent or too lax. The point here is that whatever the correct account of useful falsehoods is, it must capture the relevant sense of "close" to the truth.

Thus, strictly speaking, the KFK principle is false, but it is still close enough to being true that it remains a useful way of understanding knowledge. KFK could play the same role in our understanding of knowledge that the Ideal Gas Law (PV = nRT) plays in our understanding of the behavior of gases.
With a little chisholming9 and some added parameters, e.g., restricting its scope to the behavior of gases under standard temperatures and pressures and to molecules that are not highly polarized, the Ideal Gas Law remains a useful principle. Maybe the KFK principle should be seen as a useful way of understanding knowledge under ideal, standard epistemic conditions (no twins, no clever car thieves, no magicians, no malicious daemons, no red lights illuminating white widgets, etc.) and only applied to the knowledge of people who can put two and two together and are able to recognize the relatively obvious implications of what they believe. The Ideal Gas Law was not jettisoned when it didn't accurately describe the behavior of some gases under some conditions. Perhaps KFK is useful in the same way: closeness to the truth is good enough. "Horses have four legs" is useful in understanding what horses are, even though there are horses that have only three legs.

I will argue later that there is a fundamental problem with KFK because it misconstrues how doxastic justification typically arises. But before doing that, let's look at two other possible counterexamples to KFK.

2.3.2  Reductio ad Absurdum Reasoning

I haven't seen this potential problem with KFK mentioned in the literature, but it is an interesting issue that at least deserves to be mentioned.10 Suppose S believes that ~r but has not found any direct way of justifying her belief that ~r. Then she remembers something from her Logic 101 class – the method of indirect proof. Applying that method, one begins by supposing the negation of what one believes to be true and then deriving a contradiction from that supposition. So, she supposes that r. And, further, she is able to show that r entails both a proposition, say c, and its negation, ~c. Her belief that ~r becomes justified by that reasoning. (So you "can prove a negative"!)

But it is crucial to note that although she gains knowledge from that pattern of reasoning, her reasoning begins with a supposition. Supposing something is like questioning something: it is neither true nor false, and a fortiori it is not known. Of course, there are alternative ways S could have justified her belief that ~r without using the indirect method.11 My point, however, is that the method she did use resulted in knowledge. S justified her belief that ~r, at least in part, on the basis of something she did not know but only supposed.

Nevertheless, even if this example shows that in some instances some essential steps in S's reasoning are not known, it could be argued that a supposition is temporarily taking something to be true, or imagining that something is true, for the sake of the argument. Or KFK could be modified to say that if S knows that x on the basis of e, then every proposition in e is known. Either of those suggestions captures the spirit of KFK and avoids this purported counterexample.
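The indirect-proof pattern S uses (supposing r, deriving both c and ~c, and concluding ~r) can be given a minimal formal sketch in Lean; the formalization is mine, and the names r and c are schematic, as in the text:

```lean
-- Indirect proof (proof by negation): from r → c and r → ¬c, conclude ¬r.
-- The supposition of r appears only as the bound hypothesis hr; it is
-- discharged in the conclusion, never asserted, which is Klein's point.
example (r c : Prop) (h₁ : r → c) (h₂ : r → ¬c) : ¬r :=
  fun hr => (h₂ hr) (h₁ hr)
```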
My point here is that even if this counterexample shows that some chisholming is needed, there are ways available to save KFK, in the same way that the various modifications of the Ideal Gas Law preserved it as a useful general characterization of the relationships among pressure, volume, and temperature.

2.3.3  The Regress of Reasons and Equivocation on "Knowledge"

Finally, I want to briefly consider the problem of the regress of reasons. I have argued that the regress, though potentially infinite, is salutary and not at all vicious (Klein, 2007). Nevertheless, most epistemologists, suffering from infiniphobia, seek ways to terminate the chain of propositions serving as reasons by requiring that the alpha proposition is known, but not on the basis of inference from another proposition. Rationalists will claim that the first proposition is self-evident; empiricists will claim that the proposition is directly evident or known by acquaintance. That is a very rough-and-ready sketch of foundationalism, but it is sufficient to illustrate a problem for KFK as currently formulated, because

foundationalism, whether rationalism or empiricism, takes the chain of reasons to begin with a reason that is not inferred from another proposition.12 The problem that arises here is that the "K's" in KFK seem not to be used univocally: doxastic justification, which is part of the definition of knowledge, seems to be of two sorts, inferential and non-inferential. Empiricists will say that the non-inferential origin of doxastic justification occurs whenever the "external" object/fact is appropriately causally connected to the belief with the alpha reason as its content; the rationalist will claim that there is some "internal" property of the alpha reason that is intuitively apprehended.

What is noteworthy is that one of the reasons KFK seems so plausible is that it doesn't seem at all mysterious to say that inference can transfer justification from one belief to another. It seems similar to a person transferring a fire from one piece of burning wood to ignite another piece, or transferring a cold to another person by sneezing. The doxastic justification possessed by the alpha belief is transferred to the beta belief, and so on down the chain of reasons to the target belief. What does seem somewhat mysterious is how doxastic justification arises in the alpha belief. Of course, both empiricists and rationalists attempt to solve this by offering some general account of doxastic justification designed to show that both the belief with the alpha reason as its content and the subsequent beliefs are, ceteris paribus, instances of knowledge. In other words, maybe the apparent equivocation on what the "K" means can be explained away. Later, I will argue that this type of purported solution is at odds with an epistemic agent's purpose in finding and using reasons to enhance the degree of justification of a belief.
But at this point, I merely want to note that a theory that holds that there are two distinct ways in which doxastic justification, and hence knowledge, can arise is significantly more complex than a theory which only appeals to one way in which doxastic justification arises. This brings us to the key question: How does a belief become doxastically justified?

2.4 Two Accounts of Doxastic Justification: The Etiological Account and the Emergent Account

Two answers to the question posed at the end of Section 2.3 have been developed, each designed to explain how doxastic justification arises.

The Etiological Account: Doxastically justified beliefs arise only when the etiology of the belief satisfies some specified constraints. (I will focus on causal/reliabilist theories of doxastic justification because they are the most well-known.)

The Emergent Account: Beliefs become justified when and only when the believer justifies them by citing reasons which make their propositional contents justified.

Let's begin by briefly looking at three problems for the etiological account.

First, if a causal account were to require that the known object or fact be appropriately causally connected to the belief, it would be extremely difficult to explain how beliefs become doxastically justified when those beliefs are about abstract objects. Maybe there is some way to include such knowledge in these accounts, but that only deepens the problem mentioned in Section 2.3: whatever way is devised will require having two accounts of the origin of doxastic justification – one for objects that can be efficient causes (i.e., events in the world) and one for objects that can't be the efficient cause of anything (i.e., abstract objects).

Second, if doxastic justification arises (somehow) in the alpha link and is transferred to the beta link and then to the subsequent links, the degree of justification of the target belief will diminish as the chain lengthens, unless there are some deductive inferences in the chain that could restore the lost justification. However, an epistemic agent seeks and uses reasons to justify her beliefs in order to enhance the degree of doxastic justification of the belief. But if the etiological view were correct, the more beliefs added to the length of the chain, the lower the degree of doxastic justification there would be in the target belief. Stopping the inquiry for further reasons would be the best way to maintain justified beliefs! Perhaps there is some sort of theoretical epicycle that could be employed to restore the lost justification as the chain lengthens. For example, perhaps the foundationalist could require that every third link must be deductive, or that every non-deductive link in the chain must be matched by a deductive, restorative link.
Doesn’t seem very promising, does it?

Third, there is what I call the HED problem, i.e., the hazard of empirical disconfirmation (Klein, 2019, pp. 401–403). The foundationalist/reliabilist claims that a belief is doxastically justified only when the belief has an appropriate causal pedigree. For my purposes here, it doesn’t matter what makes it kosher. It could be that the known object or fact plays a causal role in the genesis of the belief. Or perhaps the belief is doxastically justified only if the belief is causally produced by a reliable process, i.e., the type of process that generates a sufficiently high percentage of true beliefs.13 Or, maybe, the process is kosher only if the belief results from an adroit exercise by an epistemically virtuous person.14 The HED problem is that as we gain more empirical knowledge about the actual etiology of our beliefs, it might just turn out that our doxastically justified beliefs do not satisfy the constraints imposed by any current etiological account.

We Are Justified in Believing that KFK Is Fundamentally Wrong 37

Here’s another way to understand the HED problem. The etiology view requires that a change in the justificatory status of a belief can occur only when there is a change in the process that produced or sustained the belief. Suppose I believe that Alabaster Toothpaste is the best one for maintaining white teeth. Suppose further that what caused my belief was a TV advertisement claiming that “four out of five NYC dentists chose Alabaster Toothpaste.” I later find out that their claim about the NYC dentists is true but misleading because the Alabaster Company carefully selected the five NYC dentists knowing full well what opinions those dentists had about the toothpaste. I learn about that very misleading advertisement in an article that, nevertheless, goes on to say that there was a well-conducted survey in NYC and 80% of the dentists did, in fact, choose Alabaster Toothpaste. There are at least three different degrees of justification for believing that Alabaster Toothpaste is the best: (1) the degree of justification when I hear the advertisement, (2) the degree of justification when I learn how the advertisement misled me, and (3) the degree of justification when I learn of the survey. The HED problem is this: Did the causes of my belief change three times? Maybe. Maybe not. That’s an empirical question, and the etiological account’s answer could simply prove to be false; that’s the hazard of empirical investigations. My point is that at least until those investigations are much further along, it would be hasty to adopt any etiological account of the distinction between doxastically justified beliefs and beliefs lacking doxastic justification.

Now let’s turn to the Emergent Account of doxastic justification and begin by noting that believing comes in degrees, from psychological certainty down to hunches. What degree of belief is required for knowledge is a vexing issue, and I will not attempt to address it here.
Let’s just take the degree of belief to be whatever is required for knowledge. The issue here is: what makes a belief doxastically justified? The answer, I think, is that the knower makes the belief justified by sincerely citing enough good reasons for the truth of the proposition contained in the target belief.15 For example, if S believes that there are vaccines which are effective in preventing most severe cases of COVID-19, S can make the belief justified by employing her reasons for her belief if those reasons make the target proposition justified. If S were to have such reasons but not employ them, the proposition that the vaccines are effective would be justified for S, but S’s believing would not be.

What reasons are available for believing that the emergent account is the correct way to understand how beliefs become justified?

First, “justify” is in a family of verbs that end in “fy.” There are about 200 such verbs in English.16 The “fy” in almost all of them can be traced to the Latin “facere,” meaning “to make.” When we certify something, we make it certified. When we rectify something, it has been made right. When we fortify something, it has been fortified. There is even the verb

“fishify,” which means to transform something into a fish! (I have no idea how or why you would do that, but that’s what it means!) If I were to utter the sentence “I plan to greenify my backyard,” every competent English speaker would know what I meant, even though there is no word “greenify” in English. In short, “justify” belongs to the class of completion verbs as opposed to process verbs. Once S employs the proper reasons for her belief, S has made the belief justified.17

As far as I know, the distinction between process and completion verbs was first noted by Aristotle:

Since of the actions which have a limit none is an end but all are relative to the end … but that movement in which the end is present is an action…. Of these processes, then, we must call the one set movements, and the other actualities. For every movement is incomplete – making thin, learning, walking, building; these are movements, and incomplete at that. For it is not true that at the same time a thing is walking and has walked, or is building and has built, or is coming to be and has come to be …, or is being moved and has been moved, but what is being moved is different from what has been moved, and what is moving from what has moved. But it is the same thing that at the same time has seen and is seeing, or is thinking and has thought. The latter sort of process, then, I call an actuality, and the former a movement. (Aristotle, 1941, pp. 1048b17–34)

In all the “fy” verbs whose etiology includes “facere,” the act contains the completion of a process. For example, consider these verbs: certify, magnify, clarify, simplify, purify. That they signify the completion of the act is in the etiology of the words. If we xxxfy something, we make it xxxfied. In other cases, like some of the completion verbs Aristotle mentions, it is not in the etiology of the word, but the test that he proposes does seem to work. The test is whether, when you xxxfy, you have also xxxfied.
What is crucial to note is that prior to S’s justifying the belief, S could have the belief and could have good reasons for believing the proposition, but until those reasons are employed, the belief, though potentially justifiable, is not justified. The proposition is justified; the belief – the believing – is not.

This points to the primary difference between the etiological and emergent accounts of doxastic justification. The etiological account would say that the belief is justified solely on the basis of its causal history. For example, if the process that resulted in the belief is a reliable one, the belief is ipso facto justified. There is no need for S to justify the belief. It is already justified. All the justifying we do is superfluous. If an etiologist wanted to determine whether a particular belief she has is justified, she should examine whether the process that produced the belief is a reliable

one. If an emergentist wanted to determine whether her belief is justified, she should examine the reasons she has for the propositional content of the belief to determine whether those reasons are sufficient to justify that content. I think it is clear that the emergentist account of doxastic justification provides a more accurate description of our actual epistemic practices.

Second, the emergent account is not subject to the HED problem. For whether S is justified in believing some proposition, x, does not depend upon the causal history of the belief. No matter what that history is, if S sincerely deploys enough good reasons for her belief (either to herself and/or others), the belief is justified. There is no HED. Are the reasons good enough to justify the proposition? That is not an empirical question. It is a normative one.

Third, the emergent account has a straightforward way of explaining why the Brain-in-the-Vat (BIV) has (or at least can have) justified beliefs even though the causal process that results in its beliefs is about as unreliable as such a process could be. If the BIV justifies its beliefs by marshaling its reasons for the propositional content of its beliefs, then the BIV has doxastically justified beliefs. We don’t have to add an epicycle to a reliabilist account enabling that account to handle such cases. That is, we don’t have to index reliability to our world and classify beliefs obtained by the BIV as reliable because such a process is reliable in our world even though it is not reliable in the BIV’s world.

Fourth, the emergentist account is better at describing how I sometimes come to some of my beliefs. Maybe I’m idiosyncratic, but I often come to believe things that I don’t yet have any good reasons for believing. Wishful thinking is a case in point. For example, at various times, I believed that Trump would become presidential.
I had no good reasons for believing that and still don’t. Wishful thinking is not a reliable way to come to believe something. Nevertheless, in some distant possible world, there are many occasions on which he is presidential. In that world, I would have good reasons for the belief even if wishful thinking caused the belief. My point is that those reasons were acquired after I had the belief. The etiology of the belief did not involve those reasons as a cause of my belief. The cause was wishful thinking.18

2.5 Why We Are Justified in Believing that KFK Is Fundamentally Wrong

This section is mercifully short. KFK requires that if we have inferential knowledge that x based on evidence e, we must also know that e. Knowing that e entails that the belief that e is doxastically justified. The emergent account of doxastic justification is the correct account of doxastic justification. That account does not require that the beliefs containing the reasons we employ to justify x are, themselves, doxastically

justified. Just as a magnifying glass need not itself be magnified, the sandbags we use to fortify a beach from erosion need not themselves be fortified, and the mask used to horrify an audience need not itself be horrified, so the beliefs with the reasons we use to justify our target beliefs need not be justified. Maybe in some cases we have justified some of those beliefs, or in some cases, maybe we have even justified all of the beliefs with the reasons as their contents. But requiring this in every case of knowledge, as KFK does, does not correctly describe our epistemic practices.

I suspect that when the defenders of KFK require that the reasons must be known, they are thinking that (mere) propositional justification is required for the reasons. I grant that the propositions which serve as reasons for our beliefs could be propositionally justified. That is, I grant that we could have all the evidence it takes to be entitled to believe the reasons. But in order for those reasons to be used to justify a target belief, it is not necessary that the beliefs with the reasons as their contents be doxastically justified. Recognizing the distinction between doxastic justification and propositional justification, and that doxastic justification is required for knowledge, justifies the belief that KFK is fundamentally wrong.

There is one objection presented by Borges to that claim that should be addressed, because it seemed correct to me at one time. But I think discussing it illustrates why I now think KFK is fundamentally wrong. It goes like this:

KFK is entailed by [the undefeated justified true belief account] of knowledge because if one fails to know a premise, x, on which one’s conclusion depends, then there is a truth ~k (namely, “S does not know that x”) that is such that the conjunction of ~k and those premises fails to justify one’s belief in the argument’s conclusion.
More specifically, ~k defeats one’s justification because either one does not believe that x, x is false, or x is not justified for one; the truth of any of those claims entails ~k and is sufficient, according to Klein, to defeat one’s justification.19 (Borges, 2020, p. 285) [slightly modified]

There are two reasons for thinking that ~k is not a defeater: (1) S can acquire knowledge from a useful falsehood, so it is not true, strictly speaking, that if x is false, S does not know the conclusion of the argument. But I grant that x must be close enough to the truth. So I don’t think that is a fundamental problem with KFK. (2) The belief with x as its content need not be doxastically justified in order for S to know the conclusion of the argument. In other words, it is not true that if the belief-state containing x is not doxastically justified, then S does not know the conclusion of the argument. That’s why we are justified in believing that KFK is fundamentally wrong.


Notes

1. I say “Gettier literature” rather than “Gettier Cases” because I think that at least one of the original Gettier Cases fails to provide a counterexample to the TJB account of knowledge (Klein, 2021).
2. Of course, whatever the “more” is could be added to the justification condition. For example, only justifications that cannot be defeated could be taken to satisfy the justification condition. For clarity of exposition, I will keep the “more” separate. But an issue related to the no-defeat condition will be considered near the very end of the paper when discussing an argument that has been given for KFK by Rodrigo Borges (Borges, 2020).
3. Of course I grant that we say such things as “She inferred he was happy because she saw his broad smile.” I take that as shorthand for “She inferred he was happy from he has a broad smile.”
4. For the sake of grammatical naturalness, in what follows, I will use “the belief that ...”, relying on the reader to disambiguate “belief” except in those cases in which either the context does not provide sufficient clarity or in order to emphasize the point at issue.
5. This is my gloss of the principle as stated by Rodrigo Borges (Borges, 2020, p. 283).
6. See Klein (2008) and Warfield (2005) for other counterexamples.
7. I doubt that Virginia must believe that someone will put presents under the tree. Virginia might have all the concepts necessary to understand and believe the proposition Santa will put presents under the tree, but she might not yet have the concept of “someone,” i.e., existential generalization.
8. I should note that in that article I gave a causal account of doxastic justification. As should be obvious, I no longer think that is the correct way to characterize doxastic justification.
9. “Chisholming” was coined by Daniel Dennett in his The Philosophical Lexicon as follows: chisholm, v. To make repeated small alterations in a definition or example.
“He started with definition (d.8) and kept chisholming away at it until he ended up with (d.8iiiiiiii)” (Dennett, 1987).
10. That sentence was true when I wrote it, but since submitting this paper I read a forthcoming paper by Matt Leonard that does argue that reductio reasoning can lead to knowledge (Leonard, forthcoming).
11. For example, she could have reasoned this way:
   1. r -> (c & ~c)
   2. ~(c & ~c) -> ~r
   3. (c v ~c) -> ~r
   4. c v ~c (if your system has this as an axiom)
   5. ~r
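Readers who want to check the little derivation in note 11 mechanically can do so in a proof assistant. The following sketch (my addition, not part of the original notes) is written in Lean 4; it proves ~r directly from premise 1, rather than taking the classical route through contraposition and excluded middle used in the note:

```lean
-- If r entails a contradiction (c ∧ ¬c), then ¬r.
-- ¬r unfolds to r → False, so we assume r and derive False.
example (r c : Prop) (h : r → c ∧ ¬c) : ¬r :=
  fun hr => (h hr).2 (h hr).1
```

Note that this direct proof is intuitionistically valid, whereas step 4 of the derivation in the note (c v ~c) is a distinctively classical axiom.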

12. Following Sosa, I think contemporary coherentism is a form of foundationalism in which what makes a proposition justified is that it is a member of a comprehensive set of coherent propositions (Sosa, 1980). It is the fact that the proposition is a member of such a set that makes the individual propositions justified. Thus, this can be seen as a type of foundationalism. Also see Klein (2007, pp. 15–16).
13. For a full discussion of the causal/reliabilist account, see Goldman (1986).
14. The virtue epistemology accounts (Sosa, 2007; Zagzebski, 1996) are difficult to categorize. If the character of the knower is an efficient cause, even in part, of a belief being doxastically justified, then they are instances of the etiological accounts. But, maybe character is not an efficient cause (see Klein, 2019, pp. 408–410).

15. The requirement that S be sincere is needed to prevent cases of rationalization and deceit from qualifying as cases in which the belief becomes justified.
16. See https://www.wordmom.com/verbs/that-end-with-fy. (I note that “signify” and “rigidify” are not on the list.) There are at least two on the list that do not trace back to “facere.” Those are “defy” and “affy.”
17. Some relatively contemporary work on this class of verbs can be seen in Vendler (1957), Comrie (1976), and Ryle (1949).
18. It could be claimed that in this case the original cause of my belief was wishful thinking, but it was replaced by a new cause, namely Trump’s otherworldly presidential behavior. And similarly, in the three degrees of justification, perhaps it could be claimed that the original cause was the TV advertisement and it was replaced by two successive sustaining causes. Those moves merely reinforce the HED problem.
19. I use “e” where Borges uses “x.”

References

Aristotle. (2009). Posterior analytics. In R. McKeon (Ed.), The basic works of Aristotle (pp. 110–187). Random House.
Aristotle. (1941). Metaphysics. In R. McKeon (Ed.), The basic works of Aristotle (pp. 689–934). Random House.
Borges, R. (2020). Knowledge from knowledge. American Philosophical Quarterly, 57(3), 283–297.
Comrie, B. (1976). Aspect. Cambridge University Press.
Dennett, D. (1987). The philosophical lexicon (8th ed.). Tufts Digital Library.
Firth, R. (1978). Are epistemic concepts reducible to ethical concepts? In A. Goldman & J. Kim (Eds.), Values and morals (pp. 215–229). D. Reidel Publishing Company.
Goldman, A. (1986). Epistemology and cognition. Harvard University Press.
Hilpinen, R. (1988). Knowledge and conditionals. In J. E. Tomberlin (Ed.), Philosophical perspectives (Vol. 2, pp. 157–182). Ridgeview Publishing Company.
Klein, P. (2007). Human knowledge and the infinite progress of reasoning. Philosophical Studies, 134(1), 1–17.
Klein, P. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–61). Oxford University Press.
Klein, P. (2017). The nature of knowledge. In C. de Almeida, R. Borges, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 35–56). Oxford University Press.
Klein, P. (2019). How to get certain knowledge from fallible justification. Episteme, 16, 395–412.
Klein, P. (2021). Justification closure rightly constrained, Gettier cases, and skepticism. In E. Alves, K. Etcheverry, & J. R. Fett (Eds.), Socratically: A festschrift in honor of Claudio de Almeida (pp. 423–450). EDIPUCRS.
Leonard, M. (forthcoming). Knowledge, false belief, and reductio. Inquiry: An Interdisciplinary Journal of Philosophy.
Plato. (1997). Meno. In J. Cooper (Ed.), Complete works (pp. 870–897). Hackett Publishing.
Radford, C. (1966). Knowledge – By examples. Analysis, 27, 1–11.
Ryle, G. (1949). The concept of mind (pp. 149–153). Barnes & Noble.

Sosa, E. (1980). The raft and the pyramid. Midwest Studies in Philosophy, 5, 3–25.
Sosa, E. (2007). A virtue epistemology. Clarendon Press.
Vendler, Z. (1957). Verbs and times. The Philosophical Review, 66(2), 143–160.
Warfield, T. (2005). Knowledge from falsehoods. Philosophical Perspectives, 19, 405–416.
Zagzebski, L. (1996). Virtues of the mind: An inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge University Press.

3

No Knowledge from Falsity

Fred Adams

3.1 Introduction

Knowledge is of the truth.1 Truth is of the world. Truth involves representation of reality. The world (what is represented) is what makes things true. So knowledge is of the world. What is false is not part of the world—not part of what is the case.2 The false representation may be part of the world, but not what it represents. Hence, I maintain that what is false does not deliver knowledge. It cannot be the thing that elevates a belief from being a mere belief, or even a mere true belief, to being knowledge of the world.3 How could it? How could it give to something else (a belief, a believer) something it does not itself have? My answer is that it could not, and that wherever that appears to be happening, upon closer inspection, something else is going on to give the illusion of knowledge being generated via falsity.

In what follows, I will treat knowledge as the end product of information flowing from the world to the knower. Like electricity in a circuit, knowledge flows only when information flows. I shall also follow the view (Dretske, 1981) that the information that p cannot be false. So my guiding principle is that knowledge is information-actuated belief. When S knows that p, S’s belief that p is informed by the information that p. S receives the information that p when, based upon some local empirical situation e, the conditional probability of p, given e, is 1.4 (I shall limit my discussion to empirical knowledge only.)
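The information condition just stated can be put in symbols. This is only a restatement of the sentence above, not notation drawn from Dretske’s text:

```latex
% S receives the information that p when, given the local empirical
% situation e, the conditional probability of p on e is maximal:
\Pr(p \mid e) = 1
```

Anything short of probability 1 on e would, on this view, leave open the possibility that p is false, and the information that p cannot be false.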

3.2 Knowledge from Falsity

DOI: 10.4324/9781003118701-6

Recently, there have been several cases where it is claimed that there is knowledge from falsity. I will list some examples and discuss some cases. But first, what does it mean to say there is knowledge from falsity? No one has claimed that S can know that p where p is false. So that is not being claimed. Hence, if a false belief is to be involved in the acquisition of knowledge, it must be involved in some other way. What other ways are possible? This may not be exhaustive, but the false belief may be causally relevant or inferentially relevant (these may

be the same thing, but perhaps not always), or there may be some other dependence relations such that the true belief that turns into knowledge depends upon the false belief in some non-trivial way.

In what follows, I will not dispute that a belief that is known can causally or inferentially depend upon a false belief (or state—there may be misperceptions, illusions, hallucinations that are not beliefs but play some causal role in arriving at a belief, even a true belief).5 I will maintain that what turns the true belief into knowledge is not the false belief or mental state, nor the dependency upon the false state, but some genuine information that flows to the knower in the given situation.6

3.2.1 LSD

Suppose Ken has been drugged and has been hallucinating under the influence of lysergic acid diethylamide (LSD). So when the drug wears off, Ken is still a bit afraid to trust his own eyes. Nurse Angie walks in and Ken asks: “Am I okay now?” He thinks he sees a lizard on the floor. Angie has no idea. She just took over Betty’s shift and doesn’t know Ken’s condition. As it happens, the drug wore off completely hours ago while Ken was asleep, and it is indeed a lizard he sees upon the floor. Angie says: “Yes, I know: you’re fine now.” Ken would not have believed his own eyes had she not said this (without actually knowing). So Ken’s true belief that there is a lizard on the floor depends upon his false belief that Angie knows his current medical condition.

I think in this case Ken does know there is a lizard on the floor. He may have needed a false belief in order to trust his eyes and acquire the true belief that there is a lizard on the floor. But the false belief did not lead to his knowledge.
That came from the veridical receipt of visual information about the whereabouts of the lizard.7 The point of the example is just that one’s coming to know can depend upon other beliefs that are false, without the knowledge thereby coming “from falsity” (in the relevant sense). There was no flow of information through the false state to the true belief—and even if there were, it would be the flow of information, not the false beliefs, that was the source of knowledge.8

3.3 Responding to Warfield9

In “Knowledge from Falsehood” (2005), Ted Warfield argues that there are cases in which one may obtain knowledge of a proposition by inferring it from a falsehood (Warfield, 2005, p. 412), where this inference is not merely the causal basis for believing the known proposition but its epistemic basis, or as he puts it, that the belief in the false proposition has an “epistemizing role” (Warfield, 2005, p. 409). Warfield proceeds by giving five examples in which this is apparently the case. He then argues that four “resistance strategies” fail. These are attempts to show that the

epistemic basis for the proposition that one knows is not a false belief but some other true belief. We will argue that there is a simple resistance strategy that succeeds.

Consider just one of Warfield’s examples, Handout:

Counting with some care the number of people present at my talk, I reason: ‘There are 53 people at my talk; therefore my 100 handout copies are sufficient.’ My premise is false. There are 52 people in attendance—I double counted one person who changed seats during the count. And yet I know my conclusion. (Warfield, 2005, p. 408)

The first resistance strategy is that there is a true proposition “somewhere in the neighborhood,” for example that there are approximately 53 people at my talk, that I am justified in believing and that I am disposed to believe. Warfield rejects this because “mere dispositions to believe cannot play an epistemizing role in an inferential argument; allowing them to do so grossly over-ascribes inferential knowledge” (Warfield, 2005, p. 410).

The second resistance strategy is that I have a justified true dispositional belief somewhere in the neighborhood, “suitable for epistemizing” my conclusion. For example, I have a justified true dispositional belief that there are approximately 53 people at my talk, and if I were to infer from this that my 100 handout copies are sufficient, then I would know that these are sufficient. Warfield rejects this because it is supposed to incorrectly predict cases of knowledge. For example, a detective has the true belief that Jones is the murderer and also has the true belief in a forensic report that strongly indicates that Jones is the murderer but believes that Jones is the murderer solely because he infers this from the false and delusional belief that Jones confessed to the murder.
The detective does not know that Jones is the murderer, but “there is a true and evidentially supported dispositional belief in the neighbourhood (about the contents of the forensic report) suitable for epistemizing the belief that Jones is the murderer” (Warfield, 2005, p. 411).

The third resistance strategy is that I have a justified true dispositional belief “somewhere in the neighborhood” entailed by the falsehood that I actually believe. My justified true dispositional belief is the inferential basis for my knowledge although my false belief is its causal basis. For example, my false belief that there are 53 people at my talk is what makes me believe that my 100 handout copies are enough, but I know that they are enough because my justified true dispositional belief that there are approximately 53 people at my talk is my inferential basis for believing that they are enough.

Warfield rejects this because it incorrectly predicts knowledge in Gettier cases. Seeing a toy in the yard that looks like a dog, I have the false belief that there is a dog in the yard and then infer from this that there is an animal in the yard. My belief that there is an animal in the yard is true because there is a squirrel in the yard that I cannot see behind a bush. My false belief that there is a dog in the yard is what makes me believe that there is an animal in the yard. But I also have the true dispositional belief that there is a dog or a squirrel in the yard, and this entails that there is an animal in the yard. But I do not know that there is an animal in the yard.

The fourth resistance strategy is that I have a true dispositional belief that is supported by the evidence for my false belief. For example, I have a true dispositional belief that there are approximately 53 people at my talk that is supported by my careful counting, which is also evidence for my false belief that there are 53 people at my talk. Warfield rejects this again because of Gettier cases. I have the true dispositional belief that there is a dog or a squirrel in the yard that is supported by my seeing what looks like a dog in the yard, which is also evidence for my false belief that there is a dog in the yard.

While we agree that I know that my 100 handouts are enough, there is something deeply counterintuitive in the idea that one can acquire knowledge on the basis of false belief. It seems like epistemic black magic that one’s false belief could be transmuted into one’s secure possession of truth. Accordingly, we suggest a simple resistance strategy that has been overlooked. This is that I have a true and justified dispositional or even occurrent belief “somewhere in the neighborhood” that I in fact use as the basis for believing what I know.
The question uppermost in my mind when I begin counting the number of people at my talk is whether my 100 handouts are enough. So before I finish my count, I will surely form the belief that there are fewer than 100 people at my talk. Since the question of whether my 100 handouts are enough is still one of which I am aware, it is plausible that this belief (that there are fewer than 100 people at my talk) is occurrent. It is also plausible that I infer from this that my 100 handouts are enough. This is what gives me knowledge that they are enough.

I am not merely disposed to form the belief that there are fewer than 100 people at my talk, which avoids the objection to the first resistance strategy. I believe that my 100 handouts are enough because I infer this from my justified true belief that there are fewer than 100 people at my talk, rather than from a belief that is delusional and false. This avoids the objection to the second resistance strategy.

How does our resistance strategy avoid the objection to the third strategy? In fact, the objection is flawed. Warfield characterizes the third strategy as the claim that “one has knowledge despite an involved falsehood if there is a justified and (at least) dispositionally believed truth

entailed by the falsehood that serves as the premise in one’s inferential argument” (Warfield, 2005, p. 411, our italics). But in his objection that this allows knowledge in his Gettier case, he describes that case as merely one in which “there is a justified and dispositionally believed truth that is … entailed by my false belief” (Warfield, 2005, p. 412, our italics). Given that the believed truth (that there are approximately 53 people at my talk, or that there is a dog or a squirrel in the yard) is entailed by the false belief (that there are 53 people at my talk, or that there is a dog in the yard), it does not follow that the justified and (at least) dispositionally believed truth (that there are approximately 53 people at my talk, or that there is a dog or a squirrel in the yard) serves as the premise in one’s inferential argument (that my 100 handouts are enough or that there is an animal in the yard), nor conversely.

Moreover, surely I cannot use the proposition that there is a dog or a squirrel in the yard as a premise in any argument that there is an animal in the yard, because I have no idea whether there is a squirrel in the yard. I have no thoughts of squirrels at all, the only one being concealed from my view behind a bush. That is precisely why it is a Gettier case, since the fact that makes my belief that there is an animal in the yard true is one of which I am unaware. Even if our resistance strategy were to succumb to Warfield’s objection, we could easily stipulate that it does not apply to cases in which one’s belief is Gettiered. What makes me believe that there is an animal in the yard is my belief that there is a dog in the yard, but what makes my belief true is that there is a squirrel in the yard concealed from my view behind a bush.
In contrast, what makes me believe that my 100 handouts are enough is my belief that there are fewer than 100 people at my talk, and that there are fewer than 100 people at my talk is also what makes it true that my 100 handouts are enough. We have described this basis as my inference from my belief that there are fewer than 100 people at my talk to my belief that my 100 handouts are enough. Does this mean that no false belief is involved? No, because when I finish my count I may also form the false belief that there are 53 people at my talk and also infer from this that my 100 handouts are enough. This is not the source of my knowledge. I knew before I finished the count that my 100 handouts are enough because I realized before then that there are fewer than 100 people at my talk. This is the basis of my knowledge that my 100 handouts are enough. Going on to confirm that they are enough by reasoning that there are exactly 53 people at my talk is a rather neurotic attempt to confirm what I already know, rather like acquiring the knowledge that my door is locked by accepting testimony that I know is utterly reliable but then still checking for myself that the door won’t open without the key. Our resistance strategy—that I have a true and justified dispositional or even occurrent belief “somewhere in the neighborhood” that I in fact

No Knowledge from Falsity 49

use as the basis for believing what I know—also avoids the objection to the fourth resistance strategy—that I have a true dispositional belief that is supported by the evidence for my false belief. This is simply because our resistance strategy makes no mention of evidence for a false belief. In any case, as we have just noted above, we could stipulate that it does not apply to cases in which one’s belief is Gettiered. The upshot is that Handout provides no reason to think that one may obtain knowledge of a proposition by inferring it from a falsehood. What remains is to show that this is also true of Warfield’s other four putative examples of such cases.

3.3  Knowledge despite Falsity

Since Williams and I contemplated this strategy, I have learned that many others have as well and that it now goes by the name “knowledge despite falsehood.”10 As it turns out, I now reject the particular version of this strategy that we articulated above and, hence, several of the other versions going by this name. I’ll explain my departure and give my revised version and then discuss some of these other attempts and why I reject them.

3.3.1  Strategy Revised

As noted in the endnote, this was a first pass at a response to Warfield’s examples by John Williams and myself. I now think this kind of strategy won’t work. Any stipulation about “non-Gettiered” cases seems to me ad hoc. And I now find the appeal to dispositional (or other non-occurrent) beliefs to be problematic. What is more, others in the literature find them to be problematic as well and I will discuss some of these issues below. Here is a modified case that does not seem to exploit either a justified occurrent true belief or a dispositional belief. Ted asks his grad student Alice to go count the audience prior to his talk to see if his 100 handouts are enough. Alice miscounts (actually 52 people) and reports that there are 53 people in the audience.
Ted reasons that there are 53 attendees, so his 100 handouts are sufficient.11 Hence, my new way of handling Warfield’s Handout example is this. While Ted miscounts the people in attendance and has the false belief about the absolute number of attendees, he receives the correct information about the relative number of attendees to the number of handouts: namely, that he has more handouts than attendees. This information and the ensuing belief are correct and inform his true belief that his 100 handouts are sufficient.12 This reply does not depend on hidden, dispositional, or other non-conscious states in any way. Yet it allows for his false belief without yielding the result that the false belief is what gives Ted his knowledge. Instead, it is the genuine information about the relative value of attendees to handouts that gives Ted knowledge.

Here is another of Warfield’s original examples and an explanation of how this new strategy handles it. Ted’s doctor has ordered that he get at least 8 hours sleep per night. He knowingly goes to bed at 11 pm. He wakes up and sees the clock reading ‘2:30 am’. He reasons: (premise) I’ve been asleep 3 hours, so (conclusion) I haven’t slept the mandated 8 hours. My premise is false. I’ve forgotten that it’s the night of the (Fall) time change and I have a clock that automatically resets in the appropriate way at the appropriate time and it has already done this. So I’ve been asleep 4 hours (we’re at ‘2:30 am’ for the 2nd time). Despite all of this complexity (and, indeed, partly because of it) Ted still knows his conclusion. (Warfield, 2005) In this case, we will assume that, the “Fall back” time change aside, the clock is working reliably. Despite Ted’s mistake about how long he has slept, his reckoning is off by only 1 hour (not enough to be close to an 8-hour mistake on Ted’s part). Hence, when Ted looks at the clock and believes he has been asleep only 3 hours, he forms a false belief. Nonetheless, given that his reckoning is off by only 1 hour, he receives the correct relative information that the number of hours he has slept is far fewer than 8 hours. This is true and genuine information. This information allows Ted to know that he has not slept the mandated 8 hours. He does acquire from the clock the information that the number of hours he has slept, x, is smaller than the number of hours he is supposed to sleep, y: x < y. He does not acquire the correct absolute elapsed time from the clock but he does acquire the correct relative time difference (x < y).

3.4  Ball and Blome-Tillmann

I now want to turn to the views of Ball and Blome-Tillmann because they offer a view somewhat similar to the one Williams and I first tried.13 But I think their view faces difficulties similar to those facing our initial attempt, and some more besides. Ball and Blome-Tillmann (2014) argue that there is sensitivity to subconscious beliefs and this is why S knows in the handouts case. I like their reliance on “sensitivity,” “information” and “tracking,” but not their appeal to subconscious beliefs. As I pointed out with the counterexample to my own initial view, their view is vulnerable to it as well. They also point out that S receives the information that there are “approximately 53 people in the room.” And, of course, this is the information that there are fewer than 100 people in the room—which is all S needs to know he has enough handouts. This last move is, I believe, similar to my revised strategy of basing S’s knowledge on the relative value of

the information about the number of handouts relative to the number of persons in the audience. Of course, they do not spell things out exactly as I do in terms of relative vs. absolute informational values, but I think they are on that track (except for their appeal to subconscious states). However, they do make a somewhat odd claim that I think is totally wrong and that I want to point out now.14 First I must give the example from Warfield that they examine—Fancy Watch. Ted has a 7 pm meeting and extreme confidence in the accuracy of his fancy watch. Having lost track of the time and wanting to arrive on time for the meeting, he looks carefully at his watch, and reasons: ‘It is exactly 2:58 pm; therefore, I am not late for my 7 pm meeting’. Again he knows his conclusion, but as it happens it’s exactly 2:56 pm, not 2:58 pm. (Warfield, 2005) As should be clear by now, my current view agrees that Ted knows he is not late for his 7 pm meeting. And he knows this despite his false belief that it is actually 2:58 pm. He can know this because his mistake is only 2 minutes and his fancy watch is correctly giving him the information that the time of his meeting, x, is significantly later than the time on his watch, y. He is receiving the correct information that x > y (and he is not late for his meeting), despite his false belief about the actual time. Ball and Blome-Tillmann deny that S’s beliefs in Handout and Fancy Watch are causally based upon the false beliefs, and they hold that if they were, these would not be cases of genuine knowledge. They say: Obviously, there are cases in which subjects really do (causally) base certain beliefs inferentially on explicitly held false beliefs: but we think that such cases will not involve knowledge of the conclusions; and, moreover, if they are described appropriately, we will not be tempted to think of those conclusions as known (that is, we will not have the intuition that they are known).
(Ball & Blome-Tillmann, 2014, p. 556) I disagree. I won’t go in detail here through their reasoning about why they think Ted’s knowledge is not causally based on his false beliefs in Handout and Fancy Watch. I won’t because the causal role of the false belief is not at issue. What is at issue is whether there are beliefs actuated or sustained by genuine information leading to true beliefs despite the false beliefs and their causal role in arriving at the true beliefs. It seems clear to me that Ted’s beliefs are causally based upon the false beliefs in both of these examples (Handouts and Fancy Watch). What is more, in my opening case of Ken, it is clear that he would not have believed his

own eyes, had he not believed (falsely) that Angie knew his present medical condition. Furthermore, it is easy to generate cases of knowledge despite causal basing upon false beliefs: Walt steps on his scale Monday morning. His scale (which, unknown to him, is off by 1.5 pounds) reads 150 lbs. On Friday, he steps on his scale and it reads 165 lbs. He reasons, “I’ve gained weight.” Now despite both of his beliefs about the absolute value of his weight being false, he acquires the correct relative information that he has indeed gained weight. And he would not have believed this had he not believed the two scale readings. So I agree with Ball and Blome-Tillmann that there is knowledge in Handouts and Fancy Watch, and I agree with them that these are not cases of knowledge from falsity. However, I disagree that we need to appeal to there being no causal connection between the knowledge and the false beliefs. And instead of their appeal to tacit, dispositional, or subconscious true beliefs to support the knowledge, all we need to show is that the knowledge is supported by genuine information and truth (not falsity). This I have done by appeal to the information the knower (Ted) receives, despite his false beliefs.

3.5  Buford and Cloos

Buford and Cloos (2018) lay out a significant problem for attempts such as my (and Williams’s) first one, and for others who adopt non-conscious beliefs in their knowledge despite falsity strategy. They note: Consider the possibility that the falsehood does not play a role in the inference. If this is the case, the proponent of the knowledge despite falsehood strategy must posit a known proposition responsible for the inferential knowledge. However, if that known proposition is implicitly believed, then it is incapable of playing a role in inference, as the mind does not possess the belief in a way that it can be readily accessed or deployed in inference. (Buford & Cloos, 2018, p. 4)

tacit knowledge. This deficiency in their argument opens them to an objection that if the true proposition is implicitly believed, then it is inferentially inert. (Buford & Cloos, 2018) Their long and detailed account of several cases and the problems for this particular version of the “knowledge despite falsehood strategy” is quite compelling. It is for similar reasons that I have abandoned this appeal to non-conscious states in my resistance strategy. Now they don’t deny that falsehoods can play a role, just that if they are inferentially “inert,” they will not do Warfield’s “epistemizing” work. Of course, if they are even causally inert, then they are no help at all (despite Ball and Blome-Tillmann’s attraction to that possibility). In my view, the falsehoods can be causally relevant without, as Warfield says, “epistemizing.” The suggestion that there will always be a known proposition that serves as the actual basis for the inferred belief is problematic. It is not enough to always be able to identify a known proposition that evidentially supports the inferred belief. Instead, in order for the account to answer the dilemma, it must always be the case that there exists a distinct known proposition that serves as the actual inferential basis for the known inferred proposition. Determining whether the subject’s belief that p is caused by the falsehood q or is inferred from the known proposition t appears to be an almost impossible epistemic task, especially given the possibility of subconscious or tacit beliefs. (Buford & Cloos, 2018, p. 7) While I’m not at all sure I agree with Buford and Cloos that reasoning always must be at a fully conscious level, this is not a problem for my approach, which sidesteps such states. (I believe the discovery of the benzene ring—where Kekulé dreamt of a snake chasing its tail—might have been non-conscious inference.
And then there is the work of Helmholtz, where non-conscious inferences rule the day in perception.) On my strategy, however, the relevant true beliefs are fully conscious. In the end, Buford and Cloos say this: We have suggested that understanding false beliefs as playing a tethering role allows us to see the vital role false beliefs might play in the generation of genuine inferential knowledge. (Buford & Cloos, 2018, p. 16)

Unfortunately, they introduce this term toward the very end of their paper and sadly do not say enough about “tethering” for me to tell just how “vital” these false beliefs are supposed to be. If they are causal only, then they don’t satisfy the “epistemizing role” needed for genuine knowledge from falsity.

3.6  A Few More Examples

In this section, I will present a few more examples claiming to be cases of knowledge from falsity. There are more cases in the literature than I will discuss. My goal is not to give an exhaustive list, but a representative list. This has been done before (Hawthorne & Rabinowitz, 2017). The goal is to represent a pattern in these examples and then analyze what generates knowledge in them. Is it really falsity? As I have argued, it is not—at least it is not the case that what is false generates knowledge in any given case. Knowledge there may be, but not due to what is false even if what is false plays an etiological role in the formation of the belief.

Flying Home

With hopes of getting him to attend a party in Providence on Saturday night, Jaegwon Kim asks Christopher Hill what he’s doing on Saturday. Hill replies ‘I’m flying to Fayetteville on Saturday night’ and the conversation ends. Kim, recalling that Hill taught for many years in Fayetteville, Arkansas, reasons as follows: ‘Hill will be in Arkansas on Saturday night; so, he won’t be at my party Saturday night’. Kim knows his conclusion, but his premise is false: Hill is flying to Fayetteville, North Carolina. (Warfield, 2005)

My analysis: Kim knows that Hill won’t be at the party Saturday night. Kim falsely believes that Hill will be in Fayetteville, Arkansas (when he will actually be in Fayetteville, North Carolina). Nonetheless, the reason Kim knows Hill will not be at the party is that he receives the information that the distance between where Hill will be and where Kim’s party will be is so great that Hill will not be at the party.

Breaking News

CNN breaks in with a live report. The headline is ‘The President is speaking now to supporters in Utah’. I reason: ‘The President is in Utah; therefore, he is not attending today’s NATO talks in Brussels’.
I know my conclusion but my premise is false: the President is in Nevada—he is speaking at a ‘border rally’ at the border of those two states and the speaking platform on which he is standing is in Nevada. The crowd listening to the speech is in Utah. (Warfield, 2005)

This example is so similar to the last that I don’t think I need to say more about it. Ted receives the information that the president is not attending today’s North Atlantic Treaty Organization meeting (despite his false belief that the President is actually in Utah).

Chain of False Beliefs

s1 says that Tom, Dick and Harry are on the bus. s2 and s3 say the same thing. Knowing that they are honest friends, Ted believes them and correctly deduces that there are at least three people on the bus, something which happens to be true. Unbeknownst to Ted, neither Tom nor Dick nor Harry is on the bus—the people on the bus are all wearing masks and are hard to recognize. Ted’s conclusion belief that there are at least three people on the bus seems to qualify as knowledge. And it is based on conclusive evidence—if it were not the case that there are at least three people on the bus, then it would not be the case that s1, s2 and s3 say that Tom, Dick and Harry are on the bus. Hence, Ted received the information that there were at least three people on the bus.15

Consistent Liar

When Bertha was a teenager, she suffered a head injury while ice skating and, shortly afterwards, became quite prone to telling lies, especially about her perceptual experiences involving wild animals. After observing this behavior, her parents became increasingly distressed and, after consulting various psychologists and therapists, finally took her to see a neurosurgeon, Dr. Jones. Upon examining her, Dr. Jones noticed a lesion in Bertha’s brain which appeared to be the cause of her behavior, and so it was decided that surgery would be the best option to pursue. Unfortunately, Dr. Jones discovered during the surgery that he couldn’t repair the lesion—instead, he decided to modify her current lesion and create another one so that her pattern of lying would be extremely consistent and would combine in a very precise way with a pattern of consistent perceptual unreliability.
Not only did Dr. Jones keep the procedure that he performed on Bertha completely to himself, he also did this with the best of intentions, wanting his patient to function as a healthy, happy, and well respected citizen. As a result of this procedure, Bertha is now—as a young adult—a radically unreliable, yet highly consistent, believer with respect to her perceptual experiences about wild animals. For instance, nearly every time she sees a deer, she believes that it is a horse; nearly every time she sees a giraffe, she believes that it is an elephant; nearly every time she sees an owl, she believes that it is a hawk, and so on. At the same time, however, Bertha is also a radically insincere, yet highly consistent, testifier of this information. For instance, nearly every time she

sees a deer and believes that it is a horse, she insincerely reports to others that she saw a deer; nearly every time she sees a giraffe and believes that it is an elephant, she insincerely reports to others that she saw a giraffe, and so on. Moreover, because of her consistency as both a believer and a liar, those around her do not have any reason for doubting Bertha’s reliability as a source of information. Indeed, in her home community, she is regarded as one of the most trustworthy people to consult on a wide range of topics. Yesterday, Bertha ran into her next door neighbor, Henry, and insincerely though correctly reported to him that she saw a deer on a nearby hiking trail. Since, in addition to his trust in Bertha, it is not at all unlikely for there to be deer on the hiking trail in question, Henry readily accepted her testimony. (Lackey, 2006, pp. 82–83) Now Lackey used this and other examples to test views about knowledge via testimony. I want to modify it slightly and use it in this context. Suppose Bertha is on a hike through an elaborate animal preserve. She sees a deer (believes “horse”), a giraffe (believes “elephant”) and an owl (believes “hawk”). Lackey’s point is that one can learn from her words that there was a deer, giraffe and owl, due to the consistency and reliability of her mistakes, but she herself cannot know these things because of her errant beliefs.16 However, suppose we ask Bertha, “How many animals did you see on your walk?” Bertha can know that she saw three, despite her false beliefs about what kinds of animals she saw. So this too, it seems, qualifies as a case of knowledge via (despite) falsity. How does she know? Because she would not have believed she saw an “elephant,” “hawk” and “horse” unless she had seen a deer, giraffe and owl. So her belief that she saw three animals rests on having received that information—despite her cognitive/perceptual oddity.

3.7 Conclusion

In this chapter, I have argued for rejecting the view that there is knowledge from falsity—knowledge where falsity actually “epistemizes” (to use Warfield’s useful phrase). I reviewed but then rejected a very popular “resistance” strategy, shared by myself (with John Williams) and several others, which exploits non-conscious beliefs. I don’t think these are needed and they are not explanatorily transparent. I rejected an attempt to block knowledge from falsity by claiming false beliefs cannot be causally relevant to knowing—they can be. And lastly, I gave several examples to explain how my current resistance strategy (appeal to information-actuated belief) works on standard kinds of cases from the literature. In all cases where there is knowledge, there is belief based upon information.


Notes

1. I could not have written this paper without the support and encouragement of the following people: John Williams, Rodrigo Borges, and John A. Barker.
2. See Adams (2011) for more on empty names and the semantics of fiction.
3. Warfield (2005) uses the term “epistemize.”
4. There must also be a stable set of channel conditions for information to flow, but I will not go through all of the details. See Dretske (1981).
5. See also Fitelson (2010).
6. See also Adams and Barker (2020) and Adams et al. (2017).
7. The conditional probability of there being a lizard on the floor, given Ken’s visual experience (and under his present stable LSD-free condition), is 1.
8. Jennifer Lackey (2006) has several examples where information may flow through false links. I’ll go through at least one of them below.
9. I am dedicating this paper to the memory of my good friend John Williams. John invited me to Singapore (2016) in his last years there to rub my nose in his (and Neil Sinhababu’s) “backward clock” counterexample to tracking theories. Though we intensely disagreed over that, we agreed on no knowledge from falsity and agreed to write a joint paper to that effect. John died before we could get very far, but the very first part of this section was written by John, and hence I left it much as it was.
10. See Montminy (2014) and Buford and Cloos (2018). See also Adams et al. (2017), where we use the expressions “knowledge via falsehood” and “knowledge from falsehood,” and where we give an account consistent with “inference” from the falsehood, not just causal basing.
11. This case was suggested by John A. Barker.
12. I won’t say this every time, but the conditional probability that he has enough handouts, given the relatively greater number of counted handouts to counted audience members, is 1. I will give an alternative formulation of acquiring the information.
Call this case Ted 2 and the original Handouts case Ted 1: Ted asks his grad student Alice to go count the audience prior to his talk to see if his 100 handouts are enough. Alice miscounts (actually 52 people) and reports that there are 53 people in the audience. Ted reasons that there are 53 attendees, so his 100 handouts are sufficient. Unbeknownst to Ted, Alice went to the wrong lecture hall, which happens to contain 52 people, just as the correct lecture hall does. In Ted 1 but not in Ted 2, Ted has what can be called conclusive evidence that he has more handouts than attendees, for if it were the case that he does not have more handouts than attendees, then it would be the case that there are more than 100 attendees in the lecture hall where he is giving his talk and Alice wouldn’t have reported that there are 53 attendees there.
13. See Montminy (2014) for a similar version of this strategy.
14. See also Montminy (2014) for a similar negative reaction to this claim of Ball and Blome-Tillmann.
15. Please note there are no plausible candidates for “tacit,” “dispositional,” or “non-conscious” beliefs that qualify as knowledge to enable Ted to acquire inferential knowledge that there are at least three people on the bus. Hence, there is another mark against that resistance strategy.
16. Lackey points out: “The first point to notice about CONSISTENT LIAR is that even though Bertha is a radically unreliable believer with respect to her animal sightings, she is nonetheless an extremely reliable testifier of this information—indeed, even more reliable than many average testifiers who frequently exaggerate, distort, or are simply wrong in their reports about what is true” (Lackey, 2006, p. 83).


References

Adams, F. (2011). Sweet nothings: The semantics, pragmatics, and ontology of fiction. In F. Lihoreau (Ed.), Truth in fiction (pp. 119–135). Ontos Verlag.
Adams, F., & Barker, J. A. (2020). Dretskian externalism about knowledge. In P. Skokowski (Ed.), Information and mind: The philosophy of Fred Dretske (pp. 11–45). CSLI.
Adams, F., Barker, J. A., & Clarke, M. (2017). Knowledge as fact tracking true belief. Manuscrito: Revista Internacional de Filosofia, 40(4), 1–30.
Ball, B., & Blome-Tillmann, M. (2014). Counter closure and knowledge despite falsehood. The Philosophical Quarterly, 64(257), 552–568.
Borges, R., de Almeida, C., & Klein, P. D. (Eds.) (2017). Explaining knowledge. Oxford University Press.
Buford, C., & Cloos, C. M. (2018). A dilemma for the knowledge despite falsehood strategy. Episteme, 15(2), 166–182.
Dretske, F. (1981). Knowledge and the flow of information. MIT Press.
Fitelson, B. (2010). Strengthening the case for knowledge from falsehood. Analysis, 70(4), 666–669.
Hawthorne, J., & Rabinowitz, D. (2017). Knowledge and false belief. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge. Oxford University Press.
Lackey, J. (2006). Learning from words. Philosophy and Phenomenological Research, 73(1), 77–101.
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44(3–4), 463–475.
Warfield, T. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.

4

Harmless Falsehoods
Martin Montminy

DOI: 10.4324/9781003118701-7

4.1 Counter-Closure

According to a popular principle in epistemology, we cannot gain knowledge from non-knowledge. We must know the premises of our inference if we are to know the conclusion. Several attempts have been made to capture this principle more carefully. Federico Luzzi, who calls this principle counter-closure, offers a useful review of how various influential epistemologists have stated and endorsed the principle (2019, pp. 1–3).1 In this essay, I will rely on Brian Ball and Michael Blome-Tillmann’s (2014, p. 552) version:

(CCK) If (i) S knows q, and (ii) S believes q solely on the basis of a competent deduction from some premises including p, then (iii) S knows p.

CCK, or counter-closure for knowledge, is adapted from Luzzi (2010), where a single-premise version is proposed. Since the cases I will consider involve inferences from more than one premise, Ball and Blome-Tillmann’s multi-premise version is preferable. Condition (ii) needs unpacking. The conclusion q of the inference must be based on a set of premises that includes p. This basing relation is not merely causal, but also epistemic. Now, there are many rival accounts of the epistemic basing relation. For our purposes, we do not need to explore and assess them.2 We can instead rely on intuition, since we have a reasonable sense of when this relation obtains. The set of premises must provide epistemic support for, or justify, q. Moreover, we are concerned with doxastic justification (“Is S justified in believing that q on the basis of S’s belief that p?”) rather than propositional justification (“Is q justified on the basis of p?”). A second key notion in (ii) concerns the fact that the inference to q should be based solely on a set of premises that includes p. In other words, there should be only one inferential path leading to q. In the literature on counter-closure, different locutions have been used to capture

this condition. Ted Warfield notes that the known premises should be relevant:

Rutgers Case. I reason (having seen each enter the room a minute ago)—“Jerry Fodor is in the room, Steve Stich is in the room, Colin McGinn is in the room, Brian McLaughlin is in the room; therefore, at least one Rutgers philosopher of mind is in the room.” Assume that my premises are all true except one: McGinn has, after entering, stepped out to take a phone call. I know my conclusion despite the false premise about McGinn. Here’s a first pass at an explanation: the McGinn premise is not “relevant” in this case to the particular conclusion. (2005, p. 405)

The Rutgers Case is clearly not a counterexample to CCK. Although one inferential path leading to the conclusion is based on a falsehood, the McGinn premise, three other paths not involving any falsehood also support the conclusion. The McGinn premise is thus not essential to the conclusion. When the belief that p is not essential in producing the belief that q, there is a line of thought, or evidential path, which does not involve the belief that p but which produces the belief that q. Advocates of knowledge from falsehood (hereafter KFF) hold that there are genuine counterexamples to CCK (Klein, 2008; Luzzi, 2010, 2019; Warfield, 2005). In these cases, a person is said to know a conclusion that is essentially based on a false premise. Before I present an alleged case of KFF, I will carefully examine an instance of what I will call a harmless falsehood. In such a case, the subject draws an inference based on a false premise, knows her conclusion, but does not know this conclusion on the basis of the false premise. I will identify some key features of this type of case. Then, in the rest of the essay, I will present my response to alleged cases of KFF, focusing on a specific one.
My argument will be that if this case is supplemented with plausible assumptions, then the subject’s knowledge is based not on false belief but on tacit knowledge. The falsehood is thus harmless. On the other hand, if these plausible assumptions are withheld, then the subject lacks knowledge of the truth inferred from the falsehood. The alleged case of KFF is actually a Gettier case.

4.2  Harmless Falsehood

To get a better grasp of the issue and the key notions it involves, let us consider this well-known case, which I have slightly modified for reasons that will become apparent later. Extra Reasons. Smith has made a bet that someone in her office owns a Ford. She has two independent sets of reasons for thinking that she

will win. One set has to do with Nogot. She has seen Nogot drive a Ford, and heard him say he owns a Ford, and so on. Unfortunately, Nogot is merely pretending. But Smith has equally strong reasons having to do with Havit. And Havit is not pretending. Havit owns a Ford, and Smith knows that he owns a Ford.3 Intuitively, Smith knows that someone in her office owns a Ford (s) and thus knows that she will win her bet (w). Yet, she inferred this conclusion from two premises, namely Nogot owns a Ford (n) and Havit owns a Ford (h), the first of which is false. Extra Reasons is similar to Warfield’s Rutgers Case and is not a counterexample to CCK. As Richard Feldman puts it, “Smith has two independent lines of thought that lead to the same conclusion. One line of thought, concerning Nogot, does depend on a false proposition. The other line of thought, involving Havit, does not depend on anything false. In this case, Smith’s belief that someone in the office owns a Ford does not essentially depend upon the falsehood. This is because there is a justificatory line that ignores the falsehood” (2003, p. 36). Because of the two justificatory lines leading to the conclusion, Smith’s false belief that n is not essential in producing the conclusion that w. This is thus a case of harmless falsehood: Smith does not know that w, even partly, in virtue of her false belief that n. Smith’s false belief that n is not essential because Extra Reasons involves causal overdetermination: both the belief that n and the belief that h cause the belief that w. This does not mean that the event consisting in Smith’s acquiring the belief that w is caused by both beliefs. We may suppose that when she is wondering whether she will win the bet, Smith first recalls evidence about Nogot. Her remembering this makes her draw the conclusion that w. Only after drawing this conclusion does Smith also remember her evidence about Havit.
This second memory event does not cause her to form the belief that w, since she has already formed that belief at this point. However, Smith's belief that h is a sustaining cause of her belief that w. This idea can be captured by the following counterfactual test: thanks to her belief that h, Smith would still believe that w if she no longer had the belief that n.

More generally, we may say that a belief that p is essential in producing a subject S's belief that q just in case S believes that q but would no longer do so if the belief that p were simply removed from S's set of beliefs. This test for essentiality concerns the production of belief. We may adopt a similar test regarding the production of knowledge: a belief that p is essential in producing a subject S's knowledge that q just in case S knows that q but would no longer do so if the belief that p were simply removed from S's set of beliefs.4

In applying this counterfactual test, we should simply remove the belief that p. To appreciate this point, consider the following objection:

62  Martin Montminy

"Suppose one has obtained the belief that Fa and the belief that Fb from a single testimonial source. And imagine that were one to have distrusted the source with regard to Fa one would also have distrusted the source with regard to Fb. Here the belief that Fa is essential by the counterfactual test" (Hawthorne & Rabinowitz, 2017, p. 329).

Contrary to intuition, the counterfactual test seems to entail that the belief that Fa is essential to arriving at the conclusion that something is F. However, this is not how the test should be construed. Let us suppose, for the sake of the argument, that the world Hawthorne and Rabinowitz describe is the nearest one in which the person does not believe that Fa. Hence, the counterfactual "If the person did not believe that Fa, she would not believe that something is F" is true. However, moving to this world involves more than simply removing the belief that Fa, since it also adds doubt about the source. When applying the counterfactual test, we should perform a "surgical" removal of the belief that p, one that modifies the person's set of beliefs and other mental states as little as possible. Since it is possible to remove the belief that Fa without removing the belief that Fb, and vice versa, neither belief is essential to the conclusion that something is F.5

Extra Reasons is also a case of epistemic overdetermination. By assumption, both Smith's belief that n and her belief that h are justified. Moreover, both of her inferences are competent. Peter Klein writes, "the cognition is held to be both evidentially and causally overdetermined. The true proposition (Havit owns a Ford) and the false proposition (Nogot owns a Ford) each separately fully justifies the known proposition that someone in the class owns a Ford, and, further, both the true and the false belief are sufficient causes in the actual causal chain that results in the cognition" (2008, p. 42).
The fact that Extra Reasons involves epistemic overdetermination suggests the following position. One may contend that even though the false belief that n is not essential to Smith's knowledge that w, Extra Reasons is an instance of KFF, since Smith's knowledge is based on her false belief. Just because Smith's knowledge that w is based on the true belief that h does not mean that it is not also based on the false belief that n. Since Extra Reasons is a case of epistemic overdetermination, knowledge is actually based on both beliefs. On this view, Smith's knowledge owes its epistemic status to a false belief, even though this false belief is not essential to her knowledge. We may call Extra Reasons a case of knowledge from inessential falsehood (KFIF). This position is endorsed by at least two proponents of KFF (Coffman, 2008, p. 191; Luzzi, 2019, pp. 16–17).

First, I should note that a case of KFIF is not a counterexample to CCK. Proponents of KFIF would grant that Smith does not believe that w solely on the basis of a competent deduction from some premises that include n, since she also believes that w on the basis of a competent deduction from h. For this reason, one may think that the view that there are cases of KFIF is not especially exciting. However, the view does hold that knowledge may be based on false belief. The thesis that an inessential falsehood is not necessarily a harmless falsehood is worth exploring, it seems to me.

Let us recall why Extra Reasons involves epistemic overdetermination. Smith justifiably believes that w based on her (justified) belief that n and also based on her (justified) belief that h. A pair of counterfactual tests confirms this point. If the justificatory line connecting h to w were blocked (by, say, removing her belief that h), Smith would still justifiably believe that w, thanks to the justificatory line connecting n to w. Similarly, if the justificatory line connecting n to w were blocked, Smith would still justifiably believe that w, thanks to the justificatory line connecting h to w. This shows that Extra Reasons is an instance of justificatory overdetermination. However, advocates of KFIF hold that Extra Reasons involves not just justificatory overdetermination, but also knowledge overdetermination: on this view, Smith knows that w based on her belief that n and also based on her belief that h.

Is this assessment correct? No.6 Let us apply our counterfactual test, this time focusing on knowledge instead of justification. Suppose the inferential line connecting h to w is blocked. To make things vivid, imagine that Smith does not believe that h because she has never seen Havit drive a Ford, has never heard Havit claim he owns a Ford, and so on.7 In other words, we are asked to imagine the following case:

Faulty Reasons. Smith has made a bet that someone in her office owns a Ford. She has only one set of reasons for thinking that someone in her office owns a Ford. This set has to do with Nogot. Nogot says he owns a Ford, and so on. However, Nogot is merely pretending.
Moreover, unbeknownst to Smith, Havit does own a Ford.8

Does Smith know that w in Faulty Reasons? No: Faulty Reasons is a Gettier case. About a slight variant of this case, Klein writes, "Even though the belief that someone in the [office] owns a Ford is doxastically justified and true, it is not knowledge. In such a case, [Smith does] not have knowledge; [she] arrived at the truth only by a lucky break. [She] was lucky to arrive at the truth because there is a genuine defeater of [her] justification—namely, Nogot does not own a Ford" (2008, p. 36). It is thus incorrect to hold that Smith knows that w based on her false belief that n.

It is worth noting that if we were to block the inferential line connecting n to w instead, then Smith's knowledge that w would be unaffected: her belief (and knowledge) that h would be a sufficient basis for her knowledge that w. This shows that although Extra Reasons involves justificatory overdetermination, it does not involve knowledge overdetermination. Smith's knowledge that w owes its epistemic status solely to the inferential path involving h. Extra Reasons is thus not a case of KFIF. It remains to be seen whether cases of KFIF are possible. I will consider this question in Section 4.4.

4.3  An Alleged Case of KFF: First Horn of the Dilemma

It is now time to examine Fancy Watch, an alleged case of KFF.

Fancy Watch. I have a 7 pm meeting and extreme confidence in the accuracy of my fancy watch. Having lost track of the time and wanting to arrive on time for the meeting, I look carefully at my watch. I reason: "It is exactly 2:58 pm; therefore I am not late for my 7 pm meeting." Again I know my conclusion, but as it happens it's exactly 2:56 pm, not 2:58 pm. (Warfield, 2005, p. 408)9

In Fancy Watch, Ted believes the falsehood that it is exactly 2:58 pm (f) based on what his watch indicates and his belief that his watch is accurate. From this falsehood, he infers that he is not late for his 7 pm meeting (m). Intuitively, Ted knows that m.

A couple of remarks are in order. First, in a typical case, a person would not form the belief that m by reasoning their way to a conclusion on the basis of an occurrent belief or judgment that f. More plausibly, upon seeing the time his watch shows, Ted would directly form the belief that m, without undergoing the kind of conscious train of reasoning described in Fancy Watch. A quick glance at his watch would generate a host of beliefs in Ted, including the belief that f and the belief that m; however, the formation of the latter belief would not be mediated by a previous belief that f.10 Nevertheless, for the sake of the argument, we should grant that Fancy Watch accurately describes Ted's conscious thought process.

Still, one may wonder what Ted's belief that it is exactly 2:58 pm amounts to. Surely, he does not believe that it is 2:58 pm ± 0.000001 s. By the time he has reached his conclusion, it is already past that time. Let us just stipulate that by "exactly 2:58 pm," we mean something like "2:58 pm ± 1 s." Assuming that his reasoning takes less than one second, Ted should feel confident that his premise remains true at the moment he forms the belief that he is not late for his 7 pm meeting.
There is a second feature of Fancy Watch that makes the case unusual. We should assume that Ted has no independent evidence regarding what time it is. In other words, he does not (and cannot) rely on daylight, how hungry he feels, his “body clock,” and so on. To form the belief that m, he relies solely on his beliefs about his watch: the time it shows, how accurate it is, and so on.

My response to the claim that Fancy Watch is a case of KFF invokes what has come to be known as the proxy premise strategy (Ball & Blome-Tillmann, 2014; Borges, 2017; Montminy, 2014; Schnee, 2015). According to my version of the strategy, there is a true proposition t, a proxy premise that is somewhere in the neighborhood of f, which Ted tacitly knows. Ted's knowledge that t introduces an independent inferential path leading to the conclusion that m. Ted knows that m in virtue of this inferential path rather than in virtue of the inferential path that contains the falsehood f. Ted's knowledge that m thus owes its epistemic status to his knowledge of the proxy premise t rather than to his belief that f. To be sure, the proxy premise strategy grants that the falsehood plays a role in the inference to the conclusion. Ted's false belief that f is not inferentially inert.11 As will gradually become clear, Fancy Watch is very much like Extra Reasons. Recall that in Extra Reasons, the falsehood that n does play a role in the inference to the conclusion that w.

My version of the proxy premise strategy relies on the notion of tacit belief (and knowledge). We have many antecedently held but non-occurrent beliefs: that London is the capital of England, that John F. Kennedy was assassinated, that gold has the atomic number 79, and so on. We may call these "dispositional beliefs." In this sense, "A subject dispositionally believes P if a representation with the content P is stored in her memory or 'belief box' […] When that representation is retrieved from memory for active deployment in reasoning or planning, the subject occurrently believes P. As soon as she moves to the next topic, the occurrent belief ceases" (Schwitzgebel, 2019, p. 2.1). This is not the type of belief I invoke. The proxy premise strategy invokes propositions that a person never consciously entertains, yet plausibly believes and knows.
For example, I (plausibly) believe that London is bigger than my backyard and that my dogs were not alive when John F. Kennedy was assassinated, even if these propositions have never occurred to me. Moreover, looking at a nugget of gold, I (plausibly) believe that it is worth more than a lump of bubble gum of the same size, even though the thought does not cross my mind. Such beliefs are sometimes called "implicit" or "tacit" (Schwitzgebel, 2019, sec. 2.2.1). I believe these propositions, even though I do not explicitly represent them, because they are obvious to me, given the beliefs I explicitly represent, my perceptual experiences and my memory.12

What is the proposition Ted tacitly believes (and knows) in Fancy Watch? There are many candidates: that it is about 2:58 pm, that it is earlier than 3:30 pm, that it is earlier than 4 pm, and so on. This means that there are more than two independent inferential paths leading to the conclusion that m. However, to keep things simple, I will focus on only one. Very plausibly, Ted tacitly believes (and knows) that it is 2:58 pm ± 10 min. (t). Proposition t, which, for convenience, I will equate with the proposition that it is approximately 2:58 pm, will serve as our true proxy premise.

I should emphasize that to say that Ted tacitly knows that t is not to say that he is in a position to know that t.13 According to the proxy premise strategy, Ted knows that t, even though he does not consciously entertain that proposition. Recall the nugget of gold case. I know that the nugget of gold is worth more than a lump of bubble gum of the same size (g), even though that thought does not cross my mind. The idea is not that if I were to consider the question whether g, I would form the belief (and knowledge) that g. I already know that g before I consider the question. My answer to the question would simply be the expression of an antecedently held belief. Here, I am following our common practice of counting people as believing (and knowing) many things they are not explicitly representing (Dennett, 1978; Field, 1978; Lycan, 1986). I am also following the methodology presupposed by proponents of KFF themselves, since they rely on intuition and common sense to hold that Ted knows that m. Proponents of KFF cannot hold that common sense is a reliable guide to what Ted knows when it comes to the proposition that m, but not when it comes to the proposition that t.

I should note, however, that ultimately my defense of CCK does not hinge on the claim that there is such a thing as tacit knowledge, and that Ted knows that t. In Section 4.5, I will examine a position that combines KFF with a denial that knowledge may be tacit. But for now, let us grant that tacit knowledge is possible.

Why should we think that Ted believes (and knows) that t? First, Ted believes that f, and t obviously follows from f. Ted knows that if the time is exactly t1, then it is approximately t1. However, if f were Ted's only evidence for t, then there would be only one path leading to the conclusion that m. According to the proxy premise strategy, Ted has independent evidence for t.
Ted believes that t because he tacitly believes (and knows) the following conditional: (even) if Ted's watch is not exactly accurate, it is approximately so (c). Ted plausibly knows that c, because typically, a watch that is not exactly accurate is either a little slow or a little fast, but not wildly inaccurate.

This means that there are two justificatory lines leading to Ted's belief that m. First, Ted sees that his watch indicates 2:58 pm. Given that he believes that his watch is accurate, he forms the belief that it is exactly 2:58 pm (f), and then concludes that he is not late for his 7 pm meeting (m). Second, Ted sees that his watch indicates 2:58 pm. Given that he (tacitly) believes that if his watch is not exactly accurate, it is approximately so (c), he (tacitly) believes that it is approximately 2:58 pm (t) and thus believes that m. This second line of inference does not contain the falsehood f. Hence, since Ted's belief that t provides independent support for his belief that m, his belief that f is inessential.

This account of Fancy Watch does not entail that Ted is confused about his reasons for believing that m, or that these reasons are obscure to him.14 The account provides an adequate explanation both from a psychological and from an epistemological point of view. To appreciate this point, we may imagine a conversation Ted could have after he has looked at his watch:

CHALLENGER: How did you know that you're not late for your 7 pm meeting?
TED: Because it was exactly 2:58 pm. That's what my watch showed.
CHALLENGER: But what if your watch is not accurate and it was not exactly 2:58 pm?
TED: My watch was and is accurate. But even if my watch turned out not to be exactly accurate, it would still be approximately so. I'm thus confident that in the unlikely event that it was not exactly 2:58 pm, it was at least approximately 2:58 pm. So, regardless, I am not late for my 7 pm meeting. I knew, and still know that.

On the current construal, when Ted says, "If my watch turned out not to be exactly accurate, it would still be approximately so" and "In the unlikely event that it was not exactly 2:58 pm, it was at least approximately 2:58 pm," he is reporting on beliefs he already has, as opposed to expressing beliefs he just acquired. The conversation also shows that Ted does not have to doubt the accuracy of his watch in order to have an independent line of reasoning for his conclusion.15 In general, one need not doubt one's reason for a conclusion in order to endorse additional, independent reasons for that conclusion. In Extra Reasons, Smith could say, "How do I know that I win the bet? Well, Nogot owns a Ford. And even if I'm wrong about that, I still know that I win, because Havit also owns a Ford."

This proposed account of Fancy Watch, which rests on plausible assumptions, entails that Ted's false belief that f is not essential in producing his knowledge that m. Ted would still believe (and know) that m if the belief that f were simply removed from his set of beliefs. Because Ted believes that c, he does not need to rely on his belief that f to believe that m. Hence, like Extra Reasons, Fancy Watch involves a harmless falsehood.

4.4  An Alleged Case of KFF: Second Horn of the Dilemma

Proponents of KFF are free to stipulate the various details of Fancy Watch however they wish. After all, it is their case. Perhaps they would be unhappy with a key assumption I made in the previous section. They may insist that we should not assume that Ted tacitly knows that if his watch is not exactly accurate, it is approximately so. To avoid confusion, let us call this version of the case Fancy Watch*. In Fancy Watch*, Ted has no opinion about how close to the correct time his watch would be if it turned out not to be exactly accurate. Perhaps he is so confident in the accuracy of his watch that he has never considered this question. For all he knows, if this were to happen, his watch could be approximately accurate, but it could also be wildly inaccurate. If this is true of Ted, then he has only one inferential path leading to his belief that m. Ted's belief that f is thus essential in producing his belief that m: Ted would no longer believe that m were the belief that f simply removed from his set of beliefs.

Does Ted know that m in Fancy Watch*? No. The case has the hallmarks of a Gettier case, as it exhibits a "double-luck" structure (Zagzebski, 1994). First, Ted forms a justified belief that f, but an element of bad luck makes it such that f is false. It is mere bad luck that Ted's watch happens to be inaccurate. Second, an element of good luck counteracts the bad luck: Ted's belief that m happens to be true. For all Ted knows, given that his watch is inaccurate, it could be before 7 pm, or it could be after 7 pm: it is a matter of luck that it is before 7 pm.

This assessment of Fancy Watch* gains further support from a comparison with Faulty Reasons. Recall that in Faulty Reasons, Smith has only one justificatory line for her belief that someone in the office owns a Ford and that she will thus win the bet. This justificatory line is based on the false belief that Nogot owns a Ford. The similarities between the two cases are worth spelling out.

Basic evidence: Ted's evidence consists in what his watch currently indicates, that is, 2:58 pm, as well as what he has been told about his watch's accuracy. Smith's evidence consists in the fact that she has seen Nogot drive a Ford and heard him claim that he owns one. Unfortunately, both agents' evidence is misleading.

False belief: Ted falsely (but justifiably) believes that it is exactly 2:58 pm, and Smith falsely (but justifiably) believes that Nogot owns a Ford.

Proxy proposition: In each case, there is a true proxy proposition that can be inferred from the falsehood: that it is approximately 2:58 pm in Ted's case, and that someone in the office owns a Ford in Smith's case.

Conclusion: Each agent infers a true conclusion from their false belief. Ted infers that he is not late for his 7 pm meeting; Smith infers that she will win the bet.

A key feature of both cases is worth emphasizing: the agent's belief in the proxy proposition is entirely dependent on their false belief. This is because each agent lacks a belief in a key conditional that would introduce an independent inferential line leading to their conclusion.

Ted does not believe that if it is not exactly 2:58 pm, then it is (at least) approximately 2:58 pm. Smith does not believe that if Nogot does not own a Ford, then someone (else) in the office does. The unavailability of an independent justificatory path leading to their conclusion is what dooms the agents. They lack knowledge of their conclusion, because the only path to this conclusion involves the double-luck structure characteristic of Gettier cases.

Still, we are tempted to judge Fancy Watch* differently than we do Faulty Reasons. In my view, this is because we import a number of assumptions into the story. If this were a typical situation, Ted would have all kinds of clues that would tell him at least roughly what time it is: the sun's position in the sky, Ted's internal clock, and so on. But remember that we are supposed to assume that this evidence is not available to him: Ted relies solely on his watch. Moreover, Ted has no opinion about what time it would approximately be if it turned out that his watch is inaccurate. To illustrate, imagine Ted being challenged about his conclusion:

CHALLENGER: How did you know that you're not late for your 7 pm meeting?
TED: Because it was exactly 2:58 pm. That's what my watch showed.
CHALLENGER: But what if your watch is not accurate, and it was not exactly 2:58 pm?
TED: My watch was and is accurate. If my watch were not exactly accurate, then I would not know, even approximately, what time it is. I would thus not know whether I'm late for my meeting. But like I said, I was and am very confident that it was exactly 2:58 pm.

Ted's epistemic position with respect to m is very much like Smith's epistemic position with respect to w. Smith has no opinion about whether someone in the office would own a Ford if it turned out that Nogot does not own one. Ted, like Smith, is lucky that his conclusion is true, given that this conclusion derives from a false premise. The claim that Ted knows this conclusion does not withstand scrutiny. Fancy Watch* is not a case of KFF.16

The foregoing considerations also help address a possible move by proponents of KFF. They may contend that in the original Fancy Watch, even though the false belief that f is not essential to Ted's conclusion that m, it still epistemizes17 this conclusion. In other words, Ted's knowledge that m is based on his false belief that f, even though it is also based on his true belief that t. On this view, Fancy Watch is a case of KFIF.

The parallels between Fancy Watch and Extra Reasons show that this position is misguided. In Section 4.2, I explained why it is incorrect to claim that, in Extra Reasons, Smith's knowledge that w is based on her false belief that n. Although Extra Reasons involves justificatory overdetermination, it is not a case of knowledge overdetermination. Similarly, Fancy Watch involves justificatory overdetermination: each inferential path provides justification for the conclusion that m. If we block the path from t to m, Ted's belief that m is still justified by his belief that f; if we block the path from f to m, Ted's belief that m is still justified by his belief that t. However, Fancy Watch is not a case of knowledge overdetermination. If we block the path from t to m, then Fancy Watch is turned into Fancy Watch*, a Gettier case in which Ted does not know that m. By contrast, if we block the path from f to m, Ted still knows that m. Hence, Fancy Watch does not involve knowledge overdetermination: Ted's knowledge that m is not based on his belief that f.

Proponents of KFF thus face a dilemma: either Ted knows that he is not late for his meeting, but his false belief is harmless (Fancy Watch); or Ted's false belief is essential to his conclusion, but Ted does not know that conclusion (Fancy Watch*).

4.5  Another Variant of the Alleged Case of KFF

A third version of Fancy Watch is worth considering. Before describing this version, I need to distinguish between two types of non-occurrent belief. According to a common view that I have so far assumed, although Ted's belief that t is tacit, he counts as believing that t, even if he has not explicitly considered the question whether t. This is because t obviously follows from Ted's occurrent beliefs, his perceptual experience and memory. Tacit beliefs, in other words, are straightforward: a person who has a tacit belief that p would effortlessly and swiftly form the occurrent belief that p upon considering whether p. Because of that, it is plausible to hold that the person already believes that p, antecedently to considering the question whether p. There are, by contrast, instances of non-occurrent belief that require time and effort:

Grandpa's Age. Milosz is trying to remember how old Grandpa is. At first, he draws a blank. But then, he remembers that Grandpa has told him that he was 16 at the start of the Second World War. He then tries to remember what year that war started. "1939, of course!" he tells himself. But wait, what month? Milosz needs to rummage through his memory before he recalls that Germany invaded Grandpa's country, Poland, on September 1. Milosz then reasons: since we are in June 2023, and Grandpa's birthday is in November, this means that Grandpa is 100 years old!

In Grandpa's Age, Milosz does not consult any external source to reach his conclusion. All the relevant information is available to him. He relies solely on his memory and mathematical skills. But the process takes some time. For this reason, it would seem incorrect to hold that at the beginning of his deliberation, Milosz believes that Grandpa is 100 years old. Yet, we can say that initially, Milosz is disposed to believe the conclusion he reaches. Let us call this type of non-occurrent belief demanding. In demanding cases, a person would form the belief that p upon considering whether p only after some time and mental effort. For this reason, the person does not already believe that p before considering whether p, even though she is disposed to acquire that belief.

There is, of course, a continuum of cases between straightforward and demanding cases. It is far from clear where we should draw the line in this continuum: How quick should the thought process yielding the occurrent belief that p be for the person to count as believing that p, prior to considering whether p?18 Fortunately, given our purpose, it is not necessary to settle this question.

However, two possible responses by proponents of KFF are worth considering. First, they may hold that although Ted possesses all the relevant evidence to form the beliefs contained in the independent line of reasoning, these are demanding non-occurrent beliefs. For this reason, he cannot be said to believe or know that t on the basis of this line of reasoning. Alternatively, proponents of KFF may, following Robert Audi (1994), deny that even in a straightforward case a person has the relevant belief or knowledge. In other words, they may deny that belief or knowledge may be tacit. On this view, it may well be that Ted would effortlessly and swiftly form the occurrent belief that t upon considering whether t; however, he does not count as believing or knowing that t before considering whether t. He merely has a disposition to believe that t. Although these two responses are similar, they deserve separate treatments.
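As an aside, the arithmetic Milosz performs in Grandpa's Age can be checked mechanically. The following minimal sketch makes his reasoning explicit; the helper function and variable names are our own illustrative constructs, while the dates (16 years old at the war's start on September 1, 1939; a November birthday; deliberation in June 2023) come from the case:

```python
# Sketch of the arithmetic in Grandpa's Age. The helper is ours; the facts
# (16 at the start of WWII on 1 September 1939, a November birthday,
# reasoning taking place in June 2023) are those given in the case.

def age_on(birth_year, birth_month, year, month):
    """Age in completed years in the given year and month."""
    age = year - birth_year
    if month < birth_month:  # birthday not yet reached that year
        age -= 1
    return age

# Grandpa turned 16 at his November 1938 birthday (the last one before
# 1 September 1939), so he was born in November 1922.
birth_year = 1938 - 16  # = 1922

assert age_on(birth_year, 11, 1939, 9) == 16  # 16 when the war started
print(age_on(birth_year, 11, 2023, 6))        # → 100
```

Running the sketch confirms Milosz's conclusion: in June 2023, before his November birthday, Grandpa is 100.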
Let us start with the first and suppose that in Fancy Watch**, a new version of the case, Ted is disposed to believe that if his watch turned out not to be exactly accurate, it would at least be approximately accurate, but this is a demanding case. In other words, upon considering the question, Ted would believe that conditional only after some time and mental effort. Given that this is a demanding case, it would be incorrect to hold that upon seeing that his watch indicates 2:58 pm, Ted knows that if it is not exactly 2:58 pm, it is approximately so. He is merely in a position to know that.

What about the proposition that it is approximately 2:58 pm (t)? Is his attitude about it straightforward or demanding? This is a delicate question, because there are two inferential paths leading to t. One is plausibly straightforward: upon considering the question, Ted would likely immediately form the belief that t, since this belief obviously follows from his belief that it is exactly 2:58 pm. However, as we saw, there is also a demanding inferential path leading to t, one that relies on the conditional that if it is not exactly 2:58 pm, it is approximately so.

Fancy Watch** thus involves two lines of reasoning leading to the conclusion that m. The first involves Ted's false belief that f, from which he draws the conclusion that m. The second line involves demanding reasoning. Because of that, it would be incorrect to say that at the time he forms the belief that m, Ted knows that m based on that line of reasoning. More accurately, Ted is in a position to know that m based on that second line of reasoning. Therefore, Ted's belief that f is essential: Ted would not believe that m if the belief that f were simply removed from his set of beliefs; he would merely have a disposition to believe that m. This means that Ted does not know that m based on the first line of reasoning. The reasons for this judgment are now familiar. The first line of reasoning should be assessed in the same way we assessed it in Fancy Watch*: it is a Gettier case. Hence, in Fancy Watch**, Ted does not know that m; however, thanks to the second line of reasoning, he has a disposition to know that m.

Once again, the analogy with Extra Reasons will help. Consider Extra Reasons*, a variant in which Smith decides to bet that someone in the office owns a Ford. Immediately after her decision to bet, she remembers the evidence about Nogot, believes that he owns a Ford (n), and concludes that she will win the bet (w). While this happens, Smith does not have any occurrent belief about Havit. She could recall the evidence about him, but that would take some time and mental effort. The path leading to the belief that Havit owns a Ford (h) is thus demanding. Hence, although she has a disposition to know that h, she does not know that. This also means that she does not know that if it turns out that Nogot does not own a Ford, someone else in the office does. In Extra Reasons*, Smith lacks the knowledge that w, because the reasons for her belief that w are faulty. These reasons are just the ones she has in Faulty Reasons, a Gettier case. However, she is disposed to know that w, thanks to her disposition to know that h.
Hence, by parity of reasoning, in Fancy Watch**, Ted does not know that m, since his reasons for his belief that m are faulty. He is merely disposed to know that m. Let me turn to the second response, according to which even though Ted would effortlessly and swiftly form the belief that t upon considering the matter, he neither believes nor knows that t before that. This is because there is no such thing as tacit belief or knowledge. The first thing to note about this response is that it compromises the case for KFF. As I pointed out in Section 4.3, proponents of KFF embrace our commonsense intuition when it comes to Ted’s knowledge that m. The problem is that common sense would also grant Ted the knowledge that t; however, according to the current response, this particular commonsense judgment should be rejected. It is far from clear that this combination of positions about common sense is sustainable. Proponents of KFF who reject tacit knowledge ought to explain why common sense is not mistaken regarding Ted’s knowledge that m. There is little hope for such an explanation, since Ted very plausibly lacks knowledge that m if he lacks knowledge that t. As I just showed, if Ted does not believe
(or know) that t, then, just as in Fancy Watch*, he has only one line of reasoning leading to the conclusion that m. Ted’s false belief that f is essential to his belief that m. And just as in Fancy Watch*, he does not know that m, since this is a Gettier case.

4.6 Conclusion I have argued that Fancy Watch, an alleged case of KFF, is either a case of harmless false belief or a Gettier case. Two features of the case tend to obscure this assessment. First, if construed as typical, there is independent evidence (daylight, internal clock, and so on) that Ted can rely on to know at least roughly what time it is. Understood this way, Ted knows the conclusion of his reasoning thanks to this independent evidence. On the other hand, once the case is construed appropriately and this evidence is ruled out, the temptation to attribute knowledge to Ted significantly declines. Although I lack the space to discuss other alleged cases of KFF here, I believe that (almost) all of them share this feature. A second feature of the case that also mars our judgment concerns the role played by the subject’s tacit knowledge. I have remained neutral about a key question regarding tacit knowledge: Could a person who would effortlessly and swiftly form the belief that p upon considering whether p count as knowing that p? I have argued that if the answer is positive, then Ted knows that he is not late for his meeting in virtue of his tacit knowledge that it is approximately 2:58 pm. In this case, Ted’s false belief is harmless. On the other hand, if the answer is negative, then Fancy Watch is a Gettier case. Ted’s only line of reasoning for his conclusion involves the kind of double luck characteristic of Gettier cases.19

Notes

1. See also Borges (2017, pp. 281–286).
2. See Korcz (2021) for a useful review.
3. This case was originally proposed by Lehrer (1965, p. 170).
4. See Feit and Cullison (2011, p. 288), Klein (2008, p. 41), Lehrer (1965, p. 174), and Montminy (2014, p. 467). Coffman (2008, p. 191) proposes a different counterfactual test for essentiality, which I will not discuss here. See Fitelson (2010), Montminy (2014, p. 468), and Schnee (2015, pp. 65–66) for critical discussions.
5. This is just another way of saying that the test involves a non-backtracking counterfactual (Lewis 1979): such a counterfactual holds the past fixed up until the time (or just before the time) at which the antecedent is taken to obtain. According to Hawthorne and Rabinowitz’s preferred gloss of essentiality, “An essential premise is one that figures in all the belief-forming processes that generate the relevant belief” (2017, p. 329). If generating the relevant belief is construed broadly, to include sustaining causes, then I do not see any incompatibility between their gloss and the proposed counterfactual test.
6. Schnee (2015, p. 70) also considers this question and gives the same answer.

7. I ask readers to forgive this (inconsequential) violation of the non-backtracking condition.
8. This case is also from Lehrer (1965, pp. 169–170).
9. See Arnold (2013), Fitelson (2010), Hawthorne and Rabinowitz (2017), Hiller (2013), Hilpinen (1988), Klein (2008), Luzzi (2010, 2019), and Murphy (2015) for other alleged cases of KFF.
10. See Cassam (2010, pp. 81–84) and Hawthorne and Rabinowitz (2017, p. 331).
11. Ball and Blome-Tillmann (2014) also contend that Fancy Watch is not a case of KFF. Their contention relies on the claim that the falsehood is inferentially inert. See Buford and Cloos (2018, pp. 169–174) and Montminy (2014, p. 466) for criticisms.
12. The existence of beliefs in this sense is relatively uncontroversial. Hartry Field (1978) writes, “For instance, suppose I tell you that no one dug a tunnel from here to China through the center of the earth in 1953. I’m sure that by telling you this I’m not telling you something you didn’t already believe, but I’m equally sure that it was not part of your belief core—i.e., not one of your explicitly represented beliefs—before I told it to you” (p. 17). See also Dennett (1978) and Lycan (1986).
13. See Fitelson (2017, p. 317) for this construal of the proxy proposition strategy.
14. See Fitelson (2017, pp. 319–323) and Luzzi (2019, pp. 23–25) for this objection.
15. See Buford and Cloos (2018, p. 171) for this objection. I agree with Buford and Cloos that attributing such doubt to Ted would go against the spirit of the story; however, my proposed account does no such thing.
16. See Schnee (2015, pp. 58–63) for additional considerations in favor of this assessment.
17. I borrow this word from Warfield (2005).
18. See Lycan (1986) for a useful discussion.
19. Many thanks to John Biro, Rodrigo Borges, Peter Klein, Peter Murphy, Luis Rosa, Roy Sorensen, and Michael Veber for comments on an earlier draft.

References

Arnold, A. (2013). Some evidence is false. Australasian Journal of Philosophy, 91, 165–172.
Audi, R. (1994). Dispositional beliefs and dispositions to believe. Noûs, 28, 419–434.
Ball, B., & Blome-Tillmann, M. (2014). Counter closure and knowledge despite falsehood. Philosophical Quarterly, 64, 552–568.
Borges, R. (2017). Inferential knowledge and the Gettier conjecture. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 273–291). Oxford University Press.
Buford, C., & Cloos, M. (2018). A dilemma for the knowledge despite falsehood strategy. Episteme, 15, 166–182.
Cassam, Q. (2010). Judging, believing, and thinking. Philosophical Issues 20: Philosophy of Mind, 20, 80–95.
Coffman, E. J. (2008). Warrant without truth? Synthese, 162, 173–194.
Dennett, D. (1978). Brainstorms. MIT Press.
Feit, N., & Cullison, A. (2011). When does falsehood preclude knowledge? Pacific Philosophical Quarterly, 92, 283–304.

Feldman, R. (2003). Epistemology. Prentice Hall.
Field, H. (1978). Mental representation. Erkenntnis, 13, 9–61.
Fitelson, B. (2010). Strengthening the case for knowledge from falsehood. Analysis, 70, 666–669.
Fitelson, B. (2017). Closure, counter-closure and inferential knowledge. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 312–324). Oxford University Press.
Hawthorne, J., & Rabinowitz, D. (2017). Knowledge and false belief. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 325–344). Oxford University Press.
Hilpinen, R. (1988). Knowledge and conditionals. Philosophical Perspectives, 2, 157–182.
Hiller, A. (2013). Knowledge essentially based upon false belief. Logos and Episteme, 4, 7–19.
Klein, P. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–62). Oxford University Press.
Korcz, K. A. (2021). The epistemic basing relation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/basing-epistemic/
Lehrer, K. (1965). Knowledge, truth and evidence. Analysis, 25, 168–175.
Lewis, D. (1979). Counterfactual dependence and time’s arrow. Noûs, 13, 455–476.
Luzzi, F. (2010). Counter-closure. Australasian Journal of Philosophy, 88, 673–683.
Luzzi, F. (2019). Knowledge from non-knowledge. Cambridge University Press.
Lycan, W. (1986). Tacit belief. In R. Bogdan (Ed.), Belief: Form, content, and function (pp. 61–82). Clarendon.
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44, 463–475.
Murphy, P. (2015). Justified belief from unjustified belief. Pacific Philosophical Quarterly, 98, 602–617.
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12, 53–74.
Schwitzgebel, E. (2019). Belief. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy.
https://plato.stanford.edu/archives/fall2019/entries/belief/
Warfield, T. (2005). Knowledge from falsehood. Philosophical Perspectives 19: Epistemology, 19, 405–416.
Zagzebski, L. (1994). The inescapability of Gettier problems. Philosophical Quarterly, 44, 65–73.

5

Knowledge from Blindspots Rhys Borchert, Juan Comesaña, and Timothy Kearl

5.1 Introduction

At least some Gettier cases involve inference from a false belief. This prompted the “No False Lemmas” (NFL) definition of propositional knowledge, according to which knowledge is incompatible with inference from a false belief. Some have argued that the NFL view fails in the same way in which the justified true belief definition of knowledge failed—namely, by not providing sufficient conditions for knowledge, for there allegedly are cases where subjects have justified true beliefs not inferred from a falsehood that nevertheless do not amount to knowledge. But, interestingly, it has also been claimed that the NFL view fails to provide necessary conditions for knowledge, because it is possible to know a proposition on the basis of inferring it from a false belief. This claim has sparked debate, with some arguing that, in those cases where knowledge is genuinely present, the falsehood in question is not essential to it—that they are cases of knowledge despite falsehood, rather than cases of knowledge from falsehood.

In this note, we argue for the existence of cases of knowledge from blindspots—i.e., knowledge from unknowable propositions. An interesting feature of those cases is that, if they are genuinely possible, then they are cases of knowledge from falsehood where the falsehood is essential, and so the aforementioned objection to cases of knowledge from falsehood does not apply.

The rest of the chapter develops as follows: Section 5.2 briefly recounts the sort of Gettier cases that gave rise to the NFL view, Section 5.3 argues for the existence of knowledge from blindspots, Section 5.4 defends the existence of knowledge from falsehoods from objections, Section 5.5 puts the existence of knowledge from blindspots in the larger context of different kinds of defective knowledge, and Section 5.6 concludes.

DOI: 10.4324/9781003118701-8


5.2  Gettier Cases, NFL, and Counter-Closure

Recall Gettier’s (1963) infamous case of the man with ten coins in his pocket:

Suppose that Smith and Jones have applied for a certain job. And suppose that Smith has strong evidence for the following conjunctive proposition:

(d) Jones is the man who will get the job, and Jones has ten coins in his pocket.

Smith’s evidence for (d) might be that the president of the company assured him that Jones would in the end be selected, and that he, Smith, had counted the coins in Jones’s pocket ten minutes ago. Proposition (d) entails:

(e) The man who will get the job has ten coins in his pocket.

Let us suppose that Smith sees the entailment from (d) to (e), and accepts (e) on the grounds of (d), for which he has strong evidence. In this case, Smith is clearly justified in believing that (e) is true. But imagine, further, that unknown to Smith, he himself, not Jones, will get the job. And, also, unknown to Smith, he himself has ten coins in his pocket. Proposition (e) is then true, though proposition (d), from which Smith inferred (e), is false. In our example, then, all of the following are true: (i) (e) is true, (ii) Smith believes that (e) is true, and (iii) Smith is justified in believing that (e) is true. But it is equally clear that Smith does not know that (e) is true; for (e) is true in virtue of the number of coins in Smith’s pocket, while Smith does not know how many coins are in Smith’s pocket, and bases his belief in (e) on a count of the coins in Jones’s pocket, whom he falsely believes to be the man who will get the job.

This case and others like it have engendered two related lines of inquiry. First, Gettier’s own remarks suggest that the source of the problem is the falsity of the lemma, (d), from which Smith infers (e).
Even today, there are many ardent defenders of this “No False Lemmas” solution to the Gettier Problem; one cannot gain knowledge from falsehood, or so say its proponents.1

No False Lemmas: Necessarily, S’s belief that p is knowledge only if it is not inferred from any falsehood.

Second, consider what Federico Luzzi (2010) has called “Counter-Closure”:2

Counter-Closure: Necessarily, if (i) S knows that p entails q and (ii) S comes to believe q solely on the basis of competently deducing it from p, and (iii) S knows q, then S knows p.

Counter-Closure offers another way to explain why Smith fails to know (e) by way of the following argument:

1 It’s not the case that Smith knows (d).
2 Smith knows that (d) entails (e).
3 Smith comes to believe (e) solely on the basis of competently deducing it from (d).
4 Necessarily, if Smith knows that (d) entails (e) and Smith comes to believe (e) solely on the basis of competently deducing it from (d), and Smith knows (e), then Smith knows (d).
5 Therefore, it’s not the case that Smith knows (e).

This argument is valid, and its premises seem, to many, unobjectionable; after all, premises 1–3 are stipulations of the case, and premise 4 is simply an instance of Counter-Closure. This argument, moreover, offers friends of NFL a bit of adjacent support. It may be, for instance, that the appeal of NFL derives, at least in part, from the truth of Counter-Closure (or vice versa).

Here, we hope to upset this happy theoretical alignment. We present a novel argument against Counter-Closure by appeal to knowledge from unknowable premises. If one can gain knowledge from unknowable premises—blindspots and Moorean abominations among them—then not only is Counter-Closure false, but also to the extent that NFL derives some plausibility from Counter-Closure, NFL looks much less appealing.

5.3  Knowledge from Blindspots

Let’s start by considering a vignette:

Bamboozle: Juan is a convincing epistemologist, so convincing, in fact, that over the course of his lecture he gets his students to believe (i) that knowledge of the external world is impossible, but (ii) that he would only be able to convince them of (i) if he exists, and therefore, that there is an external world. One of these students, Rhys, reflects on this and comes to believe the following proposition: “There is an external world but I do not know it”. Suppose that Rhys then deduces from that proposition that there is an external world.
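The proposition Rhys arrives at has the form of a Fitch conjunction, p ∧ ¬Kp. As a sketch only—assuming, as is standard in epistemic logic (these assumptions are supplied here for illustration), that knowledge is factive ($Kq \rightarrow q$) and distributes over conjunction ($K(q \land r) \rightarrow Kq \land Kr$)—its unknowability can be derived as follows:

```latex
\begin{align*}
1.\quad & K(p \land \neg Kp) && \text{assumption, for reductio} \\
2.\quad & Kp \land K\neg Kp && \text{from 1, distribution of $K$ over $\land$} \\
3.\quad & \neg Kp && \text{from 2, factivity applied to $K\neg Kp$} \\
4.\quad & Kp && \text{from 2, conjunction elimination} \\
5.\quad & \bot && \text{from 3 and 4} \\
& \therefore\ \neg K(p \land \neg Kp) && \text{discharging the assumption in 1}
\end{align*}
```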

Against the background of some assumptions, the proposition There is an external world but I do not know it is unknowable. For suppose that Rhys knows it. Then it is true—that is to say, there is an external world but Rhys doesn’t know it. Contradiction. Therefore, it is unknowable.3 Some would call such a proposition “abominable”,4 perhaps casting it to the flames of Moorean absurdity.5 As a shorthand, we’ll just call these “blindspot propositions” or say that a certain proposition is “in one’s blindspot”.6

But even though Rhys cannot know that there is an external world but he doesn’t know it, he can justifiably believe it—after all, Juan has given him very convincing arguments for that proposition. Moreover, Rhys employs that justifiably believed blindspot proposition as a premise in his competently deducing that there is an external world, in virtue of which he knows that there is an external world. If we are right, then the proposition on the basis of which Rhys knows that there is an external world is not only unknowable (to him), but it is also false.

Compare now Bamboozle with Warfield’s example of knowledge from falsehood:

I have a 7 pm meeting and extreme confidence in the accuracy of my fancy watch. Having lost track of the time and wanting to arrive on time for the meeting, I look carefully at my watch. I reason: “It is exactly 2:58 pm; therefore I am not late for my 7 pm meeting”. Again I know my conclusion, but as it happens it’s exactly 2:56 pm, not 2:58 pm. (Warfield 2005, p. 408)

Branden Fitelson (2010) has suggested that one could respond to cases like Warfield’s by saying that the falsehood in question is not essential, in the sense that, had it been true, the subject would have still been in a position to know based on it.
Fitelson then argues that there are cases of knowledge from falsehood which satisfy what we will call “Fitelson’s counterfactual condition”:

Fitelson’s counterfactual condition: If the subject’s belief that p had not been false, then the example would not have constituted a case of inferential knowledge.

Warfield’s case violates Fitelson’s counterfactual condition because, had it been exactly 2:58 pm, the subject would still have inferentially known that he was not late for his 7 pm meeting. Fitelson presents his own case, which he argues satisfies his counterfactual condition:

I have a 7 pm meeting and extreme confidence in the accuracy of both my fancy watch and the Campanile clock. Having lost track of
the time and wanting to arrive on time for the meeting, I look out of my office window (from which the Campanile clock is almost always visible). As luck would have it (owing, say, to the fluke occurrence of a delivery truck passing by my window), the Campanile clock is obscured from view at that instant (which is exactly 2:56 pm). So, instead, one minute later, I look carefully at my watch, which (because my watch happens to be running one minute slow) reads exactly 2:56 pm. I reason: “It is exactly 2:56 pm (p) therefore (q) I am not late for my 7 pm meeting”. Thus (supposing Warfield is right), I have inferential knowledge that q, based on a relevant premise p, which is a falsehood. Now for the twist. If my belief that p had been true, then (we can plausibly suppose) it would have been based on my reading (at exactly 2:56 pm) of the Campanile clock, which would have read exactly 2:56. Unbeknownst to me, however, the Campanile clock has been (and would have been) stuck at 2:56 for some time.

The idea is that Fitelson’s example satisfies his counterfactual condition because, had the belief that it is exactly 2:56 pm been true, it would have been based on his reading of the Campanile clock, but the Campanile clock is stuck at 2:56, and no knowledge about the time can be gained from reading a stuck clock.

Another related objection to alleged cases of knowledge from falsehood comes from Coffman (2008), who suggests that, in all those cases, there is a nearby truth which is doing all the heavy lifting. More precisely, according to Coffman, in any such case, there is a true proposition p′ such that (i) the subject is (at least) disposed to believe p′ and (ii) if the subject’s inferential belief (that q) had been based on a belief in p′, the inferential belief would still have constituted knowledge.
Fitelson argues that his own example can be slightly modified so as to deal with this reply from Coffman: suppose that, had Fitelson based his belief that he is not late for his 7:00 pm meeting not on the proposition that it is exactly 2:56 pm but rather on the proposition that it is approximately 2:56 pm, that belief would have been based on his reading of the Campanile clock (Fitelson thinks, say, that the Campanile clock is only approximately right, but his own fancy watch is always exactly right).

Fitelson’s counterfactual condition can be strengthened. Consider instead the following essentiality condition:

Essentiality condition: Necessarily, if the subject’s belief that p is true, then the subject does not know q.

Fitelson’s case does not satisfy the essentiality condition, for it is obviously possible for Fitelson’s belief that it is exactly 2:56 pm to be true and for him to have knowledge that he is not late for his 7:00 pm meeting.

But our Bamboozle case does satisfy the essentiality condition. In any possible world in which it is true that there is an external world and Rhys does not know it, Rhys does not know that there is an external world. And given that our case satisfies the essentiality condition, it also obviously satisfies the counterfactual condition.

But is our case possible? We anticipate two sources of resistance: first, that in addition to being unknowable, blindspot propositions cannot even be justifiably believed. And second, that Rhys’s knowledge is not derived from his competent deduction from a blindspot.

Suppose one thought that blindspot propositions cannot even be justifiably believed. One might point to something like the “felt inconsistency” of such propositions as grounds for skepticism about the possibility of justifiably believing something in one’s blindspot. But this reaction is too strong. Start by noticing that a disjunction of blindspot propositions is not, in general, itself in one’s blindspot. There is no felt inconsistency, abominableness, or Moorean absurdity in the air when one believes de se that I am content to live in an ice hut and I doubt it OR I am not content to live in an ice hut and I doubt it.7 Moreover, an author’s statement in the preface of her manuscript is a long disjunction of blindspots (Sorensen, 1988), but we don’t accuse authors of unjustifiably or irrationally acknowledging the likelihood that their work contains errors, despite thinking, of each particular claim, that it is not erroneous. So, not only can a disjunction of blindspot propositions be justifiably believed, but intellectual humility might even demand that one believe a disjunction of blindspot propositions.

But now imagine that the author, after acknowledging her fallibility in the preface of her manuscript, proceeds to double-check her claims individually.
One way to conceptualize what goes on in her double-checking is that the author engages in a long chain of disjunction elimination. She begins with the disjunction expressed in the preface, where each disjunct is itself a blindspot proposition:

(P and not-K(P)) OR (P’ and not-K(P’)) OR (P’’ and not-K(P’’)) OR …

She first eliminates the possibility expressed by P and I don’t know that P by, say, double-checking her grounds for believing that P and deeming them adequate. But she is already in a position to justifiably reason as follows:

4) (P and not-K(P)) OR (P’ and not-K(P’)) OR (P’’ and not-K(P’’)) OR …
5) It’s not the case that (P and not-K(P))
6) Thus: (P’ and not-K(P’)) OR (P’’ and not-K(P’’)) OR …

Again, her justification for premise 4 is her knowledge of her own fallibility, and her justification for premise 5 is her double-checking. At a certain point in this process of double-checking, perhaps as she nears the end of her manuscript, disjunction elimination will lead her to a blindspot. If disjunction elimination is good enough in the first n − 1 steps of this process to result in justified belief, why wouldn’t it be good enough in the nth step to result in justified belief (even if it could not result in knowledge)?

One answer to this question may be that by the point the author gets to the last disjunct, she can double-check the embedded claim. If the double-checking fails, then she will not believe the last blindspot, and if the double-checking justifies the last proposition—well, what then? Is she to believe that she is the first one ever to write a completely error-free book? Should she start planning for the global accolades that are sure to come (Christensen, 2004)? No, of course not: rather, she should think that she is still as fallible as the rest of us, and that the fallibility seeped into her double-checking procedures. So, if she double-checks the claim embedded in the last blindspot, she would either go back to believing a disjunction of blindspots or no blindspot at all. But what if she doesn’t double-check the claim embedded in the last blindspot? Isn’t she then justified in believing it? Maybe—or maybe she is not, because she is already (propositionally) justified in believing that there is nothing special about that last claim, and so that she should have the same attitude toward it as she does toward all the others.

Perhaps, then, the preface problem does not present a watertight argument for justified belief in blindspots. What about its cousin, the lottery problem?
Suppose that Tim is convinced that he does not know, merely on the basis of statistical evidence, that his lottery ticket is a loser. Nevertheless, Tim is justified in believing that his lottery ticket is a loser. He puts two and two together and believes “My ticket is a loser but I don’t know that it is”. Of course, he doesn’t know this, but what is the argument that he doesn’t justifiably believe it?8

In any case, regardless of whether either the preface or the lottery is an example of a justifiably believed blindspot, we think that Bamboozle certainly is. Testimony is an extremely powerful source of justification, and in the right circumstances, it can justify subjects in believing blindspots.

What of the worry that Rhys can himself figure out that the proposition in question is a blindspot? Doesn’t that in itself count against the possibility of his justifiably believing it? Not really: Rhys already believes that Yul Brynner was bald, that Brian May is not bald, and that one hair doesn’t make the difference between being bald and not being bald. Rhys realizes that this set of beliefs entails a contradiction. Nevertheless, he neither believes a contradiction nor does he give up his belief in the inconsistent triad. Many philosophers have tried to convince Rhys that he should give up his belief in the tolerance principle that one
hair doesn’t make the difference between being bald and not being bald, but Rhys is more confident of that tolerance principle than he is of any philosophical theory against it. We think Rhys may very well be justified in this set of attitudes. If he is justified in believing each one of an inconsistent set of propositions, though, why would he not be justified in believing a blindspot?

One reaction to our claim that Rhys knows that there is an external world by way of competent deduction from a justifiably believed blindspot proposition is that such propositions cannot even be justifiably believed. This is too strong to be plausible, as the above considerations show. Another reaction is that, if Rhys knows that there is an external world, he knows it in some other way, not via competent deduction from a blindspot proposition. In some respects, this second reaction is related to Coffman’s idea that nearby truths are doing all the heavy lifting. We now turn to that objection.

5.4  The Justificatory Heavy Lifting

We claimed that Rhys can know that there is an external world by inferring it from the justifiably believed blindspot proposition There is an external world but I do not know it. But if one could locate a nearby proposition, one that Rhys was in a position to know, to do the justificatory heavy lifting, our claim would lose some of its appeal. So, what are some candidate nearby knowable propositions, and how does this “knowledge despite blindspots” strategy work?

Consider, as a surrogate, the nearby proposition Juan said that there is an external world but I do not know it. This proposition is not in Rhys’s blindspot; there is, after all, no felt inconsistency, Moorean absurdity, or other abominableness to it. Compare: as Rhys looks in the fridge, he sees and thereby comes to know that there’s no beer. Juan might, from the couch, insist that there’s beer (maybe he remembers seeing some just yesterday). In some such situations, of course, Juan’s testimony might undermine Rhys’s knowledge that there’s no beer, perhaps prompting him to further rummage around in pursuit of some. But in other situations, ones in which Rhys has already searched high and low, or ones in which the fridge is empty and so the presence or absence of beer would be obvious, Rhys is in a position to know the conjunction Juan said there’s beer but there’s no beer.

The issue is now to determine whether Juan said that there is an external world but I do not know it can really do the justificatory heavy lifting. The inference would have to go via Rhys’s trust in Juan’s testimony. But this way of recasting Rhys’s predicament seems to introduce a new problem. Namely, that Juan, insofar as he is providing Rhys with evidence via testimony, is providing evidence for both the proposition
that Rhys doesn’t know that there is an external world and the proposition that there is an external world. If, instead, what Rhys has as evidence is merely that Juan says so, this should apply to each conjunct to which Juan testifies. Rhys would be left with the proposition Juan said that I don’t know there is an external world and Juan said there is an external world. But from this, Rhys could only deduce that there is an external world by way of a background premise to the effect that Juan is a reliable informant, a premise which would interact with the first conjunct no less than with the second so as to support precisely that blindspot proposition this strategy was meant to circumvent.

We think this points to a general problem with the “knowledge despite blindspots” strategy, concerned as it is with looking for nearby knowable propositions to serve as explanatory surrogates for the relevant blindspot proposition: as we already said, the falsity of the initial proposition is an essential part of what puts Rhys in a position to know that there is an external world. After all, if the initial proposition is true, then it’s true that Rhys doesn’t know there’s an external world.

At this point, a friend of “No False Lemmas” or of “Counter-Closure” might be borderline exasperated. As if it weren’t bad enough that we argued that one could justifiably believe a proposition in one’s blindspot, we also argued that attempts to find nearby, knowable propositions to do the justificatory heavy lifting were doomed to fail. The problem, they might suggest, resides in the initial suggestion that Rhys knows there is an external world via inference from such a problematic proposition. Instead, more plausibly (the suggestion continues), Rhys knows there is an external world via whatever way he acquired justification for believing the blindspot in the first place.
If that’s right, the problem of knowledge from blindspots doesn’t even get off the ground.

One way to make this sentiment more precise is what we’ll call the “disqualification strategy”. According to it, even though Juan’s testimony provides evidential support for Rhys to believe the blindspot proposition, this evidence is disqualified, so Rhys’s belief in the external world on its basis is irrational. Evidence E in favor of a proposition P is disqualified by other evidence E′ when E′ is stronger—or perhaps somehow “more direct”—evidence in favor of P than is E.9 For instance, suppose that Alice tells Bob that there is beer in the fridge, and that this is sufficient evidence in Bob’s situation to justify him in believing it. Suppose that Bob opens the fridge and sees a six-pack of Hazy IPAs. Seeing a six-pack of Hazy IPAs is also sufficient evidence to justify Bob in believing that there is beer in the fridge. Although both Alice’s testimony and Bob’s perceptual experience are each sufficient to justify Bob’s belief, Bob’s perceptual evidence disqualifies his testimonial evidence from serving as the (a?) basis for his belief. According to this line of thought, Bob ought to base his belief that there is beer in the fridge on his perceptual experience but not on Alice’s testimony because
the former is stronger or more directly relevant to the question at hand, namely whether there’s beer. Returning to the original example, Rhys has overwhelming evidence that there is an external world via his perceptual experiences; perhaps the evidence provided by perceptual experience in favor of the external world is far stronger than any evidence that could be provided by testimony or by logical inference. Consequently, Rhys’ belief that there is an external world is improperly based; if Rhys ought to believe in an external world, he ought to believe in an external world on the basis of his perceptual experiences, not on the basis of testimony-plus-inference.

The problem with the disqualification strategy is that, even if in some sense Rhys would be better off by basing his beliefs differently, this does not show that Rhys, as he in fact is, is epistemically irrational; it only shows that Rhys is epistemically suboptimal. For instance, in his analysis of disqualification, Muñoz appeals to Harman’s (1986) Principle of Clutter Avoidance in order to defend the claim that we ought not to base our beliefs on disqualified evidence. Qua thesis about epistemic optimality, the Principle of Clutter Avoidance is compelling. Qua thesis about epistemic rationality, the Principle of Clutter Avoidance is controversial. For instance, claiming that the Principle of Clutter Avoidance places constraints on epistemic rationality would conflict with many forms of Evidentialism, according to which one’s beliefs (at a time) are justified by one’s total evidence (at that time), since it would require that agents ignore some of the evidence of which they could avail themselves. An agent can violate the Principle of Clutter Avoidance while also having all of their beliefs supported by the evidence.
For example, we can imagine that, for some reason, Douglas cannot get rid of their knowledge of all of the starting lineups of the Chicago Bears from 2000 to 2010. Given that Douglas is a finite creature, this leads to unfortunate situations where Douglas is unable to learn, or maintain knowledge in, more important propositions, in part, due to the fixity of their knowledge of the Bears’ starting lineups at the beginning of the century. However, even if this is all true, this does not make Douglas epistemically irrational for believing that safety Mike Brown only played in six games for the Bears in the 2006 season. Indeed, he knows it despite failing to avoid clutter. The same goes, we think, for an agent who believes on the basis of disqualified evidence. While there may be something odd and perhaps suboptimal about the way that Rhys comes to believe that there is an external world, this does not undermine the claim that Rhys’s ultimate belief in the external world is epistemically rational. Furthermore, it does not undermine the claim that Rhys’s belief in the external world amounts to knowledge. Just as Bob can know that there is beer in the fridge

on the basis of Alice’s testimony even while he’s looking at beer, Rhys can know that there is an external world via inference from belief in a blindspot even while he’s perceiving the external world. Thus, the disqualification strategy fails to secure the verdict that Rhys cannot know that there is an external world via inference from a proposition in his blindspot.

Still, one could think that it is not possible for Rhys to be justified in believing the blindspot proposition without antecedent justification for believing that there is an external world, and if Rhys has such antecedent justification, then Rhys’s knowledge from the blindspot is, if not disqualified, at the very least superfluous. To address this worry, let us first note that we have so far been studiously avoiding the question of what kind of inference Rhys performs in going from his justified belief that there is an external world but he doesn’t know it to his knowledge that there is an external world. Perhaps the obvious answer is that Rhys performs conjunction elimination, because the blindspot proposition is a conjunction of the propositions There is an external world and Rhys doesn’t know that there is an external world. But it need not be this way. Suppose that, before presenting the skeptical arguments, Juan introduced a super-factive operator THAT. Importantly, THAT p is super-factive because it entails p even when embedded in contexts which would normally cancel those kinds of implications. It is thus unlike “it is true that”, for I believe that it is true that it is raining does not entail that it is raining, but I believe THAT it is raining does entail that it is raining. What Rhys is convinced of, then, is the proposition I do not know THAT there is an external world. Although that proposition has interesting structure in virtue of embedding a super-factive operator, it is not (we can suppose) a conjunction.
Thus, it is not that Rhys first becomes justified in believing a conjunction one of whose conjuncts is that there is an external world, and then performs conjunction elimination to come to know that there is an external world. Perhaps it is not possible to know a conjunction without thereby knowing its conjuncts. But it is perfectly possible to learn that, say, Argentina won the 1986 Soccer World Cup on the basis of Juan saying “It has been reported THAT Argentina won the 1986 Soccer World Cup”. And that brings us to another important remark. We have so far concentrated on Rhys coming to know that there is an external world, but there is no need for the example to involve such a heavy-weight philosophical thesis. Instead, suppose that, on the basis of skeptical arguments, Juan convinces Rhys of the proposition I don’t know THAT Argentina won the 1986 Soccer World Cup. That Argentina won the 1986 Soccer World Cup is not a proposition whose knowledge must antecede any other knowledge, and so that reason for supposing that knowledge from blindspots is impossible does not really get to the heart of the matter.


5.5  Defective Knowledge

The arguments in Sections 5.3 and 5.4 purported to cast doubt on two popular and mutually supportive theses: NFL and Counter-Closure. As a reminder, they say:

No False Lemmas: Necessarily, S’s belief that p is knowledge only if it is not inferred from any falsehood.

Counter-Closure: Necessarily, if (i) S knows that p entails q and (ii) S comes to believe q solely on the basis of competently deducing it from p, and (iii) S knows q, then S knows p.

Our position is at odds with NFL and Counter-Closure because knowledge from blindspots is inferential knowledge from a proposition that is itself unknowable and false. Zooming out from our particular examples, we hope to show that our position—that there is sometimes knowledge from blindspots—has wider appeal and motivation. As might be apparent from the discussion in the last section, we think that an agent’s being able to gain knowledge from a proposition in his blindspot is one among many ways of defective or non-ideal knowledge-acquisition, with the others of which most of us are already familiar. Consider, for example, epistemic akrasia, a special sort of inner conflict wherein an agent believes both p and that her evidence fails to support p. Akratic agents are, like Rhys or Tim, liable to utter Moore-paradoxical-sounding sentences like “P, but I shouldn’t believe that P”; whether or not these are truly abominable or have the same felt inconsistency as “P, but I don’t know that P”, there is surely a family resemblance between akratic agents and those who believe propositions in their blindspots. While this is not totally uncontroversial, we think, along with many contemporary epistemologists, that some cases of epistemic akrasia are rationally permissible. Here is one case (of many) discussed recently by Hawthorne et al.
(2021):

Unger Games: Following Unger, Stew believes that a belief in p is rational only if one has reasons for believing p and that something can be a reason for belief only if it is known. Moreover, Stew reasonably trusts an epistemologist—Peter Unger, in fact—who tells him that knowledge is unachievable. He thus believes that none of his beliefs are rational, thinking that the best that can be hoped for is some lesser status. But Unger has got all this wrong, and in fact many of Stew’s beliefs are rational. (adapted from Hawthorne et al., 2021, p. 4)

Peter Unger convinces his impressionable student, Stew, that no one rationally believes anything because rational belief (that there are dark clouds forming above Tucson, say) requires a preponderance of epistemic reasons that no one in fact possesses, nor could possess. Stew might, after introspectively attending to his occurrent commitments, think to himself there are dark clouds forming above Tucson, but (given Unger’s lecture) I shouldn’t believe it. Suppose that Stew, in the midst of his aporia, infers that it will probably rain in Tucson later that day, so that he should ensure he has an umbrella handy. Perhaps his inference is a bit lucky in the sense that, had he spent more time reflecting on Unger’s lecture and the misleading higher-order evidence it provides, he may have hesitated and remained agnostic about rain. Perhaps also his inference is defective because it flows from a state of inner conflict, global coherence being an ideal of epistemic rationality. But, as one of us has argued, at least some cases like Stew’s are ones of inadvertent epistemic virtue, wherein one forms the right belief for the right reasons while thinking otherwise.10 In particular, Stew infers for reasons he thinks do not suffice to justify his inference. This, of course, is compatible with claiming that Stew cannot know the conjunction there are dark clouds forming above Tucson, but (given Unger’s lecture) I shouldn’t believe it. Even if, that is, Stew is in the unfortunate position of being unable to know what he believes, he can still generate knowledge from this position by way of manifesting certain epistemic virtues, responding to the right reasons being chief among them.

Stew’s predicament in Unger Games is only superficially different from Michael’s in Churchlandia:

Churchlandia: Following Churchland, Michael believes that there are no beliefs; completed cognitive science has no use for such folk psychological notions.
Moreover, Michael reasonably trusts a philosopher—Paul Churchland, in fact—who tells him that beliefs are nothing but a fiction borne of ignorance and social utility. Michael thus believes that none of his (or anyone’s) mental states are beliefs. But Churchland has got all this wrong, and in fact some of Michael’s mental states are beliefs, the belief-seeming ones chief among them. Michael might, after introspectively attending to his occurrent commitments, think to himself there are dark clouds forming above Tucson, but (given Churchland’s lecture) I don’t believe it. This is a proposition in Michael’s blindspot. Suppose further that Michael, in the midst of his aporia, infers that it will probably rain in Tucson later that day, so that he should ensure he has an umbrella handy. Like Stew’s, Michael’s inference might be thought lucky and defective: lucky because he could easily have been led astray by Churchland’s misleading testimony, and defective because it falls short of an ideal of

coherence. Nevertheless, Michael might display inadvertent virtue in inferring as he does; after all, he forms the right belief for the right reasons, while thinking otherwise. In particular, Michael infers from reasons he doubts exist (assuming that reasons for this or that inference are at least partly—if not wholly—constituted by one’s beliefs). The parallels between Stew and Michael, on the one hand, and Rhys and Tim, on the other, suggest that, at least sometimes, agents can gain inferential knowledge despite the defectiveness or non-ideality of the state from which they infer.

5.6 Conclusion

We have argued for the possibility of knowledge from blindspots. This presents adherents of NFL and Counter-Closure with a distinctive challenge, since not only is the blindspot in fact false, but its falsehood is also essential. In other purported cases of knowledge from falsehood, it has been argued that there are nearby true propositions with sufficient justificatory power to provide the agent with knowledge, rendering the falsehood inessential—it is a case of knowledge despite falsehood. This strategy of defending NFL cannot work against our cases, since cases of knowledge from blindspots satisfy not only Fitelson’s counterfactual condition, but also our stronger essentiality condition: necessarily, knowledge is lost if the initial proposition is true.

Not only does knowledge from blindspots undermine NFL and Counter-Closure, but it is also interesting in its own right. It exemplifies one way in which we can gain inferential knowledge defectively or non-ideally. Viewed against this backdrop, perhaps what is most attractive about NFL and Counter-Closure is that they codify certain paradigmatic cases of inferential knowledge. But that still permits a great deal of wiggle room for discussing the varieties of non-paradigmatic inferential knowledge, perhaps understood in terms of theoretical distance from those ideals.

Notes

1. For support of the No False Lemmas thesis, see Harman (1973) and, more recently, Coffman (2008), Schnee (2015), Montminy (2014), and Lee (2021). For arguments against it, see Warfield (2005) and Fitelson (2010). Some of these views will be discussed in more detail below. This idea has taken on a life of its own, even if it is widely agreed to fail to provide a general solution to Gettier-style examples; not all Gettier-style examples are inferential, so not all Gettier-style examples will turn on inference via a false lemma.
2. To be fair, Luzzi was not offering a full-throated endorsement of Counter-Closure, but rather a critical discussion of that principle in light of problem cases, alongside a menu of solutions. See also Warfield (2005) and Klein (2008), and Ball and Blome-Tillman (2014) for a recent defense of Counter-Closure.

3. Fitch (1963) and Church (2009). Of course, what makes them peculiar—however that peculiarity is described—is that they are unknowable de se. There is nothing, after all, odd about someone else believing that Rhys doesn’t know there is an external world and there is an external world.
4. DeRose (1995).
5. Moore (1942).
6. The terminology is taken from Sorensen (1988).
7. The example is taken from the anthropologist Gontran de Poncins (1941).
8. One argument against both our preface and lottery examples comes from extreme versions of knowledge-first epistemology which identify justification with knowledge. This is not the place to argue against such views, so the reader is invited to either agree with us regarding the implausibility of such positions or to take the results of this note to be some consequences of the rejection of those positions.
9. See Muñoz (2019). Disqualifiers “take a would-be justifier and make it irrelevant” (888). In this respect, disqualification is similar to defeat and evidential screening-off (on the latter, see Weatherson, 2019, ch. 11). The question of whether disqualification is, as Muñoz argues, irreducibly distinct from defeat and screening-off we set aside.
10. Kearl (2022). See also Weatherson (2019).

References

Ball, B., & Blome-Tillman, M. (2014). Counter-closure and knowledge despite falsehood. Philosophical Quarterly, 64, 552–568.
Christensen, D. (2004). Putting logic in its place. Oxford University Press.
Church, A. (2009). Referee reports on Fitch’s ‘A definition of value’. In Salerno (Ed.), New essays on the knowability paradox (pp. 13–20). Oxford University Press.
Coffman, E. J. (2008). Warrant without truth? Synthese, 162(2), 173–194.
DeRose, K. (1995). Solving the skeptical problem. The Philosophical Review, 104(1), 1–52.
Fitch, F. (1963). A logical analysis of some value concepts. The Journal of Symbolic Logic, 28, 135–142; reprinted in Salerno (Ed.) 2009, 21–28.
Fitelson, B. (2010). Strengthening the case for knowledge from falsehood. Analysis, 70(4), 666–669.
Gettier, E. L. (1963). Is justified true belief knowledge? Analysis, 23, 121–123.
Harman, G. (1973). Thought. Princeton University Press.
Harman, G. (1986). Change in view: Principles of reasoning. MIT Press.
Hawthorne, J., Isaacs, Y., & Lasonen-Aarnio, M. (2021). The rationality of epistemic akrasia. Philosophical Perspectives, 35, 206–228.
Kearl, T. (2022). Evidentialism and the problem of basic competence. Ergo, 9(0). https://doi.org/10.3998/ergo.2271/
Klein, P. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–62). Oxford University Press.
Lee, K. (2021). Reconsidering the alleged cases of knowledge from falsehood. Philosophical Investigations, 44(2), 151–162.
Luzzi, F. (2010). Counter-closure. Australasian Journal of Philosophy, 88(4), 673–683.

Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44(3–4), 463–475.
Moore, G. E. (1942). A reply to my critics. In P. A. Schilpp (Ed.), The philosophy of G. E. Moore. Northwestern University.
Muñoz, D. (2019). Defeaters and disqualifiers. Mind, 128(511), 887–906.
Poncins, G. (1941) [1988]. Kabloona (in collaboration with Lewis Galantiere). Carroll & Graff Publishers.
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12(1), 53–74.
Sorensen, R. (1988). Blindspots. Clarendon Press.
Warfield, T. A. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.
Weatherson, B. (2019). Normative externalism. Oxford University Press.

Section II

Gettier, Safety, and Defeasibility

6

Knowledge from Error and Anti-Risk Virtue Epistemology

Duncan Pritchard

DOI: 10.4324/9781003118701-10

1.

Generally speaking at least, one cannot acquire knowledge by inference from a false belief, even if one happens via this route to form a true belief in the inferred proposition, and even if one’s beliefs in the entailing and entailed proposition are both justified. Edmund Gettier’s (1963) original cases illustrate this point nicely, as they both essentially concern a subject inferring a true belief from a false, but justified, belief, and yet no-one is inclined to think that the inferred belief, even while justified, amounts to knowledge. To take one of Gettier’s own examples, imagine that our hero justifiably believes the false proposition that Jones owns a Ford and infers from this that either Jones owns a Ford or Smith is in Barcelona. The inferred disjunctive belief is, plausibly, justified too, and we can also stipulate that it happens to be true—on account of the incidental truth of the second disjunct—but it hardly seems to qualify as knowledge. In particular, it looks to be just a matter of luck that the inferred belief is true, given how it was formed, in that it is just a coincidence that the second disjunct of the inferred belief happens to be true. Epistemic luck of this sort is, however, usually thought to be incompatible with knowledge. Or consider the following Gettier-style scenario:

Detective
Chandler, a private investigator, believes that the murderer is Jones, since all the evidence is pointing that way. As a result he infers that the murderer has blood type O, since that is Jones’ blood type. Chandler is wrong about the identity of the murderer, but his inferred belief is true regardless, as the murderer does have blood type O.

Chandler doesn’t know what he infers, even though what he infers is true and even though he inferred it from a justified belief (such that it is plausible that this inferred belief is also justified).
In particular, he doesn’t know what he infers because, as in the previous Gettier case, it seems to be just a matter of luck that his belief is true, given how it was

formed—i.e., it is just a coincidence that the murderer and the person who Chandler believes is the murderer happen to have the same blood group. Of course, Chandler could subsequently acquire independent reasons for holding the inferred belief, and hence come to know it, but in such a case, the inferred belief would no longer be epistemically based on the inference from a false belief (but based on the new independent grounds instead). The upshot, it seems, is that one can’t gain knowledge by inferring it from a falsehood, at least to the extent that the epistemic basis for the inferred belief remains the inference from a falsehood.

There do, however, seem to be some exceptions to this rule. Consider the following, now familiar, case:

Handout
Ted needs to determine how many handouts he will need for his talk. Carefully counting the number of people present, he forms the belief that there are 53 people in the room. He accordingly infers that the 100 handout copies that he has will be sufficient. But Ted miscounted, and there are in fact 52 people in attendance.1

Structurally, Handout looks very similar to a Gettier-style case like Detective. Ted’s initial belief that there are 53 people at the talk is justified, but false, and the inferred belief is also justified, but true. The difference, however, is that a number of epistemologists have argued that the inferred belief amounts to knowledge.2 In particular, it seems that Ted’s inferred belief is not luckily true like Chandler’s inferred belief. Indeed, this looks like a very secure way of forming a true belief. But if one can’t in general gain knowledge from inferring a true belief from a false (even if justified) belief, then why can one gain such knowledge in this scenario? Or consider this scenario:

President
CNN breaks in with a live report. The headline is ‘The President is speaking now to supporters in Utah’.
Fritz reasons that since the President is in Utah, he is not attending this morning’s NATO talks in Brussels. The President is not in Utah, however, but Nevada—he is speaking at a ‘border rally’ at the border of those two states and the speaking platform on which he is standing is in Nevada (it is the crowd listening to the speech that is in Utah).3

As with Handout, the subject is forming a true belief by responsibly inferring it from what is, unbeknownst to them, a false belief. Nonetheless,

the resulting belief doesn’t seem to be luckily true in the way that the corresponding belief in Detective is. Relatedly, it also seems to amount to knowledge. It thus seems that there is an important epistemic difference between cases like, on the one hand, Handout and President, and cases like, on the other hand, Detective, despite their structural similarities. This therefore calls for a diagnosis of what grounds this epistemic difference.

2.

Call cases like Handout and President, where knowledge seems to result from an inference from a falsehood, knowledge from falsehood cases. Some commentators have disputed whether such cases really are instances of deriving knowledge from falsehood, at least to the extent that a falsehood is playing a necessary role in the resulting inferential knowledge anyway.4 For example, in a case like Handout, one could argue that what’s really load-bearing in the relevant inference is not so much the subject’s (false) belief that there are 53 people in the room as their more general (and true) conviction that there are a lot fewer than 100 people in the room. In contrast, while I would grant that some cases might be amenable to such an interpretation, I’m not convinced that this strategy has general application. Accordingly, I propose to proceed by granting that there are genuine cases of knowledge from falsehood. The challenge I face is thus to account for why knowledge is bona fide in such cases when it is not generally possible to gain knowledge by inferring it from a falsehood.
One straightforward way of explaining why such cases are possible is to appeal to the safety condition on knowledge.5 Many epistemologists hold that a necessary condition for knowledge is that the target belief is formed on a safe basis, such that it could not have very easily been a false belief, so formed.6 With this constraint on knowledge in mind, one has a plausible way of distinguishing between the usual cases involving inference from falsehood that don’t result in knowledge (like Detective), and the more specific class of cases like Handout which, it is claimed, do result in knowledge. In particular, while in general one cannot gain a safe belief from inferring it from a false belief, there are some cases, like Handout, where this is possible, and which are thus compatible with the acquisition of knowledge.

That reasoning from a false belief is generally an unsafe basis for belief ought to be uncontentious. The Detective case illustrates why. While Chandler formed a true belief via the target inference and did so by inferring it from a justified belief, the inferred belief is not safe. Via this reasoning, Chandler could have very easily formed a false belief, given that it is just a coincidence that Jones’ blood type happens to be the same as the murderer’s. Or consider the Gettier case described above. Although the inferred belief is true, it could very easily have been false, given how it was formed, such as the scenario in which Smith hadn’t recently traveled to Barcelona.

Notice too that it doesn’t matter in this regard whether the inferred belief happens to be a necessary truth, as it would still be the case that the subject’s inferred belief would be unsafe. For example, imagine that in the Gettier case the second disjunct happened to be necessarily true, such that the disjunction as a whole was also necessarily true. This is still compatible with a belief in this disjunction being unsafe since, as safety is usually understood, what’s important is that one’s basis for belief couldn’t have easily resulted in a false belief, where the proposition believed needn’t be the same proposition that the subject actually believes. For instance, suppose one forms one’s belief that 2 + 2 = 4 by flipping a coin. There is obviously no close possible world where one forms that same belief on the same basis and believes falsely, since there is no possible world where the target proposition is false. But there is a close possible world where that basis for belief results in a false belief, such as the possible world where flipping a coin leads one to believe that 2 + 2 = 5. In this way, safety theorists are able to explain why even beliefs in necessary truths can be unsafe.7 Accordingly, even if our hero had inferred a disjunction with a random claim as the second disjunct that happened to be (unbeknownst to our agent) a necessary truth, his belief would still be unsafe, as there will be a close possible world where that way of forming a belief in a disjunction results in a false belief.

With the foregoing in mind, we are now in a position to draw a contrast between typical instances of inferences from falsehood, like Detective, and knowledge from falsehood cases like Handout and President. In Handout, while the initial belief might be slightly inaccurate, the belief inferred on this basis is nonetheless safe.
In particular, Ted could not very easily form a false belief via the relevant inference, given that the number of handouts that he has is comfortably sufficient for the actual number of people in the room. The crux of the matter is that false beliefs that are approximately true can provide safe bases for true beliefs which aren’t affected by the relevant margin of error. The same is also true of the inferred belief in the President case. Although the initial belief is false, it is nonetheless sufficiently in the ballpark of the truth to ensure that the inferred belief is safe. Whereas the Handout case ensures this by having a numerical distance between the number at issue in the original belief and the threshold at issue in the inferred belief, the President case ensures this by there being a geographical distance. Given how far Brussels is from the region where the President is located, it follows that there is no close possible world where he makes it across to Brussels for that morning’s NATO meeting, which is why Fritz’s inferred belief is safe regardless. The geographical distance in this case, like the numerical distance in the Handout scenario, ensures there is the required modal distance to underwrite the safety of the inferred belief.

This way of explaining knowledge from error cases can also account for why analogous scenarios where there isn’t the same modal distance wouldn’t generate knowledge, because the resulting belief would not be safe. Consider this variant on the Handout scenario:

Handout*
Ted* needs to determine how many handouts he will need for his talk. Carefully counting the number of people present, he forms the belief that there are 99 people in the room. He accordingly infers that the 100 handout copies that he has will be sufficient. But Ted* miscounted, and there are in fact 98 people in attendance.

I take it that there is no temptation to ascribe knowledge to Ted* in this scenario. The only difference, however, is the numerical gap between the counted number of people in the room and the threshold number at issue in the inferred belief. The natural explanation of why this change makes such a difference to the case is that with this numerical gap reduced to a sliver, it is now very easy for Ted* to end up with a false belief by making this inference. In particular, it could have easily been the case that Ted*’s counting of the people in the room was off such that his inferred belief about his 100 handout copies being sufficient is false. Ted* thus has an unsafe basis for belief, in contrast to Ted. We can easily imagine a parallel variant of President where similar points apply (e.g., where the subject infers that the President is not attending this morning’s NATO meeting in Oregon), and thus where the inferred belief is unsafe.

3.

Since safety is only held to be necessary for knowledge, it obviously doesn’t follow that the inferred belief in the Handout case, or cases like it, amounts to knowledge, but at least we are able to point to a condition on knowledge which is present here but absent in the relevant Gettier-style cases.
Moreover, insofar as the rationale behind safety is to accommodate intuitions about the incompatibility of knowledge with veritic epistemic luck or high levels of epistemic risk, we can unpack the claim being made here in these terms.8 Consider anti-risk epistemology, for example. This appeals to a modal account of risk which holds that, roughly, we should evaluate risk by considering the modal closeness of the target risk event. So, for example, if the target risk event associated with plane travel is dying in a plane crash, then to treat such a mode of transport as high risk is to contend that there is a close possible world where someone who takes a flight ends up dying in a plane crash.9 With the modal account of risk applied to knowledge, and the target epistemic risk event identified as the formation of a false belief, we can see how anti-risk epistemology provides a rationale for the safety condition on knowledge. Knowledge excludes high levels of epistemic risk in that it

is incompatible with one forming one’s true belief on a basis that could very easily have led to the epistemic risk event of forming a false belief. Ergo, knowledge entails safety.10

With this rationale in mind, we have a way of explaining why knowledge might be in general incompatible with inference from falsehood but not universally so. In short, the explanation is that while inferring a true belief from a falsehood would generally make one’s belief subject to high levels of epistemic risk—in that the epistemic risk event of forming a false belief is modally close—this is not always the case. In particular, there is a class of cases, exemplified by Handout and President, such that a true belief formed on this basis is not subject to a high level of epistemic risk, in that the epistemic risk event is not modally close.

Relatedly, we can use anti-risk epistemology to explain why there are norms of epistemic responsibility that are applicable to these inferences in the way articulated above. As epistemically responsible believers, Ted and Fritz, the subjects in the Handout and President cases, are surely aware of how the inference they are undertaking is by its nature epistemically low risk, and this informs their willingness to make this inference. So long as their initial belief is roughly correct, their inferred belief will be bound to be safe. In contrast, given the moderate nature of their epistemic support for the entailing belief, as epistemically responsible believers, they would be wary about undertaking corresponding inferences that lack the modal distance to the target risk event, and which are thus epistemically high risk. Counting large numbers of people, even carefully, is prone to error, and as a responsible believer, Ted will be aware of this. Relatedly, catching a single news item on the TV is a relatively fallible way of determining the current location of the President.
In both cases, this level of fallibility is compatible with an epistemically responsible subject forming a belief in the target proposition, but not with the agent drawing inferences from it where there is no margin for error, and which are thus epistemically high risk. This is why Ted*’s willingness to infer that the number of handouts is sufficient indicates that he is not an epistemically responsible believer, as he ought to be sensitive to the fact that this is an epistemically high-risk inference. Part of what it is to be epistemically responsible in one’s inferences is thus to be alert to the extent to which the inference is epistemically risky.

4.

We thus have a potential rationale, in terms of the safety condition—and thus the anti-luck/anti-risk epistemology that motivates the safety condition—for distinguishing genuine cases of knowledge from error from corresponding inferences from falsehood where the inference does not generate knowledge. So are we then home and dry in terms of accounting for the phenomenon of knowledge from error? I don’t think so.

Knowledge from Error and Anti-Risk Virtue Epistemology 99

In order to see why, we need to note that this account of knowledge from falsehood as it stands implies that any case where a subject infers a true belief from a falsehood where the inferred belief is safe ought to be in the market for knowledge. After all, the reason why knowledge was lacking in a case like Detective is that the inferred true belief is unsafe, unlike the inferred true belief in a case like Handout. Accordingly, provided that the true belief is safe, and there is nothing independently epistemically amiss with the target belief, then why wouldn’t it amount to knowledge? Think of this in terms of epistemic risk. If the reason why knowledge is lacking in a case like Detective is that the level of epistemic risk is too high, but safety ensures that levels of epistemic risk are low, then why wouldn’t a safe true belief formed on the basis of an inference from falsehood (and which is not epistemically amiss in any other way) be an instance of knowledge, just like the Handout case?

The reason why I am laboring this point is that there are instances with a similar structure to knowledge from error cases where the inferred true belief is safe but where it does not amount to knowledge. Consider the following scenario:

Temperature
Thom is tasked with doing regular temperature readings of the industrial oven in operation at his workplace. For this he uses a digital thermometer that provides exact readings in Centigrade. He then converts the readings to Fahrenheit and writes down on a chart which 10-degree temperature range that reading belongs to (e.g., 220°F–229°F). Unbeknownst to Thom, the thermometer is broken, and the readings it is generating are completely random and hence almost always false. (Thom has no idea what readings he should be getting, and so he remains oblivious to the random nature of the readings.) Also unbeknownst to Thom, however, there is someone secretly observing him for whom it is crucially important that the results that Thom enters into the chart are correct. As a result, whenever this person sees Thom take a reading from the thermometer, he adjusts the temperature in the oven to ensure that it is comfortably within the appropriate Fahrenheit range by the time that Thom enters the converted reading into his chart.

Thom is thus responsibly drawing inferences from false beliefs (regarding the temperature readings in Centigrade from the faulty digital thermometer) that result in true beliefs about which Fahrenheit temperature band corresponds to the temperature in the industrial oven. Moreover, these inferred beliefs are entirely safe, given how they are formed, as Thom could not easily form a false belief in this manner; indeed, he is to all intents and purposes guaranteed to form a true belief in this regard because of the intervention of the hidden agent.

This case is thus unlike a standard case involving an inference from a false belief, in that the inferred belief is safe. Crucially, however, this scenario is also very different from the kind of knowledge from error cases that we looked at above, Handout and President. In particular, Thom’s inferred belief doesn’t seem to be in the market for knowledge at all, even despite its safety. This is because the safety of Thom’s belief has nothing whatsoever to do with his exercise of cognitive agency but is rather entirely due to the intervention of the hidden agent.

Compare Temperature in this regard with a case like Handout where the subject’s inferred safe belief does amount to knowledge. Although Ted is inferring a true belief from a false one, the safety of the inferred belief is significantly due to his exercise of cognitive agency—i.e., the cognitive agency involved in his false-but-approximately-true original belief and his inference to the entailed true belief, the truth of which lies a comfortable numerical distance from the original belief. Indeed, as we noted above, the intuition that Ted is a knower in this case depends on treating him as the kind of subject who wouldn’t have made this inference had it concerned a threshold that was close to the numerical value of his entailing belief, and which would thus have resulted in an unsafe (albeit true) belief. The crux of the matter is that Ted’s cognitive agency is playing an important explanatory role with regard to his safe cognitive success. This is not replicated in the Temperature case, however, as here the safety of the inferred belief is entirely attributable to the intervention of the helper rather than the cognitive agency of our hero.

5.

We thus have three types of scenario that concern us.
First, there are standard cases involving inferring a true belief from a falsehood, such as inferential Gettier cases or Detective, where the subject responsibly infers an unsafe true belief from (unbeknownst to them) a false belief and hence lacks knowledge. Second, there are genuine knowledge from error cases, like Handout and President, where the subject responsibly infers a safe true belief from (unbeknownst to them) a false belief, but where the safety of the true belief is significantly creditable to the subject’s cognitive agency. The subject thereby gains knowledge, even despite the false belief involved. Finally, we have a third kind of case, illustrated by Temperature, where the subject responsibly infers a safe true belief from (unbeknownst to them) a false belief, but where the safety of the true belief is not significantly creditable to the subject’s cognitive agency. I’m suggesting that in this third kind of case the subject does not gain knowledge.

In particular, if the foregoing is correct, then it is a mistake to think that what differentiates genuine knowledge from error cases from other scenarios where an inference from a false belief doesn’t lead to knowledge is simply whether the inferred belief is safe, as there is a further factor that we need to be alert to here (but which is easily overlooked). This is whether the safety of the subject’s belief is significantly attributable to her manifestation of cognitive agency. Relatedly, insofar as the rationale for safety is an anti-risk (or anti-luck) epistemology, one can’t differentiate the second and third categories of cases by appealing to this rationale alone, as they both concern a subject forming a safe true belief in the inferred proposition. Nonetheless, this way of explaining the difference is on the right lines. What it is missing is the further explanatory relation that the subject’s safe cognitive success should satisfy if it is to amount to knowledge. This point is obscured in genuine knowledge from error cases as the subject’s cognitive success is both safe and the safety of this cognitive success is significantly attributable to her manifestation of cognitive agency. As we have seen, however, where the latter is absent there is no longer any temptation to ascribe knowledge, which indicates that it is this aspect of knowledge that is crucial to understanding why knowledge is present in such cases.

Moreover, we can also adapt the motivation offered by anti-risk epistemology to explain why there is knowledge in genuine knowledge from error cases. It is not enough for knowledge that one’s cognitive success is not subject to high levels of epistemic risk (i.e., safe), as it is also required that this feature of one’s cognitive success is significantly attributable to one’s cognitive agency.
Elsewhere I have argued for this general way of thinking about knowledge under the description anti-risk virtue epistemology, in that it maintains that knowledge involves an interplay between the exclusion of epistemic risk (as represented by the necessity of the safety condition) and the manifestation of cognitive ability (as represented by the required explanatory relation between one’s safe cognitive success and one’s cognitive agency).11 In particular, what is important is that the immunity of one’s true belief to high levels of epistemic risk bears an appropriate explanatory connection to one’s manifestation of cognitive ability. This is why knowledge is lacking in cases like Temperature, even though the subject’s responsibly inferred true belief is safe. But it is also why knowledge is present in genuine cases of knowledge from error, in that while the inference is made from a false belief, the resulting belief is not only safe, but safe in a manner such that it is significantly attributable to the agent’s cognitive agency.

6.

We have claimed that knowledge from error is a genuine phenomenon and also explained how it can be possible, given that in general one does not acquire knowledge by inferring a true belief from a falsehood. Our explanation primarily concerned an appeal to the safety of the beliefs so formed (and the general unsafety of beliefs inferred from falsehoods), and the underlying anti-risk motivation for safety. As we saw, however, this story needs to be complicated, for what is in fact doing the work of distinguishing knowledge from error cases from parallel cases where knowledge is not acquired is not just the safety of the inferred belief, but also the fact that this safe true belief is significantly attributable to the subject’s manifestation of cognitive ability. We thus get an explanation for why there can be knowledge from error via appeal to a distinctive and independently motivated way of thinking about knowledge known as anti-risk virtue epistemology.12
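The two necessary conditions that the chapter's anti-risk virtue epistemology places on knowledge can be compressed schematically. The notation below is the editor's own gloss, not Pritchard's: \(B_S\,p\) is S's belief that p, \(\mathrm{Safe}(\cdot)\) says that belief could not easily have been false when formed in the same way, and \(\mathrm{Attr}(\cdot,\mathrm{CA}_S)\) says that a feature is significantly attributable to S's cognitive agency.

```latex
\[
K_S\,p \;\Rightarrow\;
\underbrace{\mathrm{Safe}(B_S\,p)}_{\text{anti-risk condition}}
\;\wedge\;
\underbrace{\mathrm{Attr}\big(\mathrm{Safe}(B_S\,p),\,\mathrm{CA}_S\big)}_{\text{virtue-theoretic condition}}
\]
```

On this sketch, Detective-style cases fail the first conjunct, while Temperature fails the second (the safety there is owed to the hidden helper, not to Thom), which is the three-way classification just drawn.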

Notes

1. This example is originally due to Warfield (2005, pp. 407–408).
2. See, for example, Warfield (2005), Klein (2008), Fitelson (2010), and Luzzi (2014).
3. This case is also originally due to Warfield (2005, p. 408).
4. For some discussions of this way of responding to putative knowledge from error cases, see Warfield (2005), Ball and Blome-Tillmann (2014), Montminy (2014), and Schnee (2015).
5. The first to notice this point was, I believe, Warfield (2005, pp. 414–415), albeit not quite in these terms (i.e., while he doesn’t mention safety specifically, this does seem to be the kind of condition on knowledge that he has in mind).
6. We will expand on the notion of safety in a moment. For some of the main defenses of safety, see Sainsbury (1997), Sosa (1999), and Williamson (2000). I develop my own account of safety in Pritchard (2002, 2005, passim; 2007, 2012a, 2012b, 2015a).
7. For further discussion of this point, see Pritchard (2007, 2012a, 2012b).
8. In earlier work, I offered a detailed defense of the idea that the safety condition on knowledge should be understood as excluding veritic epistemic luck—see Pritchard (2004, 2005, passim; 2007, 2012a, 2012b). In more recent work, I have further developed this claim by focusing on epistemic risk specifically—see Pritchard (2015c, 2016, 2017, 2020).
9. For further defense of the modal account of risk, see Pritchard (2015c). See also the closely related modal account of luck, as articulated, for example, in Pritchard (2014).
10. Indeed, anti-risk epistemology—and, for that matter, anti-luck epistemology—motivates a particular way of interpreting the safety principle, including inter alia the idea that safety is not proposition-specific, and so can accommodate unsafe beliefs in necessary truths, as noted above. See Pritchard (2007, 2012a, 2012b, 2016, 2020).
11. See especially Pritchard (2020). Note that anti-risk virtue epistemology is a refinement of my earlier account of knowledge, anti-luck virtue epistemology, and as such shares many of its core structural features. For further discussion of the latter, see Pritchard et al. (2010, ch. 4) and Pritchard (2012a).
12. For helpful discussion of topics related to this paper, I am grateful to Sven Bernecker and Bin Zhao.

References

Ball, B., & Blome-Tillmann, M. (2014). Counter closure and knowledge despite falsehood. Philosophical Quarterly, 64, 552–568.
Fitelson, B. (2010). Strengthening the case for knowledge from falsehood. Analysis, 70, 666–669.
Gettier, E. (1963). Is justified true belief knowledge? Analysis, 23, 121–123.
Klein, P. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–61). Oxford University Press.
Luzzi, F. (2014). What does knowledge-yielding deduction require of its premises? Episteme, 11, 261–275.
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44, 463–475.
Pritchard, D. H. (2002). Resurrecting the Moorean response to the Sceptic. International Journal of Philosophical Studies, 10, 283–307.
Pritchard, D. H. (2004). Epistemic luck. Journal of Philosophical Research, 29, 193–222.
Pritchard, D. H. (2005). Epistemic luck. Oxford University Press.
Pritchard, D. H. (2007). Anti-luck epistemology. Synthese, 158, 277–297.
Pritchard, D. H. (2012a). Anti-luck virtue epistemology. Journal of Philosophy, 109, 247–279.
Pritchard, D. H. (2012b). In defence of modest anti-luck epistemology. In T. Black & K. Becker (Eds.), The sensitivity principle in epistemology (pp. 173–192). Cambridge University Press.
Pritchard, D. H. (2014). The modal account of luck. Metaphilosophy, 45, 594–619.
Pritchard, D. H. (2015a). Anti-luck epistemology and the Gettier problem. Philosophical Studies, 172, 93–111.
Pritchard, D. H. (2015b). Epistemic angst: Radical skepticism and the groundlessness of our believing. Princeton University Press.
Pritchard, D. H. (2015c). Risk. Metaphilosophy, 46, 436–461.
Pritchard, D. H. (2016). Epistemic risk. Journal of Philosophy, 113, 550–571.
Pritchard, D. H. (2017). Anti-risk epistemology and negative epistemic dependence. Synthese. https://doi.org/10.1007/s11229-017-1586-6
Pritchard, D. H. (2020). Anti-risk virtue epistemology. In J. Greco & C. Kelp (Eds.), Virtue epistemology (pp. 203–224). Cambridge University Press.
Pritchard, D. H., Millar, A., & Haddock, A. (2010). The nature and value of knowledge: Three investigations. Oxford University Press.
Sainsbury, R. M. (1997). Easy possibilities. Philosophy and Phenomenological Research, 57, 907–919.
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12, 53–74.
Sosa, E. (1999). How to defeat opposition to Moore. Philosophical Perspectives, 13, 141–154.
Warfield, F. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.
Williamson, T. (2000). Knowledge and its limits. Oxford University Press.

7

Epistemic Alchemy?

Stephen Hetherington

Can a belief be knowledge when its justificatory basis falls non-trivially short of being knowledge – so that knowledge is emerging, in a justified way, from what is definitely not knowledge? I will argue, indirectly, that this can occur. We will witness the failure of one apparently strong reason for denying that it can happen. I will briefly sketch a revisionary – and deflationary – conception of knowledge that flows from my argument.1

7.1  Motivating a Hypothesis

We are asking whether this hypothesis is true of inferential knowledge:

nKnK  It is impossible for a belief to be knowledge if its justificatory basis falls non-trivially short of being knowledge. (Only non-knowledge can emerge justifiedly from what is substantially non-knowledge.)2

There are a few possible paths that we might follow if seeking to motivate belief in nKnK. Let us focus on one especially direct path – the modal-worry argument for nKnK. That argument concentrates on the epistemic danger supposedly present when evidence is nowhere near as strong, in epistemic terms, as whatever is being claimed to emerge from it.3 We may work with a specific version of that disparity in epistemic strengths: because this book is about knowledge, I will discuss the epistemic danger to the presence of knowledge when a justificatory basis falls non-trivially short of being knowledge. Can the resulting belief be knowledge? Or is nKnK true?

We may render that question more specific still. Knowledge’s most indisputable feature is truth – a way of being ‘linked to’ (aspects of) the world. So, the most indisputable nKnK-motivating possible demand might be this: any evidence adduced as justification helping to make a belief knowledge must not weaken or undermine that link; otherwise, the resulting belief will fail in that (truth-linking) way to be knowledge.

DOI: 10.4324/9781003118701-11

This is not to require any such base for knowledge to be infallible, with the justification not allowing even an epistemic possibility of mistake within the knowledge. It is to require the justificatory base to not be weaker than is needed – no matter what this sort and degree of justificatory strength must be if knowledge is to result from reliance upon it.4 And nKnK is one way to capture this picture. We might parse the point thus: it is not that relying on non-knowledge flirts with a mere possibility of mistake; rather, one is opening the door (at least when the non-knowledge falls well short of being knowledge) to a more substantial possibility of mistake. So, we can accommodate infallible knowledge and fallible knowledge in discussing nKnK, even when talking generically about knowledge.

Here is an expansion of that thinking, making clearer how it might lead to nKnK.

(a) If a belief’s justificatory base is non-trivially non-knowledge, by being false in a substantial way,5 then (with all else being equal) there is a worryingly increased possibility of the belief being false – hence not being knowledge.

(b) Whenever there is that worryingly increased possibility of a belief’s not being true (hence not knowledge), the belief is already not knowledge.

That is, a belief’s having a justificatory base that (due to non-trivial falsity) falls well short of knowledge suffices for the belief’s not being knowledge – because, given that base, there was a significant possibility of the belief’s not being true, hence not knowledge.

That is the modal-worry argument for nKnK. It is this chapter’s concern. Although not the only possible way to argue for nKnK, it has the dialectical virtue of focusing on knowledge’s most unquestionably needed feature – truth.
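Compressed into schematic form, the modal-worry argument is a chained conditional. The predicate labels below are the editor's, introduced only for perspicuity; they do not appear in the chapter:

```latex
\[
\begin{aligned}
\text{(a)}\quad & F(\mathcal{B}_b) \rightarrow R(b)\\
\text{(b)}\quad & R(b) \rightarrow \neg K(b)\\
\therefore\quad & F(\mathcal{B}_b) \rightarrow \neg K(b)
\end{aligned}
\]
```

Here \(F(\mathcal{B}_b)\) says that belief \(b\)'s justificatory base is substantially false (hence non-trivially non-knowledge), \(R(b)\) that there is a worryingly increased possibility of \(b\)'s being false, and \(K(b)\) that \(b\) is knowledge. The conclusion is just nKnK restricted to the falsity-based case, and the chapter's later challenge targets premise (a)'s relevance once \(b\) is in fact true.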

7.2  Gettier’s First Case

How does the modal-worry argument for nKnK fare when applied to what most epistemologists see as a case where a belief clearly fails to be knowledge, and where that belief is based on what is largely non-knowledge (thanks at least to falsity)? This is perhaps the most famous Gettier case – Edmund Gettier’s (1963) job/coins case.6

Gettier introduces us to Smith, whose company president has assured him that a job available within the company will be filled by Jones. As it happens, Smith knows that Jones has ten coins in his pocket. Smith infers, combining these gems of apparent information, that Jones will get the job and that Jones has ten coins in his pocket. (Call this Smith’s intermediate belief.) He infers from it a further belief that the person who will get the job has ten coins in his pocket. (Call this Smith’s final belief.)

Does the case end there? Far from it. Smith’s final belief is true – yet not for a reason upon which he would call, in explaining the belief’s truth. He would cite his evidence, direct from the company president, of Jones’s impending success. But wait: Smith will be offered the job. Wait again: Smith also has ten coins in his pocket. Was he aware of this? No – no more than he was of the job’s heading his way. So, Smith’s final belief – surprisingly – is true. It is also (epistemically) justified, being based on a combination of direct testimony from the company president, direct observation by Smith of the coins in Jones’s pocket, and deliberative deductive inference by Smith from his thereby justified intermediate belief to his final belief.

Thus, Smith’s final belief is true and justified. Surely, though, it is not knowledge: something is amiss in attributing to Smith the knowledge that the person who will get the job has ten coins in his pocket. QED – the end of Gettier’s first story. His two tales tokened the start of an ensuing epistemological saga, one still taking shape. Here, I will not enter the vastness of post-Gettier epistemology.7 I will prepare for my linking of Gettier’s case with the modal-worry argument for nKnK.

7.3  Gettier’s Challenge

I did not finish describing the supposed point of Gettier’s case. Here is how it would standardly be concluded. The hypothesis JTB is therefore false: a belief’s being true and justified does not entail its being knowledge.

What is ‘JTB’? Gettier lobbed a conceptual grenade into a comfortable neighborhood – professional epistemology. He asked us to infer the falsity of a hypothesis, couched by him as a definition, about knowledge’s nature. That definition was soon called ‘JTB’ by many.

Someone, S, knows that p (for a proposition ‘p’) = df. It is true that p, S believes that p, and this belief is justified (S has good epistemic justificatory support for the belief’s being true).

Gettier supposedly falsified this definition by describing two possible situations where someone – yes, Smith – forms a belief, after what would ordinarily be enough investigative work, that is true and is epistemically justified, yet is not knowledge. Lo and behold: Gettier had shown that knowledge is not merely, as a matter of definition, a true and epistemically justified belief.

We have met Gettier’s first case. Epistemologists quickly wondered: how are we to define knowledge, if not as ‘justified true belief’? They began asking why Smith’s final belief, for example, is not knowledge although justified and true. To uncover that reason would (they presumed) be to understand how knowledge could have been present within the case, suitably modified. We are invited to change the imagined situation in … this imagined way (we are offered a description of that ‘way’) … and thus ‘we will agree’ that now we see how, in imagination, to change a situation where a justified true belief fails (in a Gettier-distinctive way) to be knowledge, so that now knowledge is present.

7.4  One Way of Responding to Gettier

A simple post-Gettier response was Michael Clark’s (1963): no belief is knowledge if its justificatory grounds include something false. Such thinking might still tempt us, once we direct it more broadly, against justification that includes non-trivial or substantial falsity.8 But why would we be tempted?

Let us blend our discussion of Gettier’s challenge with our earlier linking of the regress problem to the question of whether nKnK is true. If we wish to explain, even partly, Smith’s belief’s not being knowledge, by highlighting his relying on evidence that is false in a substantial respect, we might reach for nKnK. When highlighting Smith’s using substantially false evidence, we might add that such evidence is substantially short of being knowledge. Hence, Smith is exemplifying nKnK, the thesis that knowledge cannot arise from what is substantially non-knowledge. And this (we might infer) could be part of why his final belief is not knowledge. If you have what is non-trivially short of being knowledge, and you use it as evidence in forming a further belief, the latter belief is therefore not knowledge. So, even if there are further possible reasons why Smith’s final belief is not knowledge, we might suspect that there is at least this reason: the belief is based on non-trivially false evidence, and thus is not knowledge – since no belief is knowledge when its justificatory basis fails non-trivially to be knowledge.

Yet is even that seemingly sensible post-Gettier reasoning so strong?

7.5  Gettier and Epistemic Regress

I have focused on Gettier’s case, for two reasons. First, it is a professionally pointed way to hone our question of whether knowledge can arise from justification falling substantially short of being knowledge. A substantial part of Smith’s justificatory base is his (false) intermediate belief and, prior to that, his (false) presidentially derived belief that Jones will get the job. Nothing is clearer within epistemology than that no false belief is knowledge.

Second, I will linger longer with the potentially explanatory linking of Gettier’s case to the idea of epistemic regress (whose link with nKnK was noted in Section 7.1). Does Smith’s evidence’s falling non-trivially short of knowledge (by being substantially false) explain why his consequent final belief is not knowledge? D.M. Armstrong (1973, pp. 152–153) described the case in those classically influenced terms. His diagnosis was that inferential knowledge must be based on knowledge, and that Smith’s belief would be inferential knowledge if knowledge at all, being based on his intermediate belief and its justificatory basis (the company president’s testimony and Smith’s observational evidence of Jones’s coins):

because possession of [false] grounds could not constitute possession of knowledge, I should have thought it obvious that they are too weak to serve as suitable grounds. It is not surprising, therefore, that Gettier is able to construct examples where a true belief is justified in an ordinary sense of ‘justified’, but the true belief is clearly not a case of knowledge.

Armstrong was adverting to the epistemic regress challenge. Some philosophers regard Gettier’s challenge as an epistemological distraction. The epistemic regress challenge is ancient; Gettier’s is not. But we can engage with both by talking about Gettier’s case. In asking anew whether his is a case where a belief fails to be knowledge, due to its justificatory basis falling non-trivially short of knowledge, I will also be questioning the ancient line of thought at the heart of the nKnK-encapsulated worry about how knowledge is present if not based ultimately on further knowledge – indeed, on basic knowledge.9

What will be the result? Gettier questioned a seemingly venerable and inviting conception of knowledge. He did so with what epistemologists have overwhelmingly agreed is a portrayal of a belief failing to be knowledge.
Not everyone, when aiming to understand why Smith’s belief is not knowledge, has concentrated on the belief’s justificatory base being substantially false (and thereby not knowledge). But some have done so.10 The resulting discussion – of a paradigmatic instantiation of nKnK – is thus of a view with historical resonance and conceptual plausibility. If I can show that, even here, the justificatory base’s falling well short of knowledge does not suffice for the inferred belief’s not being knowledge, we will have reason to conclude that it is not clearly true that knowledge cannot arise, even justifiedly, from what falls well short of being knowledge.11

7.6  The Pivotal Section

Enough already with the stage-setting; let us start evaluating nKnK. I suggested that one underlying motivation for it might be the modal-worry argument. How strong is that argument? Can it explain why Smith’s belief fails to be knowledge?

Here is the element of that argument (from Section 7.1) upon which I will focus12:

(a) If a belief’s justificatory base is non-trivially non-knowledge, by being false in a substantial way, then (with all else being equal) there is a worryingly increased possibility of the belief being false – hence not being knowledge.

When applying that claim to Gettier’s job/coins case, we meet this: Smith’s justificatory base falls well short of being knowledge, since a non-trivial portion of it is false. So, although his final belief is true, there was a worryingly increased likelihood of his forming a false final belief.

Whereupon different precisifications beckon, which is why, for example, the ideas of epistemic safety and epistemic luck receive much attention.13 Think of how easily we talk of its being ‘just lucky’ that Smith’s belief is true. This might feel as though we really are explaining, in modal terms, the unfortunate effect of justificatory reliance upon evidence falling well short of knowledge, if one’s goal is to attain knowledge. But, as we will now find, a serious problem limits the immediate relevance of those attempts – a problem of methodological principle that undermines the modal-worry argument as a way to understand Smith’s belief’s failing to be knowledge.

I expect anyone tempted by the modal-worry argument to reach for thinking along these lines: there are worryingly many possible worlds where Smith has the same evidence as in Gettier’s case, also using the evidence in the same way as in that case, and forming the same final belief – yet where (unlike in the actual case) his final belief is false.
There are ‘many’ such worlds because in most worlds, when using that same evidence (about Jones) in that same way to form the same final belief (‘The person who will get the job has ten coins in his pocket’), one is in a situation where that final belief is made true by Jones, in accord with that same evidence. So, Smith’s actual world (described by Gettier) is unrepresentative, compared with that larger group of worlds: in Gettier’s case, the same final belief is made true, not by Jones, but (surprisingly) by Smith – a fact at odds with Smith’s actual evidence, which talks only about Jones.

But that familiar thinking does not help our present explicative quest: it is not showing how to understand Gettier as telling a salutary tale of Smith’s relying on justification that falls substantially short of knowledge (by being non-trivially false), and thereupon forming a belief that is not knowledge (in spite of being true). I include ‘in spite of being true’, because in Smith’s case, a true belief has resulted, and so, in evaluating whether his true belief is knowledge, we need to consider only possible worlds where that outcome remains. This is so, even when we might sense the approach of the dead hand of the modal-worry argument, assuring us that no knowledge has resulted, since a true belief was unlikely to be the result (with this being modeled by those many worlds where a false belief has resulted from that same evidence that falls clearly short of being knowledge).

Let us expand on that critical point. In evaluating nKnK, we are asking whether a belief can be knowledge when justified by evidence falling well short of knowledge. It will seem to many epistemologists (after setting aside any independently motivated search for a conceptually reductive definition of knowledge) that Smith’s reaching his final belief is an instance of manifest non-knowledge giving rise to … non-knowledge. Consider, accordingly, an epistemologist who sees in Smith’s plight clear evidence supporting nKnK. But consider also asking that epistemologist to explain why Smith’s belief is not a belief, justified partly by evidence falling well short of knowledge, that nonetheless manages to be knowledge. (I am not insisting that it is knowledge; I am asking why it is not.) Since Smith’s final belief is Gettiered, it is true – hence at least that close to being knowledge. So, why is it not knowledge? Can the modal-worry argument help us here? Being told (in the standard thinking proposed a moment ago) about worlds where Smith’s justificatory base does not result in a true (final) belief is no way to meet the explicatory challenge described just now.
The proposed standard reasoning, applying the modal-worry argument, reminds us that there is a worryingly increased chance of a false belief arising once the justificatory base is substantially short of knowledge (by being substantially false). But we are talking of a case – Gettier’s first – where the justificatory base has led Smith to a true belief. So, by adhering to the modal-worry argument’s thinking, we cannot be evaluating the epistemic prospects, modeled by suitable reidentification across possible worlds, attached to forming a true belief on the justificatory basis of evidence falling well short of knowledge. Again, we need to be told why even a true belief derived from such clear non-knowledge fails to be knowledge (as nKnK insists).

We might wish to describe independently the likelihood of a far-from-knowledge justificatory base (including some substantially false evidence) leading to a false belief. Perhaps nKnK is true of such cases. But this is not my focus. I am asking whether, once that non-knowledge justificatory base has led to a true belief (as in Smith’s case), this true belief can be knowledge arising from a non-knowledge justificatory base. If this state of affairs has not been shown, via Gettier’s case, to be impossible, then we have not seen nKnK’s being supported by this case – one that we might have confidently expected to support nKnK. We should, accordingly, be less confident of nKnK’s being true.

I do not doubt that justificatory reliance on evidence falling substantially short of being knowledge often fails to result in a belief that is knowledge: within many relevant possible worlds, a false belief is the result. But even this concession is not enough, when relying on the modal-worry argument, to establish nKnK. The concession does not show that reliance on clear non-knowledge never results in knowledge, because it cannot be modeling this for occasions when a true belief has resulted from the reliance, and because the only failing to which the modal-worry argument points is a false belief’s being formed (in supposedly relevant worlds).

We thus have an indirect argument against nKnK, opening the epistemic door to the possibility of at least some beliefs being knowledge even when formed from a justificatory base falling substantially short of knowledge (by being substantially false). Do we therefore have a possible path taking shape, directing our gazes toward the still-distant conclusion that knowledge can arise, in a justificatory way, even from marked non-knowledge – at least once a true belief has arisen in that way?

7.7  Knowledge-Minimalism Introduced

I now venture to strengthen that conventionally discordant note. Might it be that, once a true belief has arisen, this is enough to constitute knowledge’s presence? So far, my reasoning has needed little more than sensitivity to truth’s being necessary to a belief’s being knowledge. Should something even more striking be considered – truth’s being sufficient for a belief’s being knowledge? So, let us meet, in an exploratory spirit, the heterodoxy that is knowledge-minimalism. The previous section showed why, insofar as a belief is true, we cannot show that it is not knowledge by noting how easily it might not have been true. Why so? The key here is that use of ‘it’. The ‘it’ that is the true belief as such cannot fail to be true; only the ‘it’ that is the belief as such could fail to be true. And the latter allowance reflects only – and unremarkably – that not all beliefs as such are knowledge. This hardly establishes, however, that being a true belief is insufficient for being knowledge. Here, then, is an idea, motivated by that failure of the modal-worry argument, with which we may take a step toward knowledge-minimalism. It is possible (for all that we have shown to the contrary) for a true belief – by being true – to be knowledge, even when its justificatory base falls well short of knowledge.

A fortiori, that idea seems to lead naturally to this one. It is possible for a true belief to be knowledge – by being true – when its justificatory base is knowledge. If even a justificatory base of clear non-knowledge cannot be shown – in an obvious way, such as via the modal-worry argument – to leave a true belief short of being knowledge, then we have no reason for pessimism about a true belief being knowledge when its justificatory base is knowledge. Thus, we meet knowledge-minimalism, at least as a possibility. We have reason to regard it as possible, in at least the epistemic sense that what is seemingly a clear way not to be a knowledge-minimalist (imposing upon knowledge the requirement that a justificatory base be present, particularly one unlikely to impede the justified belief’s chance of being true) cannot explain why even Smith’s final belief is not knowledge. I am working here with the idea that, once one has a true belief, it does not matter – for that true belief’s being knowledge – whether its justificatory base is knowledge. I have not relied on, for example, the thesis that a true belief can be knowledge even if its justificatory base falls short of knowledge only by not being infallibly justified. Far from it: we have seen that, even if a justificatory base is not true, the resulting true belief cannot be shown not to be knowledge (due to a correlatively increased chance of falsity’s resulting). That understates the point: we have found that, even when the justificatory base is false in a substantial way, a resulting true belief cannot be shown – by describing a correlatively increased chance of falsity – to fall short of knowledge. This is because there is no possible world where that increased chance has been actualized, for that true belief as such – the true belief qua true belief. Once a belief is true, then, could it be knowledge, regardless of whether it is based at all on justification?
We might continue demanding, perhaps for social reasons, that people have a justificatory base for beliefs that we are happy to call ‘knowledge’. But maybe this is only something that we must do for social purposes. It might not be reflecting a fact that, in order to be knowledge, a true belief must be accompanied by justification. So, I am not arguing for our never having good reasons, of some sort, to ask people to seek justification, even when their ultimate quarry is knowledge. Feel free to seek good evidence; even for epistemic reasons, do so. A knowledge-minimalist’s central point remains that no such evidence was needed to satisfy the metaphysics of the knowing: a true belief is knowledge, if it is, purely insofar as it is a true belief; a justified true belief is also knowledge, if it is, purely insofar as it is a true belief. Practicality can enter this story. If in fact, as a contingent matter about

empirical realities of seeking knowledge within an exacting domain, good evidence is needed as a causal means for discovering a particular truth, this should be acknowledged. Still, that evidence is not thereby metaphysically essential to any resulting knowledge. A physicist seeking knowledge about particles probably seeks evidence. But once a true belief is thereby acquired, the evidence has done its job – which was causal, not metaphysical. The evidence is not thereby a lingering component of the knowledge – the accurate belief – to which it has led the grateful physicist. That is the core knowledge-minimalist picture.14

7.8  Knowledge-Minimalism Defended

This is not the first time that I have defended a conceptual minimalism about knowledge’s nature.15 But because it is an epistemologically heterodox thesis, any such defense faces high hurdles. Here, I offer a few further comments supporting it. Epistemologists often say that ‘intuitively’ it is ‘clear’ or ‘obvious’ that even a true belief is not knowledge if it is not justified (such as by excellent evidence). How is one to argue against that? Should one lay claim to a competing ‘intuition’? I eschew reliance on intuitions in my epistemological writing. If we do look to them, I agree that it can feel odd to call ‘knowledge’ a true belief neither accompanied by apparently good evidence nor apparently produced in a truth-sensitively reliable way. Perhaps, equally, it feels odd to deem only the true belief within a justified true belief as the knowledge, strictly speaking. Thankfully, though, epistemology can do more than defer to ‘intuitions’ in such instances. We might even regard these cases as not so intuitive. For example, when an epistemologist claims that it is ‘intuitive’ that a ‘mere’ – an unjustified – true belief is not knowledge, how is she to distinguish between those two states of affairs described at the end of the previous section? Even if we concede that no true belief has been knowledge without being accompanied by justification, how does that concession decide between the following options?

• As a matter of contingent fact, no true belief is knowledge without being justified.
• As a matter of necessary fact, no true belief is knowledge without being justified.16

Those are distinct states of affairs. They open the door to another choice that should not be made with confidence on the basis of ‘intuitions’ – between justification’s accompanying some knowledge and its being a metaphysically constitutive part of the knowledge. If we agree that knowledge is at least a true belief, this distinction is easily

formulable – and in a way that accommodates knowledge-minimalism. Schematically, the difference is at least this:

• [T + B] + J
• [T + B + J]

No wonder it might be difficult, even impossible, for intuitions to show that an instance of knowledge constitutively is, in part, the presence of justification. I am not, in these schemas, assuming that knowledge is only ever a true belief, as knowledge-minimalism claims. But I am assuming that, even if knowledge is always at least a true belief, and even if (either necessarily or contingently) justification is also present, then the justification might not be part of the knowledge. How could we know, especially by mere ‘intuition’, that this is not an aspect of the true metaphysics of knowing? Here, we may adapt an idea from W. V. Quine (1975). He asked us to imagine what he called empirically equivalent theories; we may envisage empirically equivalent theories of knowledge. Compare these two theories.

• Knowledge = true belief. (That equation is knowledge-minimalism.) As it happens, when seeking knowledge, we also want (a true belief accompanied by) good justification.
• Knowledge = true belief accompanied by good justification. (That equation is JTB.) So, of necessity, when seeking knowledge, we thereby want a true belief accompanied by good justification.

The first theory adds to knowledge-minimalism the empirical claim that people seek justification when seeking knowledge. This empirical claim might, or might not, be true, given knowledge-minimalism. In contrast, the second theory infers, from JTB, a metaphysical insistence that people are seeking justification in seeking knowledge. How can we choose between these theories? Psychological profiles of actual knowledge-seekers are not decisive; in any case, they need not differ in that respect, since they might be seeking justification just as keenly, when seeking knowledge, no matter whether justification is literally a metaphysical part of knowledge.17 How will an ‘intuition’ reveal, with certainty, that one of the theories is epistemologically preferable? The second one fits better within contemporary epistemological theorizing. But this is hardly conclusive. Much of that theorizing is already built around the presumption of the second schema’s correctness. If we were to start epistemology afresh, building anew our theories after having thought in as theoretically innocent a way as we can about what we want from such theories, we might find that knowledge-minimalism – the first half of the first theory-schema – is not so alien,

especially given its allowing, no less so than JTB does, knowledge-seekers to be justification-seekers. It also allows, unlike JTB, that not gaining such justification is compatible with gaining knowledge. So, since presumably our world includes many true beliefs unaccompanied by good justification, in practice the first theory allows our world to include more knowledge than the second theory does. Again, though, how does ‘intuition’ show us that the second theory is therefore – independently of already being committed to the theory – to be preferred? For example, suppose that we seek to do justice, through our choice of theory of knowledge, to what we think the value should be in knowing. How is it more valuable to know than not to know? Knowledge-minimalism is standardly dismissed as not doing justice to knowing’s value. But, we have in effect seen, that charge might not be fair. Even if at least part of knowing’s value is tied to how the true belief arises, such as by being based aptly on good evidence, this value can still be present whenever the true belief is, since even knowledge-minimalism allows that the true belief can have been caused by being based aptly on good evidence. All that must differ in practice is the metaphysics: the knowing as such would not (metaphysically) include the presence of the justification (such as the evidence) – which would have played a causal role in bringing the knowing into existence. Why must that combination have less value for the knower as such (even with all else being equal)? A justified piece of knowledge – where the knowledge is, in itself, simply a true belief – can be exactly as valuable as a justified true belief – which is, as this whole, the knowledge.18

Notes

1. I am grateful to discussants in an excellent Zoom meeting (organized by this book’s editors) on a draft of this chapter, and to John Biro, Rodrigo Borges, and Mike Veber for helpful post-discussion comments.
2. Terms such as ‘non-trivially’ and ‘substantially’ are vague; trying to precisify them could dominate this chapter. I will discuss clear applications of them. By chapter’s end, it will be manifest that – given my argument’s details – we will not have needed to commit ourselves to such precisifications.
3. Why is this idea pertinent to whether nKnK is true? One available linking hearkens back to epistemology’s venerable attempts to understand – and defuse – a threat posed, by the concept of epistemic regress, to our ability to understand a belief’s having a desirable epistemic status (such as knowledge) by being based on a further belief, say. That linking is relevant because nKnK is about putatively inferential knowledge. Here is some thinking that starts from the possible threat of epistemic regress, before leading to nKnK’s putative truth. How is a belief to be knowledge, when based on (such as by being inferred from) another belief? How can inferential knowledge ever exist? For a start, the basis belief had better be knowledge – thereby having no lesser an epistemic status than the resulting belief will have in being knowledge.

Otherwise, the basis belief lacks too much of ‘the right epistemic stuff’ needing to be imparted if a belief based on it is thereby to be knowledge – inferential knowledge.
4. I make this distinction to preserve the conceptual possibility that some knowledge is fallible rather than infallible. On the difference between fallible knowledge and infallible knowledge, see Hetherington (1999, 2016b).
5. Some falsity, it is often conceded, can be present without preventing knowledge’s arising. But when the falsity is front and centre, seemingly playing a notable role in the epistemic agent’s thinking (without her noticing its falsity), this – we might fear, pretheoretically – is a different story. See Warfield (2005), Klein (2008), Luzzi (2010), de Almeida (2017), and Borges (2020).
6. I use the term ‘Gettier case’ in a standard way, meaning ‘one of Gettier’s two cases, in his 1963 paper, or any of many cases that subsequent epistemologists have deemed to be sufficiently similar to Gettier’s’. I do not presume that all Gettiered beliefs (another standard term, designating the belief at the core of a Gettier case) fail to be knowledge. Epistemologists typically regard all Gettiered beliefs as failing to be knowledge. But that orthodoxy should not be part of the definition, in effect, of a Gettier case or a Gettiered belief. Not every epistemologist concludes that all Gettiered beliefs fail to be knowledge. For a start, I do not (e.g. Hetherington, 1998, 1999, 2011a, ch. 4; 2016a, ch. 7). Nor, perhaps, do all non-epistemologists: see Weinberg et al. (2001) for the original X-phi (experimental philosophy) paper alerting us to some ‘folk reactions’ to Gettier cases (among other stories). See also Starmans and Friedman (2020) for some reactions by ‘academics across a wide range of disciplines’.
7. On Gettier’s cases, and resulting decades of post-Gettier epistemology, see Shope (1983), Lycan (2006), and Hetherington (e.g. 2011b, 2016a). This chapter’s argument adapts the main anti-orthodoxy argument in my Knowledge and the Gettier Problem (2016a). Note that the idea that someone might deduce a justified true belief from something false, with the resulting belief thereby not being knowledge, was present in Russell (1959, p. 76).
8. A well-known instance of that sort of broadening was Lehrer’s (1965), improving on Clark’s version.
9. Discussion of epistemic regress was out of fashion for a while but maybe is returning. See Turri and Klein (2014) and Atkinson and Peijnenburg (2017).
10. Perhaps more should have done so. There is another reason why few epistemologists have also highlighted Smith’s evidence’s not being knowledge. Gettier’s target was a definition of knowledge. The usual response was that a better definition was needed – with no such definition allowed to include the term ‘knows’ (or a cognate), since it must be conceptually reductive. Williamson (2000, ch. 1) made much of this. For critical discussion of his argument, see Cassam (2009). For a non-reductive response to Gettier, see Hetherington (2016a, ch. 7).
11. Might we then need to conceive anew of what knowledge is? I return below to this question.
12. The following argument is similar in spirit to one that I have developed elsewhere (e.g. Hetherington, 2016a, 2019a, 2019b).
13. On these ideas, see, for example, Sosa (1999) and Pritchard (2005, 2014). For critical engagement, see Hetherington (2014, 2016a, ch. 3).
14. Not all of its details are unique to knowledge-minimalism. Sayre (1997, pp. 124–125, 127–128) sees justification as contributing causally to generating knowledge, without being a knowledge-constituent. He does not endorse knowledge-minimalism, though; he sees knowledge as not any kind of belief.

15. For earlier defences, see Hetherington (2001, ch. 4; 2011a, ch. 4; 2018a, 2018b, 2020).
16. (i) In each of these, too, we might change ‘is’ to ‘has been’. Has the evidence so far, perhaps our ‘intuitive’ reactions, been inductive and observational, not automatically evidence of what knowledge will always be, even contingently, let alone necessarily? (ii) What of ‘necessary fact’? Should that be, more cautiously, ‘conceptual fact’? Should we then look again at (i), insofar as we might treat conceptual matters as (conceptually) distinguishable from what is (metaphysically) necessary?
17. Apposite, too, is this thought of Danto’s (1984: 13): The form of a philosophical question is given – I would venture to say always but lack an immediate proof – when indiscriminable pairs with nevertheless distinct ontological locations may be found or imagined found, and we then must make plain in what the difference consists or could consist.
18. Would it be useful to interpret minimalism as describing only knowledge’s essence, not its entirety? Could we then conceive of justificatory support as able to be part, in a metaphysically accidental way, of knowing? Any instance of knowledge is thereby a blend of essence (true belief) and accident (e.g. justification). Any instance of knowledge is reidentified across possible worlds as a particular true belief; and in some, but not all, of those worlds it includes justification. Maybe we are confident that all instances of knowledge in our world include justification; in which case, we will reject this chapter’s preferred version of knowledge-minimalism. Even so, we will not be entitled to reject an essence-accident version (mentioned just now). I lack the space here to examine this more metaphysically complex version.

References

Armstrong, D. M. (1973). Belief, truth and knowledge. Cambridge University Press.
Atkinson, D., & Peijnenburg, J. (2017). Fading foundations: Probability and the regress problem. Springer.
Borges, R. (2020). Knowledge from knowledge. American Philosophical Quarterly, 57, 283–298.
Cassam, Q. (2009). Can the concept of knowledge be analysed? In P. Greenough & D. Pritchard (Eds.), Williamson on knowledge (pp. 12–30). Oxford University Press.
Clark, M. (1963). Knowledge and grounds: A comment on Mr. Gettier’s paper. Analysis, 24, 46–48.
Danto, A. C. (1984). Philosophy as/and/of literature. Proceedings and Addresses of the American Philosophical Association, 58, 5–20.
De Almeida, C. (2017). Knowledge, benign falsehoods, and the Gettier problem. In R. Borges, C. de Almeida, & P. D. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 292–311). Oxford University Press.
Gettier, E. L. (1963). Is justified true belief knowledge? Analysis, 23, 121–123.
Hetherington, S. (1998). Actually knowing. The Philosophical Quarterly, 48, 453–469.
Hetherington, S. (1999). Knowing failably. The Journal of Philosophy, 96, 565–587.

Hetherington, S. (2001). Good knowledge, bad knowledge: On two dogmas of epistemology. Clarendon Press.
Hetherington, S. (2011a). How to know: A practicalist conception of knowing. Wiley-Blackwell.
Hetherington, S. (2011b). The Gettier problem. In S. Bernecker & D. Pritchard (Eds.), The Routledge companion to epistemology (pp. 119–130). Routledge.
Hetherington, S. (2014). Knowledge can be lucky. In M. Steup, J. Turri, & E. Sosa (Eds.), Contemporary debates in epistemology (2nd ed., pp. 164–176). Wiley-Blackwell.
Hetherington, S. (2016a). Knowledge and the Gettier problem. Cambridge University Press.
Hetherington, S. (2016b). Understanding fallible warrant and fallible knowledge: Three proposals. Pacific Philosophical Quarterly, 97, 270–282.
Hetherington, S. (2018a). The redundancy problem: From knowledge-infallibilism to knowledge-minimalism. Synthese, 195, 4683–4702.
Hetherington, S. (2018b). Knowing as simply being correct. In B. Zhang & S. Tong (Eds.), A dialogue between law and philosophy: Proceedings of the international conference on facts and evidence (pp. 68–82). Chinese University of Political Science and Law Press.
Hetherington, S. (2019a). Conceiving of knowledge in modal terms? In S. Hetherington & M. Valaris (Eds.), Knowledge in contemporary philosophy (pp. 231–248). Bloomsbury.
Hetherington, S. (2019b). The luck/knowledge incompatibility thesis. In I. M. Church & R. J. Hartman (Eds.), The Routledge handbook of the philosophy and psychology of luck (pp. 295–304). Routledge.
Hetherington, S. (2020). Knowledge-minimalism: Reinterpreting Plato’s Meno on knowledge and true belief. In S. Hetherington & N. D. Smith (Eds.), What the ancients offer to contemporary epistemology (pp. 25–40). Routledge.
Klein, P. D. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–62). Oxford University Press.
Lehrer, K. (1965). Knowledge, truth, and evidence. Analysis, 25, 168–175.
Luzzi, F. (2010). Counter-closure. Australasian Journal of Philosophy, 88, 673–683.
Lycan, W. G. (2006). On the Gettier problem problem. In S. Hetherington (Ed.), Epistemology futures (pp. 148–168). Clarendon Press.
Pritchard, D. (2005). Epistemic luck. Clarendon Press.
Pritchard, D. (2014). Knowledge cannot be lucky. In M. Steup, J. Turri, & E. Sosa (Eds.), Contemporary debates in epistemology (2nd ed., pp. 152–164). Wiley-Blackwell.
Quine, W. V. (1975). On empirically equivalent systems of the world. Erkenntnis, 9, 313–328.
Russell, B. (1959). The problems of philosophy. Oxford University Press. (Original work published 1912.)
Sayre, K. M. (1997). Belief and knowledge: Mapping the cognitive landscape. Rowman & Littlefield.
Shope, R. K. (1983). The analysis of knowing: A decade of research. Princeton University Press.
Sosa, E. (1999). How must knowledge be modally related to what is known? Philosophical Topics, 26, 373–384.

Starmans, C., & Friedman, O. (2020). Expert or esoteric? Philosophers attribute knowledge differently than all other academics. Cognitive Science, 44, e12850.
Turri, J., & Klein, P. D. (Eds.) (2014). Ad infinitum: New essays on epistemological infinitism. Oxford University Press.
Warfield, T. A. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.
Weinberg, J. M., Nichols, S., & Stich, S. (2001). Normativity and epistemic intuitions. Philosophical Topics, 29, 429–460.
Williamson, T. (2000). Knowledge and its limits. Clarendon Press.

8

The Benign/Malignant Distinction for False Premises

Claudio de Almeida

8.1  Introduction: The Battlefield at a Glance

Prolonged inspection of scenarios in which seemingly clear instances of inferential knowledge do not survive the removal of false premises in one’s reasoning leaves no room for doubt: We’re looking at a bona fide challenge to the traditional, Aristotelian view of inferential knowledge as exclusively arising from true premises. Risto Hilpinen (1988) opened this can of worms. Peter D. Klein (1996, 2008) turned it into the major issue that it has become.1 And we are now looking at three pairwise incompatible kinds of responses to the challenge. This chapter aims at developing what I submit is the winning proposal. The most conservative of the available approaches is what we may call a ‘true-blue KFK view’ (‘KFK’ being short for ‘knowledge-from-knowledge’). KFK-ers—such as Rodrigo Borges (2017, 2020) and Clayton Littlejohn (2016)—hold that, appearances to the contrary notwithstanding, there is no inferential knowledge in episodes of reasoning containing evidentially indispensable premises that are not cases of knowledge. KFK-ers hail from the Williamsonian, ‘knowledge-first’ ranks. On the upside, they are relieved of the need to distinguish between apparent ‘KFF’ cases (that is, cases of apparent knowledge-from-falsehood) and the inferential Gettier cases that contain evidentially essential false premises. And this is important relief indeed, since the need for that distinction is the most alarming threat to the other two approaches. As Littlejohn (2016, p. 65) puts it, ‘let the critics of knowledge-first try their hand at explaining why it should be that an alleged piece of evidence is properly included in reasoning to some known logical consequences rather than others and I expect that they will start to see the virtues of the knowledge-first approach’.2 However, on the downside, KFK-ers face the immensely difficult task of making their error theory plausible.
As they explain, its plausibility is supposed to be derived from some of the central claims of the knowledge-first project in epistemology. But, to some of us (myself included), that project is ill-motivated and some of its key explanations are fallacious.

DOI: 10.4324/9781003118701-12

KFK-ers hold a prohibitively ambitious error theory. Let supporters of a knowledge-first account try their hand at persuading impartial minds that there is a clear, intuitive distinction between a belief for which one has a good excuse—specifically, an ‘excusable’ belief that cannot serve as a basis for inferential knowledge—and a belief that has knowledge-grade justification—specifically, the kind of justified belief on which inferential knowledge can be based—and I expect that they will start to see the virtues of an anti-knowledge-first approach.3 In view of how so many of us react to cases of apparent inferential knowledge based on false belief, the ‘excuse’ gambit should seem question-begging.4

Much more influential than the KFK view is the view according to which there is, indeed, inferential knowledge in cases of reasoning ostensibly derived from false beliefs, but appearances are partly deceptive, in that the falsehoods in those episodes of reasoning do not play the evidential role that one might prima facie think they do. This is the knowledge-despite-falsehood view of the matter (the ‘KDF view’, for short). It’s the majority view in the debate, with a number of variants on offer—from authors such as E. J. Coffman (2008), Peter D. Klein (2008, 2017), Neil Feit and Andrew Cullison (2011), Martin Montminy (2014), Brian Ball and Michael Blome-Tillmann (2014), Ian Schnee (2015), Fred Adams, John Barker, and Murray Clarke (2017), and (occasionally) Rodrigo Borges (2017, 2020), among others. Objections to both KFK and KDF are piling up in the literature. Be that as it may, the case against either KFK or KDF is not what I propose to pursue in this short chapter. My aim is at once both narrower and more ambitious. It is much narrower than a full review of the alternatives to my own take on the KFF problem, as any such review would require a country mile of printed paper (or too many pixels on your screen).
But it’s also more ambitious than a detailed critique of the opposition, in that I seek to develop a view according to which appearances are not deceptive: in the cases where we see reasoning that ostensibly derives knowledge from false belief, there is, indeed, inferential knowledge, and the false beliefs may, indeed, be what accounts for the production of inferential knowledge.5 That is the KFF view in the dispute. It’s a minority view, but it’s been garnering support at an impressive rate.6 Making the case for it is my main goal here. But none of this will make much sense to the uninitiated until we look at the evidence for the KFF debate. The evidence comes from scenarios that, to most, friends and foes of the KFF view alike, do seem to indicate that false belief plays an interesting role in what may look like cases of inferential knowledge—and then, as noted, war breaks out over how to explain what ‘interesting’ should mean to the epistemology of reasoning. Such scenarios have become impressively abundant in the recent literature on KFF, but I submit that the problematic features of such

scenarios are, for present purposes, efficiently represented in the following vignettes, our test cases for the remainder of this chapter.

Spiderman: The CIA Director is watching an evening newscast with her six-year-old grandson, when a ‘breaking news’ announcement comes on. Under dramatic narration, the newscast starts showing a live feed from terrorists somewhere in the Middle East. The terrorists show a blindfolded hostage with a sabre pressed against his neck and promise that the hostage will die in the next few hours. In view of the horrific scene, the boy panics and starts crying. Meanwhile, his grandmother is on a phone call. When the brief call is over, she quickly finds a way of comforting the boy while keeping to her vows of protecting state secrets. Knowing that he believes that Spiderman is a flesh-and-blood superhero, the CIA Director tells the boy that (q) Spiderman will rescue the hostage within the next ten minutes. Based on what he hears from his grandmother, the boy immediately infers that (p) the hostage will be safe in the morning. Five minutes later, an army commando breaks into the terrorists’ hideout and rescues the hostage. Doesn’t the boy have knowledge when he forms the belief that p?7

The handout: ‘Counting with some care the number of people present at my talk, I reason: “There are 53 people at my talk; therefore my 100 handout copies are sufficient”. My premise is false. There are 52 people in attendance—I double counted one person who changed seats during the count. And yet I know my conclusion’. (Warfield, 2005, pp. 407–408)

Chief of Staff: Unsure about whether the President has already left for his tour of the Middle East, General S asks the White House Chief of Staff for information on the President’s whereabouts and hears that (q) the President is in Jordan. Based on her belief that q, the general forms the belief that (p) the President is not in the Oval Office.
However, while talking to the general, the Chief of Staff was uncharacteristically ill-informed. Due to a last-minute change in his schedule, the President had, at that time, just landed in Israel, coming from Jordan a few minutes earlier than planned. Doesn’t S know that p?8

The appointment: ‘[S]uppose that my belief that I have an appointment at 3:00 p.m. on April 10th is based on my warranted but false belief that my secretary told me on April 6th that I had such an appointment. If my secretary told me that I had such an appointment, but she told me that on April 7th (not the 6th), my belief that I have an appointment at 3:00 p.m. can still be knowledge, even though the belief that supports it is false’. (Klein, 1996, p. 106)

The Benign/Malignant Distinction for False Premises 123

8.2  Easy Does Not Do It: Truth-Tracking and KFF

If you think that the easiest way with KFF cases is the truth-tracking way, you must be right. It’s ancient wisdom: Don’t try to take the bull by the horns if you can simply shoot the bull from a safe distance. And truth-tracking theorists—either sensitivity-based or safety-based ones—are our sensible snipers in epistemology. They go for the loudest bang at the lowest cost. But, sometimes, the safe distance is too distant for the job, and you find that you must grapple with the bull. The KFF problem seems to be one of those occasions where easy does not do it. Let’s look at the details.

There are numerous intriguing objections to both sensitivity-based truth-tracking and its safety-based variant. Some of the best among these objections pose counterexamples that seem effective against both versions of modal epistemology. Some counterexamples seem to show that both epistemologies are too strong, for ruling out cases of knowledge as if they were cases of ignorance—e.g., counterexamples by Jonathan Vogel (1987), Ram Neta and Guy Rohrbaugh (2004), and Juan Comesaña (2005), even if their authors don’t explicitly target both epistemologies. Some other counterexamples seem to show that both epistemologies are too weak, for allowing cases of ignorance to be regarded as cases of knowledge—e.g., the counterexample by John N. Williams and Neil Sinhababu (2015).9

But, once we put these generic objections aside, we find objections that specifically target the modalist’s way with KFF cases. One of these more specific objections has recently been put forward by Bin Zhao (2022). According to him, the safety theorist faces a dilemma: S/he will either satisfactorily handle KFF cases or satisfactorily handle knowledge of modally stable truths, but not both. But I’m about to suggest that even Zhao’s dilemma, as lethal as it seems to me, is a tad optimistic for the modal epistemologist.
As we shall see, the truth-tracking ways with KFF cases seem subject to counterexemplification. But, before we look at my counterexample, let’s see why the modalists are natural KFF-ers.

When Nozick (1981, pp. 230–233) considers how knowledge by deduction might be explained within the conceptual confines of his epistemology, he aligns himself with the Aristotelian tradition by requiring that, in every case of knowledge by deduction, the premises be cases of knowledge. But his allegiance to Aristotelianism does not seem to be a consequence of any of the tenets of his epistemology. If this is right, we should be free to bracket his Aristotelian claim, ignore his focus on deduction, and ask: How might a Nozickian account of inferential knowledge accommodate KFF cases? The answer seems to be: ‘prima facie, very well’. On a Nozickian account of inferential knowledge, a reasoner’s true conclusion is a case of knowledge only if, were the concluding belief false, the reasoner wouldn’t have believed the false proposition

on the basis of the premises actually used in the inference (Nozick, 1981, pp. 230–233). This Nozickian idea does seem to discriminate between KFF cases and inferential Gettier-type cases containing false premises.

Consider an inferential Gettier-type case of ignorance. You look at your trusty, well-maintained clock, see the hands showing two o’clock, and reason as follows: ‘The clock is in perfect working order, and it’s showing two o’clock; so, it’s two o’clock’.10 Your conclusion is true and justified, but the clock is not in perfect working order. It stopped working twelve hours ago. There is a Nozickian explanation for why your conclusion is a case of ignorance. In a nearby world, you look at the clock a minute later and falsely believe, based on the same false premise, that it’s two o’clock. By contrast, in none of our KFF cases does the agent form a false belief in the nearest world(s) where s/he forms any of those concluding beliefs based on any of the relevant false premises. Consider, for instance: We cannot plausibly conclude that the world where Klein’s—by hypothesis, reliable—secretary tells him that he has an appointment on a given day, but he has no such appointment, is as near the actual world as worlds where he does have the appointment. So, it looks like Klein’s belief that he has an appointment on that given day does track the truth in nearby worlds if based on that relevant false premise. Mutatis mutandis, the same can be said about the other KFF cases. So, it looks like a win for the Nozickian explanation.11

Now, consider the safety variant. In the spirit of Ernest Sosa’s (1999) proposal, we would say that there is inferential knowledge whenever you infer that p and there is no nearby world where your belief that p is false, but still inferred, in that non-actual world, from the premises that actually turn it into a case of knowledge.
Again, the contrast between KFF and Gettier-type ignorance seems smoothly to flow from the safety proposal. Plausibly, in the Russellian clock case, there is a nearby world where you believe that it’s two o’clock and your belief is false: for instance, the world where you look at the clock a minute after two. By contrast, there is no nearby world where Klein forms the false belief that he has an appointment on a given day based on his secretary’s testimony—given that, by hypothesis, she is a truth-teller in the modal neighborhood of the actual world.12 And the contrast, mutatis mutandis, seems to hold for the other KFF cases as well.

But a counterexample turns up. To see how it makes trouble for the modalist, recall the well-known ‘Water’ case (Neta & Rohrbaugh, 2004) and how it’s supposed to be a problem for safety-based truth-tracking. Here’s that familiar vignette.

Water: ‘I am drinking a glass of water which I have just poured from the bottle. Standing next to me is a happy person who has just won the lottery. Had this person lost the lottery, she would have maliciously polluted my water with a tasteless, odorless, colorless toxin.

But since she won the lottery, she does no such thing. Nonetheless, she almost lost the lottery. Now, I drink the pure, unadulterated water and judge, truly and knowingly, that I am drinking pure, unadulterated water… Despite the falsity of my belief in the nearby possibility, it seems that, in the actual case, I know that I am drinking pure, unadulterated water’. (Neta & Rohrbaugh, 2004, pp. 399–400)

As you may recall, the problem is that the belief that I am drinking pure, unadulterated water very clearly seems to be a case of unsafe knowledge. There is a very close world in which the believer falsely believes that she is drinking pure water, to wit, the one where the would-be criminal loses the lottery and poisons her water. Still, no such crime is committed in the actual world. And I trust that, to most of us, the relevant belief is a case of knowledge.13 Furthermore, it’s not only a problem for safety-based modal epistemologies; it’s a counterexample to sensitivity-based ones too. For, in that case, there is a very close (non-actual) world where it’s false that I am drinking pure, unadulterated water, but the believer is undeterred by the falsity of her belief. If you see that case as one of unsafe/insensitive knowledge, that very same scenario can be adapted to give us a case that frustrates both modalist explanations of the KFF phenomenon. Consider:

Fresh Water: Dr. Psycho, a brilliant chemist, is sitting alongside Professor Thirsty. Both are about to speak at a conference. Psycho is Thirsty’s deadly enemy, and he knows that his hatred for her is a well-kept secret. As in the original Water case, Dr. Psycho will poison the water offered to the speakers if and only if he loses the lottery, to kill Thirsty. (If he wins, he will spare her life, believing that she will suffer as she sees him enjoying his wealth.) His moves are well-trained. He can imperceptibly deliver the colorless, odorless poison.
As he waits to be introduced, he’s listening to the radio on his cell phone for news about the lottery draw. There is only one small water bottle on the conference table for the two speakers. The bottle is closer to Psycho than to Thirsty, but Psycho is pretending not to care for the water, as he fakes a phone call. While the speakers are introduced, Psycho learns that he has won the lottery. Meanwhile, Thirsty is thinking to herself:

(1) Psycho is not paying attention to the water bottle.
(2) If he’s not paying attention to the bottle, I can reach for the bottle and grab it before he does.
(3) Therefore, I can grab it before he does.

Now, consider why the cases go together—that is, why, if you see unsafe/insensitive knowledge in Water, we should expect you to render the same

verdict for Fresh Water; in which case, the latter should be deemed a case of unsafe/insensitive KFF. In Water, the evidence is not misleading as regards there being pure water in the bottle. In Fresh Water, likewise, although premise (1) is false, the evidence is not misleading with regard to the availability of the bottle for Thirsty to grab. Dr. Psycho does want Thirsty to believe that the bottle will be grabbed by her first, and he does make it available to her. (3) is a known truth. In both cases, however, the same evidence would be had if the relevant beliefs were false. So, both beliefs are ignorant in nearby possible worlds where the causal history of the relevant beliefs is kept fixed. And we seem to have a case of unsafe/insensitive KFF. The concluding belief is unsafe: There are many nearby worlds where Thirsty falsely believes that (3) based on reasoning from (1) and (2). In those worlds, Psycho loses the lottery and goes for the water bottle first, to poison it for Thirsty. And the concluding belief is insensitive as well: As in the original Water case, in the relevant nearby worlds, (3) is false, since Thirsty will be reaching for a bottle that has been poisoned by Psycho; but, there, Thirsty still believes that (3) based on reasoning from (1) and (2).

To my mind, for the Water case, Neta and Rohrbaugh have effectively blocked the only way out for the modalist: the claim that the relevant belief is Gettierized. But that attempted way out is a strange one to begin with. To anyone who sees the Water scenario as one where the relevant belief is a bona fide case of knowledge—which is how a good counterexample is expected to do its job—any explanation according to which that belief is a case of ignorance should seem implausible. So, what they do for the case is simply to highlight features of it that are relevantly dissimilar from the Gettier-type cases with which the Water case might be confused—but, again, confused by whom?
Only by those to whom the case does not look like a clear case of knowledge. And that clientele includes the ones who will bend over backward to rescue the modal epistemologies from the clutches of a counterexample. Here, I cannot address the theorizing of those who have rushed to defend the modalist against counterexamples such as the Water case. (But, in fact, I don’t know if there is anything that might persuade one who denies that the Water scenario portrays a clear case of knowledge. Maybe that’s where the debate hits the brick wall of irreducible disagreement.) Instead, I’ll offer an alternative to those who look for an explanation of why there seems to be inferential knowledge in the Fresh Water case.

8.3  Hard-Earned Gain: Defeasibilist KFF-ing

I sell comfort: If I’m right, we can stop torturing ourselves and accept that the reason why KFF cases look so much like cases of inferential knowledge is that they are cases of inferential knowledge. That’s the easy part. The hard part is facing the fact that, all these many centuries after

Aristotle wrote on inferential knowledge, not a single epistemology can consistently explain the fact that there is such a thing as KFF, not one. The epistemology of reasoning has uniformly been Aristotelian in that regard. But I also sell ambition: I claim that a refurbished defeasibility theory plausibly accounts for KFF.

The defeasibility theory that we’ve known—the one whose core elements can be found in works by Klein (1980, 1981), Hilpinen (1971), Marshall Swain (1996), John Pollock (1986), Paul Moser (1989), and Keith Lehrer (2000, 2017), among others—is one that cannot consistently account for KFF. Klein (2008) was the first to explain why. Here’s why he thought that the theory must be reformed, in just enough detail to make the story intelligible to those who are not familiar with the problem.14

The defeasibility theory assumes that no unjustified belief can be a case of knowledge, a claim that very few have challenged in the millennia since it received Socrates’ lukewarm endorsement. With that assumption in tow, the theory (initially) explains the kind of ignorance that we’ve come to know as ‘Gettierization’ as follows. A belief is Gettierized iff its justification does not survive the unrestricted addition of truths to that belief’s doxastic system, that is, the justification of that belief is not ‘truth-resistant’. By contrast, if a belief is a case of knowledge, no truth—no actual truth—can destroy its justification. This will easily account for why a Gettierized belief is an ignorant one. For instance, in Russell’s case, your justification for believing that it’s two o’clock does not survive the addition of the proposition that you’re looking at a broken clock to your doxastic system. There, your justification suffers from this defect: it is not truth-resistant. You cannot learn that you’re looking at a broken clock and still be justified in believing that it’s two o’clock.
But no actual learning is necessary. It is a fact that your justification for that belief would have succumbed to the addition of the proposition about the condition of your clock to your belief system, and that suffices for us to call it ‘defective’. Call that theoretical claim ‘TR’ (short for ‘truth-resistance’): knowledge-yielding justification is truth-resistant. So far, so good.

Then we learned about why Lehrer and Paxson’s (1969) case of the demented Mrs. Grabit makes trouble for the theory, and why the theory’s survival depended on Klein’s (1980, 1981) distinction between ‘genuine’ and ‘misleading’ defeaters. As it turned out, not every truth that seems to destroy a given justification actually does so. There might be another truth, a ‘defeater-eater’, or ‘restorer’, canceling the defeating effect of that ‘misleading defeater’.15 As Klein tells us, the lesson from Mrs. Grabit’s case is that, sometimes, a truth, an ‘initiating defeater’, is not the ‘effective defeater’: it simply justifies belief in a false would-be effective defeater. But the defeating effect of the falsehood is

neutralized by some other truth, a ‘restorer’. Some apparent defeat is illusory.16 So, when there is pseudo-defeat, the truth still prevails, so to speak; the justification remains truth-resistant when enough relevant truths have been gathered, so to speak, by the epistemological observer of a scene like Mrs. Grabit’s. The defeasibility theory survived that hiccup, and it thrived.17

But KFF cases pose this new problem for the defeasibilist: If the justification of your belief that p evidentially depends on the false belief f, from which you inferred that p, then there is a truth, namely ~f, the inclusion of which in your belief system would destroy (that is, would genuinely defeat) your justification for believing that p. For instance, consider the Appointment case. Klein’s belief that (p) he has an appointment on April 10th is based on his false belief that (f) his secretary told him, on April 6th, that he had that appointment. His justification for believing that p does not survive the addition of ~f to his belief system (while f is there). Why? Because, as Klein notes, the believer would then have the contradictories f and ~f in her belief system; in which case, neither of the contradictories could be regarded as a good reason for anybody to believe anything, on any tenable parsing of ‘good reason’.18 So, KFF refutes the old, pre-2008 version of the defeasibility theory.

But maybe the theory can be patched. Klein (2008) led the way here. I was dissatisfied with his patching of it (de Almeida, 2017). Here’s why. What I require from the defeasibility theory is very different from what Klein expects from it, and not just because he is a KDF-er. Post-2008, Klein (2017, pp. 49–51) remains satisfied with the fact that the defeasibility theory makes the truth-condition on knowledge redundant. His refurbished defeasibility theory remains a TR theory. Herein lies the rub.
Truth-redundancy is a consequence of the way the TR condition on knowledge-yielding justification was implemented by the leading defeasibilists. Recall that TR is the claim that knowledge-yielding justification must resist the unrestricted addition of truths to the believer’s doxastic system. TR’s unsavory consequence is that every justified false belief has a defective justification. This, I have argued (de Almeida, 2017), is a form of ‘back-door infallibilism’. To a fallibilist, there is no useful notion of defectiveness that can be used to degrade the justification of a belief just because the belief is false. A tenable notion of justification must allow for a false belief to be as well-justified as a true one. Knowledge-grade justification—that is, justification that is good enough for knowledge in every way—may be truth-conducive, but it may not be (universally) truth-entailing. That’s the fallibilist ideology in a nutshell. But Klein’s post-2008 defeasibility theory remains infallibilist, which may not be a problem for a KDF-er.19 But it remains unacceptable to one (like me) who would like to be a defeasibilist KFF-er, as we shall see.

I have put forward a conservative alternative to Klein’s TR defeasibilism (de Almeida, 2017). You can remain a consistent defeasibilist if, instead of TR, your defeasibility theory has the following consequence:

(TR*) If S has knowledge-grade justification for believing that p (at t), then only actual-world truths which are not evidence for believing anything logically incompatible with p (i.e., neither ~p, nor evidence for ~p) can be (what Pollock called) ‘rebutting defeaters’ of that justification (at t), but there are no such truths.20

Notice that, by TR*, we still get all the good consequences that we expect from TR, without the bad one: the one according to which every justified false belief has a defective—that is, a non-truth-resistant—justification. But all false beliefs might still have their justification defeated. There might be what Pollock (1986) called ‘undercutting defeaters’ of those justifications. For instance, suppose that, at five o’clock, you abductively believe that it’s five o’clock based on your false belief that the clock’s hands show five o’clock, and the clock is in perfect working order. But your justification is defeated by the (actual-world) truth that you’re under the effect of LSD as you look at the clock. And there might also be what Pollock called ‘rebutting defeaters’ of those justifications. For instance, while visiting a farm, you believe that there is a sheep on the farm based on your false belief that the animal over there is a sheep. But your justification is defeated by the true proposition that the farmer testifies that the animal over there is a sheep-looking dog (which justifies the contradictory of your premise in a way that does not allow for the restoration of that justification). So, it seems that TR* imposes no undue restriction on epistemic defeasibility.
But, by TR*, no truth will defeat one’s justification for believing a falsehood just because the would-be defeater implies the negation of that falsehood.21

A moment’s reflection should bear this last thought out. You shouldn’t lose your justification for believing a falsehood just because it’s a falsehood. Suppose that, standing where you are, you have excellent (non-overridden) evidence for believing that the building over there is a barn. Your belief is false, however. I recommend that you lose it. You ask me why. I answer that the building over there is not a barn. Would you be satisfied if that is all—absolutely all—that you get from me as a reason against your belief? Of course not! You would also have to believe that I’m some kind of authority in the matter, or else you would ask me for evidence that your belief is false. And you wouldn’t accept this answer: ‘but it is false’. A belief’s falsehood is not evidence that it is false—if such a platitude is any help. But the useful idea, the one worth preserving, in a TR condition on knowledge does not conflict with that platitude. It merely requires that your justification survive the addition of truths to your belief system. It doesn’t tell you which truths

can destroy a justification. We buy into it expecting that those relevant truths will not conflict with this other irresistible idea: sometimes, your justified belief is ignorant just because it’s false.

Now, suppose that you are a (consistent) TR* defeasibilist who keeps every other conceptual resource from the original (particularly, Klein’s) defeasibility theory. I claim that you will, then, be the most conservative fallibilist who can exploit those conceptual resources to make a distinction between the benign, knowledge-yielding falsehoods and the malignant, knowledge-suppressing ones that we expect to find in some of the inferential Gettier-type cases. Let’s see how.

Recall that, according to the thought experiment proposed by the defeasibility theory, your justification for the belief that p is genuinely defeated only if, when an initiating defeater of that justification—that is, a truth that either ‘rebuts’ or ‘undercuts’ that justification, either by itself or by justifying belief in another proposition that does the defeating—is added to your stock of beliefs, the initiating defeater does not turn your stock of true beliefs, minus the belief whose justification is defeated, into an incoherent set.22 In other words, in genuine defeat, an initiating defeater must not be incoherent with any other (non-p) truth in your belief system.23 It must work in cahoots, so to speak, with every other relevant truth in your belief system. But that initiating defeater will promote incoherence in your belief system if it justifies belief in a proposition that is incompatible with your other true beliefs, or, in the case of undercutting defeat, if it justifies a proposition that otherwise simply lowers the degree of coherence in that set of true beliefs.24 The useful metaphor here is that the defeater must ‘hook up’ with whichever other relevant true beliefs you may have.25

With that in mind, we can give traction to the intuitive perception that a benign falsehood is one that is ‘just as good’ as a truth in a given episode of reasoning, as follows: A false belief is a benign falsehood (at a given moment t) iff (at t) it is a false premise the justifying effect of which cannot genuinely be defeated; and it is a malignant falsehood (at t) iff (at t) it is a false premise that is not benign—where genuine defeat is constrained by our TR* condition.

Let’s try that thought out on our test cases. When we look at the Spiderman case, we naturally assume that (a) the testifier (the CIA Director) is knowledgeable about secret military missions, that (b) she’s inviting her grandson to infer that all will eventually be well, that (c) she’s trying to keep the boy’s trust (which is incompatible with the boy’s learning that the hostage has died), that (d) she knows the boy believes that Spiderman is a flesh-and-blood superhero who’s up to the rescue job, but (e) she cannot tell the boy how the rescue will take place. Given those assumptions, any good reason to believe that the boy’s justification for his concluding belief is defeated by the contradictory of the benign falsehood in the case, the truth that Spiderman will not

rescue the hostage—such as that there is no Spiderman, or that the CIA Director is lying about the rescue—must use a truth that seems to defeat by justifying belief in a false would-be effective defeater, which shows that the initiating defeater, that chosen truth, is incoherent with one of (a)-(e). Not only is it incoherent with our beliefs about salient features of the scene, but it’s also incoherent with relevant truths in the boy’s (presumably, less inclusive) belief system, such as I can trust granny, or Spiderman is powerful enough to rescue the hostage.26 Consider: If, for instance, the initiating defeater is the truth that there is no Spiderman, how would that defeat the boy’s justification for the concluding belief that the hostage will be safe in the morning? Most obviously, by justifying—that is, by confirming to whichever degree—the falsehood that there will be no rescue, which would be a reason to believe the contradictory of the boy’s conclusion. It seems that the same can be said about the truth that the CIA Director is lying about the rescue, or Granny is lying, about how it justifies a false belief that I cannot trust granny, or that Spiderman is not powerful enough.27

Mutatis mutandis, we get the same result for the other three test cases: there is pseudo-defeat in all of them. Those are simpler than the Spiderman case, in that our assumptions about the scene can be thought to coincide with beliefs held by the agent whose justification is in question. Here are the key elements for each case.

For the Handout, assume that (f) only one copy of the handout will be given to each attendee, that (g), once distributed, copies don’t mysteriously disappear, that (h) nobody else will be allowed in the room after the head count is finished, and that (i) Warfield knows he is (fallibly) reliable at counting heads.
With those assumptions in mind, ask yourself: How would the contradictory of the relevant benign falsehood, there are 53 people at my talk, defeat Warfield’s justification for believing that his 100 copies of the handout are sufficient for his audience? The answer seems to be: Only if, once it is added to Warfield’s evidence for the concluding belief, the negation of that benign falsehood justifies the false belief that my 100 copies may not suffice for my audience. But it should be clear that this would-be effective defeater is false. It is incompatible with assumptions (f)–(i). (By no reasonable standard can Warfield be reliable at counting heads and still miss 48 in a crowd of 101!)

For the Chief of Staff, assume that (j) he is not trying to mislead the general, and that (k) he can’t be ill-informed about the broad geographical area where the President is at any given time. How would the negation of the relevant benign falsehood, the President is in Jordan, defeat the general’s justification for her concluding belief, the President is not in the Oval Office? Apparently, only if it were part of the evidence justifying belief in the false would-be effective defeater according to which the President may be in the Oval Office, which, according to assumptions (j)-(k), is a nomic impossibility.

For the Appointment, assume that (l) the secretary is reliable when speaking to Klein about appointments and she told him about one on April 10th, that (m) Klein’s memory is not very reliable about the exact dates when he is told about his appointments by his secretary, and that (n) he reliably remembers the appointments made for him by his secretary. Now, notice that, in order to defeat the justification for Klein’s conclusion, the negation of the relevant benign falsehood, namely, the secretary did not tell me about an April 10th appointment on April 6th, must justify belief in a false effective defeater, such as my secretary does not believe that I have an April 10th appointment, or the secretary is not trustworthy when speaking about my appointments, both of which are incompatible with assumptions (l)-(n).

Now, recall that, in every case of pseudo-defeat, the misleading defeater is ‘chased’ by a defeater-eater. There might be numerous such true propositions, but, invariably, at least one.28 And recall that a truth is a defeater-eater only if (1) it restores the justifying effect of the original justifiers, and (2) it coheres with the believer’s set of relevant true beliefs (which includes those original justifiers).29 With that in mind, it’s not at all hard to find defeater-eaters for each KFF case.

In the Spiderman case, suppose that the misleading defeater is the truth that Granny is lying. The proposition that, although Spiderman is not available for the rescue, somebody who’s just as powerful for the job will do it clearly restores the original justification. Some version of this defeater-eater will also obviously restore the justification threatened by the truth that there is no Spiderman. It’s still by the false premise provided by Granny’s testimony that the boy’s inferential knowledge is acquired. For the Handout case, try the truth that the speaker’s count is wrong by only one.
For the Chief of Staff case, try the truth that the President is at one of his Middle East stops. And, for the Appointment, use the truth that, although Klein misremembers the date when he was told about the April 10th appointment, the secretary did tell him about that appointment. But you may easily find alternative defeater-eaters by the thought experiment that looks for them. And notice that, in each case, the believer would not be a knower if, ceteris paribus, the benign falsehood were removed from her belief system.

Given the salient features of each of our KFF test cases, a fairly simple thought experiment should reveal that, in each case, the contradictories of the relevant false premises hook up with misleading defeaters: they help justify belief in a false (would-be) effective defeater that is incompatible with—or lowers the coherence of the set containing—some of the assumptions (a)-(n). So, in KFF cases, the relevant false premises prove benign. In Gettier-type cases containing evidentially essential falsehoods, by contrast, the contradictories of the false premises genuinely defeat the believer’s justification for the concluding belief, as explained in the defeasibility literature. So, in those cases, the relevant false premises are as malignant as we have always thought that they were.


8.4  Concluding Remarks

How can we plausibly become KFF-ers? I see only two options as I look at the epistemological landscape. The simplest route to KFF-ing, hands-down, is a truth-tracking account. As expected, a truth-tracking view will be either based on a sensitivity condition or based on a safety condition. I trust that, to most of us, both succumb to the Water counterexample even before we ponder the KFF cases. But an inferential analogue of that counterexample, our Fresh Water case, seems to make those modal epistemologies specifically useless to explain the KFF phenomenon.

We then turn to the (less popular) arch-rival of the modal epistemologies: defeasibilism. Is there a tenable defeasibilist explanation of the KFF phenomenon? I believe that this chapter offers you such an explanation. Elsewhere (de Almeida, 2017), I have told you why I don’t see Klein’s refurbished, 2008, defeasibility theory as an appealing option. That 2008 version of his theory is much too conservative to be of use for our purposes.30 And here, I, once again, have claimed that Klein’s infallibilist TR theory is not all that there is to defeasibilism on the KFF front. A defeasibilist TR* theory keeps all the main conceptual resources of the old defeasibility theory, and it allows us to apply those resources and become KFF-ers.31

Notes

1. Ted Warfield (2005) published the first paper exclusively dedicated to the issue, but he was then making a move in an ongoing debate initiated by Klein and advertised in Jonathan Kvanvig's blog Certain Doubts. Some of the history of the topic can be found in de Almeida and Fett (2019).
2. Littlejohn's reference to logical implication is superfluous. The problem arises for probabilistic support as well, as noted in the literature on the issue.
3. In Williamsonian epistemology, only true beliefs may be endowed with knowledge-grade justification—that is to say, the Williamsonian knowledge-firster is an infallibilist (Williamson, 2011, p. 215): 'a "justified" false belief is a belief for which the agent has a good excuse…[but] only knowledge constitutes full justification'. This 'excuse' gambit is exemplified in Borges (2017, 2020). Littlejohn (2016) defends Williamsonian infallibilism by a different, but no less problematic, route. A discussion of the knowledge-first project is not our object in this chapter.
4. I think Littlejohn's (2016) route to infallibilism is no less question-begging, but I don't have the space to discuss it.
5. Here, I do not discuss the cases in which false belief does not seem to be evidentially indispensable for the production of inferential knowledge. Such cases are identified and satisfactorily discussed by Klein (2008). My focus is exclusively on the cases in which false beliefs clearly seem both causally and evidentially indispensable to a given knowledge-yielding episode of reasoning. But no account of causal basing need be provided in what follows. As Schnee (2015, p. 19) aptly puts it, our KFF problem is 'downstream from the basing debate'.

134  Claudio de Almeida

6. Some have hesitatingly supported the KFF view—e.g., Warfield (2005), John Hawthorne and Dani Rabinowitz (2017); others have been more sanguine—e.g., Branden Fitelson (2010, 2017), Christopher Buford and Christopher Cloos (2018), Federico Luzzi (2019), Roy Sorensen (this volume). A rudimentary version of my own KFF proposal became an unpublished talk at a 2003 Chicago meeting of the Central States Philosophical Association. That version was first aired at the 2001 edition of the International Principia Conference, at Federal University of Santa Catarina (UFSC), in Florianopolis, Brazil. The present version builds on my 2017 account of the issue.
7. This is a variation on Klein's (2008) Santa Claus case—originally aired by Klein in an unpublished 1999 talk at Pontifical Catholic University of Rio Grande do Sul (PUCRS), in Porto Alegre, Brazil—that may strike some as a more compelling case of inferential knowledge than the Santa case. But I don't make any important assumptions in that regard. Under different wording, the Spiderman case was put forward by me in a 2004 thread on KFF in the now-defunct Certain Doubts blog (edited by Jonathan Kvanvig).
8. This is a lightly edited version of a case originally put forward by me in the 2003 unpublished talk referenced in note 6, above. Notice how the essential feature of the case—false information provided by a generally reliable testifier as a premise in knowledge-yielding inference—reappears in the literature under the guise of false information provided by a fancy watch, as in Warfield (2005). And that feature of both cases is arguably indistinguishable from what gives Hilpinen's (1988) groundbreaking thermometer case its bite.
9. But see Williams (2016, 2017) for the most forceful version of their counterexample, with replies to critics.
10. The 1948 clock case is only Bertrand Russell's most memorable Gettier-type case.
But I argue for evidence of two 1912 Russellian Gettier-type cases (de Almeida, 2018).
11. Notice that Nozick carefully individuates the 'method' used in a case of knowledge by deduction as the one by which no false conclusion would be derived in nearby worlds from those specific premises that actually produce deductive knowledge. Had he identified the method as, say, 'valid deduction from cases of knowledge', his views wouldn't have allowed for such an easy application to KFF cases. But, in such a case, neither would he be a knowledge-closure denier.
12. For the safety theorist who is a closure defender, the Nozickian discomfort regarding identity criteria for 'methods', or 'grounds', or 'bases', is not readily apparent. Such a theorist might be tempted to individuate the relevant method as 'valid deduction, or strong induction, from premises which are cases of knowledge'. But, as Zhao (2022) argues, there will be no accounting for KFF cases on the basis of 'globalized safety'. So, for the safety theorist, too, the 'method', or 'ground', of belief-formation must be narrowly individuated. Also relevant here: I'd be remiss if I didn't note that I see no comfort for any epistemology under epistemic closure principles (de Almeida 2019, 2021).
13. As we know, some safety loyalists put themselves through hoops to disqualify the objection. Addressing Vogel's (1987) 'ice cubes' case—a case used by Vogel as a counterexample to the sensitivity condition, but one that can be used against a safety condition as well—Steven Luper (2003, p. 200) writes: 'In my view, if there is someone standing by seriously considering putting the tray [with ice cubes left under scorching sunlight] back into the freezer, or someone who just might do so, then I do not know the ice has melted [while, away from the ice cubes, I believe they have melted]'. Duncan Pritchard (2014) has his own version of the very same idea turned against the Water counterexample. I cannot discuss it here any further. But think for a moment: How much knowledge can survive a safety-loyalist's condition?
14. For more detail, see Klein (1981), de Almeida and Fett (2016), and de Almeida (2017).
15. The term 'defeater-eater' was suggested to me by Klein in conversation.
16. Surprisingly, in a book published several years after Klein's (1981), Pollock (1986) failed to acknowledge Klein's distinction between genuine and misleading defeaters while acknowledging the need for such a distinction.
17. But did the theory really thrive? Not much as a matter of popularity. In the very same year when Klein published his groundbreaking book, Robert Nozick (1981) took the epistemological community by storm, cavalierly claiming that the Gettier literature—by which he clearly meant the defeasibility literature—was 'messy' (Nozick, 1981, p. 169), that he could do better, and that was all that so many, in the epistemological community, needed to skip studying the defeasibility theory. Since then, Williamson's equally cavalier attitude, coupled with his massive influence on the community, has only deepened the problem. Here's a representative excerpt from the Williamsonian corpus (Williamson, 2011, p. 211): '[A]ttempts to state strictly necessary and sufficient conditions in non-circular terms typically lead to a regress of ever more complex analyses and ever more complex counter-examples'.
18. In Pollockian terminology (Pollock 1986), any reason that you may have to believe that f is a 'rebutting defeater' of any (prima facie) justification that you may have to believe that ~f (and, conversely, a reason to believe the former is a rebutting defeater of a justification for believing the latter). So, neither of the contradictories is ultima facie justified.
So, on a hereditary conception of justification-transmission, neither of the contradictories can justify anything (while both are in one's doxastic system).
19. Although I have claimed (de Almeida, 2017) that Klein's brand of KDF view suffers from insurmountable difficulties, those specific difficulties are independent of his infallibilism.
20. J. R. Fett (2019) objected to my 2017 version of the TR* claim on the grounds that it failed to exclude contraries of f as defeaters of the falsehood's justification. He was right. This version of that original TR* claim takes care of the problem. Furthermore, notice that this is compatible with thinking, as I do, that a complete epistemology will, elsewhere, analyze every important normative concept in non-normative terms.
21. A corollary: On this proposal, while the counterfactual inclusion of the contradictory of your false justifier j in your belief system destroys any justification provided by j, that contradictory is not available to destroy any justification you may have for believing that j.
22. Mrs. Grabit's case gives dramatic expression to the fact that there may be incoherence in a set of beliefs all members of which are true. That's how the initiating defeater in the case works. But then, with Klein, we attend to the fact that the defeating effect of that initiating defeater—the fact that it promotes incoherence—involves a false effective defeater, which finally gives us the impression of pseudo-defeat. An important aspect of pseudo-defeat is discussed in de Almeida and Fett (2016).
23. This simplifies matters a little. See the following footnote.
24. Undercutters and rebutters work in different ways. Recall that incoherence is not the complement of coherence. So, while a rebutter defeats a justification by (counterfactually) promoting incoherence in one's set of justifiers, an undercutter defeats a justification simply by lowering the degree of coherence in one's set of justifiers.

25. Jaakko Hintikka (1962) expressed an incontestable truth when he noted that we expect knowledge to be compatible with more knowledge. The genuine defeat of your justification for believing that p must not deprive you of knowledge that you already have.
26. Without discussing the options for a semantic account of truth in fiction, I simply assume that, to most of us, the claim about Spiderman's powers is unproblematically true.
27. Notice that the boy's knowledge that he can trust granny, or that Spiderman is powerful enough, will be lost if he simply finds reason to doubt such truths. That is to say, epistemic defeat simply requires evidence that promotes the loss of a belief.
28. Discussion of these issues with Eduardo Alves has shown me that I should be explicit about the defeater-eaters in our KFF cases. See Alves (2021) for replies to objections to my 2017 KFF proposal.
29. As argued in de Almeida and Fett (2016), John Turri's (2012) objection to the defeasibility theory ignores condition (1).
30. And that's how his 2017 revision of the 2008 proposal still is: way too conservative for comfort. But I cannot go into the details here.
31. Discussion of these issues with Eduardo Alves, Rodrigo Borges, J. R. Fett, and Peter D. Klein, over the course of many years, has been stimulating and informative. I'm indebted to them for it.

References

Adams, F., Barker, J., & Clarke, M. (2017). Knowledge as fact-tracking true belief. Manuscrito, 40(4), 1–30.
Alves, E. (2021). Restaurando a explicação do anulabilismo falibilista sobre o conhecimento a partir de crença falsa. Intuitio, 14(2), 1–14.
Ball, B., & Blome-Tillmann, M. (2014). Counter-closure and knowledge despite falsehood. Philosophical Quarterly, 64(257), 552–568.
Borges, R. (2017). Inferential knowledge and the Gettier conjecture. In R. Borges, P. D. Klein, & C. de Almeida (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 273–291). Oxford University Press.
Borges, R. (2020). Knowledge from knowledge. American Philosophical Quarterly, 57(3), 283–298.
Buford, C., & Cloos, C. (2018). A dilemma for the knowledge despite falsehood strategy. Episteme, 15(2), 166–182.
Coffman, E. J. (2008). Warrant without truth? Synthese, 162(2), 173–194.
Comesaña, J. (2005). Unsafe knowledge. Synthese, 146, 395–404.
de Almeida, C. (2017). Knowledge, benign falsehoods, and the Gettier problem. In R. Borges, P. D. Klein, & C. de Almeida (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 292–311). Oxford University Press.
de Almeida, C. (2018). On our epistemological debt to Moore and Russell. In S. Hetherington & M. Valaris (Eds.), Knowledge in contemporary philosophy (pp. 27–49). Bloomsbury.
de Almeida, C. (2019). Epistemic closure and post-Gettier epistemology of reasoning. In S. Hetherington (Ed.), The Gettier problem (pp. 27–47). Cambridge University Press.
de Almeida, C. (2021). Epistemic closure and epistemological optimism. Philosophia, 49, 113–131. Published online: May 28, 2020.

de Almeida, C., & Fett, J. R. (2016). Defeasibility and Gettierization: A reminder. Australasian Journal of Philosophy, 94(1), 152–169. Published online: February 10, 2015.
de Almeida, C., & Fett, J. R. (2019). Review of Federico Luzzi, Knowledge from non-knowledge. Notre Dame Philosophical Reviews. https://ndpr.nd.edu/reviews/knowledge-from-non-knowledge-inference-testimony-and-memory/
Feit, N., & Cullison, A. (2011). When does falsehood preclude knowledge? Pacific Philosophical Quarterly, 92(3), 283–304.
Fett, J. R. (2019). O que é o conhecimento? EdiPUCRS.
Fitelson, B. (2010). Strengthening the case for knowledge from falsehood. Analysis, 70(4), 666–669.
Fitelson, B. (2017). Closure, counter-closure, and inferential knowledge. In R. Borges, P. D. Klein, & C. de Almeida (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 312–324). Oxford University Press.
Hawthorne, J., & Rabinowitz, D. (2017). Knowledge and false belief. In R. Borges, P. D. Klein, & C. de Almeida (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 325–344). Oxford University Press.
Hilpinen, R. (1971). Knowledge and justification. Ajatus, 33, 7–39.
Hilpinen, R. (1988). Knowledge and conditionals. In J. Tomberlin (Ed.), Philosophical perspectives 2: Epistemology (pp. 157–182). Ridgeview.
Hintikka, J. (1962). Knowledge and belief (new edition, 2005, V. F. Hendricks & J. Symons, Eds.). King's College Publications.
Klein, P. D. (1980). Misleading evidence and the restoration of justification. Philosophical Studies, 37(1), 81–89.
Klein, P. D. (1981). Certainty: A refutation of scepticism. University of Minnesota Press.
Klein, P. D. (1996). Warrant, proper function, reliabilism, and defeasibility. In J. L. Kvanvig (Ed.), Warrant in contemporary epistemology (pp. 97–130). Rowman & Littlefield.
Klein, P. D. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–61). Oxford University Press.
Klein, P. D. (2017). The nature of knowledge. In R. Borges, P. D. Klein, & C. de Almeida (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 35–56). Oxford University Press.
Lehrer, K. (2000). Theory of knowledge (2nd ed.). Westview Press.
Lehrer, K. (2017). Defeasible reasoning and representation. In R. Borges, P. D. Klein, & C. de Almeida (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 167–178). Oxford University Press.
Lehrer, K., & Paxson, T., Jr. (1969). Knowledge: Undefeated justified true belief. Journal of Philosophy, 66(8), 225–237.
Littlejohn, C. (2016). Learning from learning from our mistakes. In P. Schmechtig & M. Grajner (Eds.), Epistemic reasons, norms and goals (pp. 51–70). De Gruyter.
Luper, S. (2003). Indiscernability skepticism. In S. Luper (Ed.), The skeptics (pp. 183–202). Ashgate.
Luzzi, F. (2019). Knowledge from non-knowledge: Inference, testimony and memory. Cambridge University Press.
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44(3–4), 463–475.

Moser, P. K. (1989). Knowledge and evidence. Cambridge University Press.
Neta, R., & Rohrbaugh, G. (2004). Luminosity and the safety of knowledge. Pacific Philosophical Quarterly, 85, 396–406.
Nozick, R. (1981). Philosophical explanations. Harvard University Press.
Pollock, J. L. (1986). Contemporary theories of knowledge. Hutchinson Education.
Pritchard, D. (2014). Knowledge cannot be lucky. In M. Steup, J. Turri, & E. Sosa (Eds.), Contemporary debates in epistemology (2nd ed., pp. 152–164). Wiley-Blackwell.
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12(1), 53–74.
Sorensen, R. (this volume). Mini-Meno: A rationalist's history of the rise and fall of counter-closure.
Sosa, E. (1999). How to defeat opposition to Moore. Philosophical Perspectives, 13, 141–154.
Swain, M. (1996). Warrant versus indefeasible justification. In J. L. Kvanvig (Ed.), Warrant in contemporary epistemology (pp. 131–146). Rowman & Littlefield.
Turri, J. (2012). In Gettier's wake. In S. Hetherington (Ed.), Epistemology: The key thinkers. Continuum.
Vogel, J. (1987). Tracking, closure, and inductive knowledge. In S. Luper-Foy (Ed.), The possibility of knowledge (pp. 197–215). Rowman & Littlefield.
Warfield, T. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.
Williams, J. N. (2016). There's nothing to beat a backward clock. Logos & Episteme, 7(3), 363–378.
Williams, J. N. (2017). Still stuck on the backward clock. Logos & Episteme, 8(2), 243–269.
Williams, J. N., & Sinhababu, N. (2015). The backward clock, truth-tracking, and safety. Journal of Philosophy, 112(1), 46–55.
Williamson, T. (2011). Knowledge first epistemology. In S. Bernecker & D. Pritchard (Eds.), The Routledge companion to epistemology (pp. 208–218). Routledge.
Zhao, B. (2022). Knowledge from falsehood, ignorance of necessary truths, and safety. Philosophia, 50, 833–845.

9  Knowledge, Falsehood, and Defeat

Sven Bernecker

9.1  The What and How of Knowledge from Falsehood

The received wisdom in epistemology has it that for a person to come to know something on the basis of deductive reasoning they must know each of the premises from which they essentially infer the conclusion. This idea is called counter-closure (Luzzi, 2010).1 Edmund Gettier (1963) showed that justification when added to true belief is not sufficient for knowledge. If justification, truth, and belief are at least necessary for knowledge, counter-closure entails that a person deductively knows some conclusion only if each premise they essentially infer the conclusion from is true, believed, and justified.2

This chapter focuses on challenges to the truth condition of counter-closure. A growing number of epistemologists maintain that the conclusion of a deductive inference can be known even if some of the premises from which the conclusion is derived are false and thus not knowable. Among those defending the possibility of knowledge from falsehood are Arnold (2013), Balcerak Jackson and Balcerak Jackson (2013), Bernecker and Grundmann (2019), Coffman (2008), de Almeida (2017), Fitelson (2010, 2017), Goldberg (2001), Hawthorne (2004, p. 57), Hilpinen (1988, pp. 163–164), Klein (2008), Luzzi (2010, 2014, 2019), Murphy (2013), Sorensen (2016), and Turri (2011, p. 8; 2012, p. 217). The debate between friends and foes of knowledge from falsehood tends to focus on actual or hypothetical situations. I will focus on Warfield's (2005, pp. 407–408) handout case.3

Handout: Counting with some care the number of people present at his talk, Ted reasons: 'There are 53 people at my talk; therefore, my 100 handout copies are sufficient'. The premise is false. There are 52 people in attendance. Ted double counted one person who changed seats during the count. He knows that 100 handout copies are sufficient even though he inferred the belief from the false belief that there are 53 people at the talk.

DOI: 10.4324/9781003118701-13

Cases like Handout are not uncommon. We frequently rely on poor counting, rough calculation, and imprecise remembering in deriving precise conclusions we claim to know. If we could not know things on the basis of inexact methods, then knowledge would become an exceedingly rare commodity. This is why Warfield holds that falsehoods can 'play a central epistemizing role in inference.'4 Moreover, behavioral experiments support the idea that the ordinary knowledge concept includes in its extension cases of knowledge essentially inferred from false premises (Turri, 2019).

The method of cases, by itself, is unable either to establish or to rebut the possibility of knowledge from falsehood. The reason is twofold. First, it is possible to interpret cases such as Handout in such a way that they are compatible with counter-closure. Second, for these cases to count toward the possibility of knowledge from falsehood we would have to be able to distinguish them from falsehood-involving Gettier cases. Let me explain.

(1) One can grant that Ted in Handout knows that 100 handout copies are sufficient but deny that this knowledge is essentially derived from a false lemma. The idea is that Ted forms not just one but two beliefs when he counts the people present at his talk. He forms the false and explicit belief that there are (exactly) 53 people in the audience and he forms the true and implicit belief that there are approximately 53 people in the audience. What is doing the epistemic work is the less precise true belief as opposed to the more precise false belief. Ted uses the ≈53-people belief in combination with his background knowledge that 53 < 100 to conclude that 100 handouts are enough.5 In this reading of the case, which Luzzi (2019, pp. 10–12) dubs the proxy-premise strategy, Ted's false belief that there are 53 people in the audience is inferentially inert.
The false belief may figure in the causal production of the conclusion-belief, and it may contribute to the justificatory status of the conclusion-belief, but its contribution does not play an essential role. Given the proxy-premise strategy, Handout is a case of knowledge despite falsehood (KDF for short), as opposed to a case of knowledge from falsehood (KFF for short).6 The possibility of KDF is not contested. Suppose I base my belief that p on a dozen good reasons, but one of them is false. If the 11 good reasons are strong enough to justify p, then the one bad reason is dispensable. I can know that p even though one of my reasons is false.7 If it turns out that alleged cases of KFF are really cases of KDF, the excitement around these cases is inappropriate.

(2) Alleged cases of KFF must be distinguished not only from cases of KDF but also from certain cases of non-knowledge. There are different kinds of knowledge-undermining epistemic luck. Some of them manage without false lemmas, while others, including Gettier's original examples, rely on the subject reasoning from a false lemma. For cases such as Handout to establish the possibility of KFF, we need to be able to tell when the essential reliance on a falsehood leads to knowledge and when it does not. What distinguishes KFF-cases (assuming there are any) from falsehood-involving Gettier cases (FIG-cases for short)? In other words, what distinguishes knowledge-yielding from knowledge-suppressing falsehoods in reasoning? I dub this the false lemma problem.8

Three explanations of the difference between knowledge-yielding and knowledge-suppressing falsehoods can be distinguished in the literature. According to closeness-to-the-truth accounts, reasoning from a false premise yields knowledge if the false premise is either semantically or epistemically close to the truth (Baumann, 2020; Hilpinen, 1988). Reliability accounts have it that reasoning from a false premise generates knowledge if the inferential path from the false premise to the true conclusion is modally stable – if it could not have easily given rise to a false conclusion (Grundmann, 2020, p. 5179; Luzzi, 2019, pp. 30, 70–71; Warfield, 2005, p. 414). Defeater-based accounts maintain that reasoning from a false premise yields knowledge if either the negation of the false premise is not a defeater for the true conclusion, or the true conclusion is indefeasibly justified by a truth entailed by the false premise.

This chapter focuses on defeater-based accounts of KFF. Section 9.2 introduces the notion of a defeater. Section 9.3 discusses Feit and Cullison's defeater account of KFF. Section 9.4 is devoted to de Almeida's defeasibility account, and Section 9.5 explores Klein's defeasibility account. Section 9.6 contains some concluding remarks. The upshot will be that none of the defeater-based accounts succeeds in explaining KFF and distinguishing it from both KDF and FIG.

9.2  Defeater and Defeasibility

In general terms, a defeater is a proposition such that if it is added to the subject's evidence for the target proposition, the total evidence no longer amounts to knowledge-grade justification.9 A justificational defeater is a true or false proposition for which one has evidence. A factual defeater is a true proposition for which one has no evidence (Steup, 1996, p. 14). Within the group of justificational defeaters, a distinction is made between doxastic and normative defeaters and between rebutting and undercutting defeaters. A rebutting defeater is a proposition that indicates that the target belief that p is false. An undercutting defeater is a proposition that indicates that the target belief is unreliably formed. A doxastic defeater is a proposition that one believes to be true and that indicates that one's belief that p is false or unreliably formed. A normative defeater is a proposition that one would believe to be true, if one performed one's epistemic duties, and that indicates that one's belief that p is false or unreliably formed.

Many accounts of justification and knowledge make use of the notion of a defeater in one way or another. For instance, it is common to give a positive characterization of justification and knowledge in terms of reliability and to then add a no-defeater condition, according to which the subject may not believe that the target belief is unreliably formed.10 What sets the defeasibility theory of justification apart from the no-defeater condition is that the notion of a defeater is used not as a negative but as a positive condition of justification. A person's justification for p is knowledge-grade if it is not a coincidence that their justification results in them arriving at the truth. Reliabilists conceive of knowledge-undermining coincidence in terms of lack of modal stability. Defeasibility theorists, on the other hand, claim that a belief's truth is coincidental if the belief's justification is defeasible. A belief's justification is defeasible if there is some defeater d such that the conjunction of d with the subject's justification (total evidence) for p fails to justify p (Klein, 2010, p. 157).

There are three defeater-based accounts of KFF. Feit and Cullison (2011) maintain that a false premise is knowledge-yielding if its negation does not function as a defeater. They employ the notion of a defeater to explain the knowledge-yielding power of falsehoods but they do not classify their position as a defeasibility theory.11 Unlike Feit and Cullison, Klein (2008) and de Almeida (2017) adopt the framework of the defeasibility theory to explain KFF. de Almeida agrees with Feit and Cullison, albeit for different reasons, that the hallmark of KFF is that the negation of the falsehood does not function as a defeater.
On Klein’s account of a ‘useful false belief,’ a false premise is knowledge-yielding if there is a true proposition that is propositionally justified by whatever justifies the false premise, which in turn justifies the known conclusion. Klein’s defeater account of KFF is the most sophisticated of the three and it was also the first such account.12

9.3  Feit and Cullison's DJDD

Feit and Cullison's starting point is the no-false-grounds view – an early proposal by Clark (1963) to rule out Gettier cases. The gist of this view is that a belief can justify another only if it is true. To see how this view handles FIG-cases, consider Lehrer's (1965) well-known Nogot/Havit case.13

Nogot/Havit: Smith has reasons to believe (q) that Nogot owns a Ford, and (r) that Nogot works in his office. From (q) and (r) Smith infers (p) that someone in the office owns a Ford. As it turns out, (q) is false while (r) and (p) are true. Unsuspected by Smith, there is another person in the office, Havit, who owns a Ford.

On the no-false-grounds view, Smith is not justified in believing that someone in the office owns a Ford because the belief is grounded on the false belief that Nogot owns a Ford. Even though this view can handle classical FIG-cases such as Nogot/Havit, it was soon shown to be both too strong and too weak and thus gave way to the no-essential-false-grounds view.14 As the name suggests, the no-essential-false-grounds view states that knowledge can be based on false grounds, so long as no essential (or indispensable) element in the justification is false. Feit and Cullison's reason for rejecting the no-essential-false-grounds view is that it is incapable of acknowledging the possibility of KFF. Their goal is to devise a theory of knowledge that excludes FIG-cases such as Nogot/Havit but includes KFF-cases such as Handout. They call their theory the doesn't-justify-the-denial-of-a-defeater view (DJDD, for short).

DJDD: S knows p if and only if (i) S believes p, (ii) p is true, (iii) S is justified in believing p, and (iv) no ground that is essential to S's justification for p justifies S in believing the negation of a defeater.15

According to DJDD, what sets KFF apart from FIG is that the denial of the inferentially essential falsehood does not function as a defeater. The negation of the falsehood does not defeat the subject's justification for their belief in the target proposition. How does DJDD handle a standard FIG-case such as Nogot/Havit? Smith does not know that someone in the office owns a Ford (p) because the essential ground of this belief – the falsehood that Nogot owns a Ford – is the negation of a defeater for Smith's belief that p. Feit and Cullison explain:

If the true proposition that Nogot does not own a Ford were added to his evidence, Smith would not be justified in believing that someone in his office owns a Ford. So, the falsehood justified by an essential ground is here the negation of a defeater.
(Feit & Cullison, 2011, p. 297)

And given that a justified true belief qualifies as knowledge only if no essential ground is the negation of a defeater, the DJDD view correctly implies that Smith does not know that someone in the office owns a Ford.

Next consider a KFF-case such as Handout. Here the falsehood is the belief that there are 53 people in the audience. What would happen if we informed Ted that the number of people in the audience is not 53? According to Feit and Cullison, Ted would still be justified in believing that the number of people in the audience is not far from 53 and thus he would still be justified in believing (and know) that the 100 handout copies are sufficient. This shows, they claim, that 'the falsehood justified by [Ted's] essential grounds (that is, the proposition that there are 53 people at his talk) is not the negation of a defeater' (Feit & Cullison, 2011, p. 296). Ted knows that the 100 handout copies are sufficient because the negation of the false premise does not function as a defeater. The handout case is an instance of knowledge because the justification for the target belief is sufficiently robust that it survives the subject's learning that the ground is false.

The problem with DJDD is twofold. First, the view does not account for non-inferential cases of knowledge (from falsehood). Second, it conflates KFF either with KDF or with FIG.

(1) DJDD is in conflict with the existence of cases of 'animal knowledge' – cases where knowledge that p does not require the existence of prior justification to believe that p. Feit and Cullison must either deny such cases or restrict DJDD to cases of inferential knowledge. Neither option is attractive. Presumably, only rational creatures have justified beliefs. But it is hard to deny that non-rational and rational animals alike have the ability to know (cf. Littlejohn, 2015; Sylvan, 2018). If Feit and Cullison restrict DJDD to inferential knowledge, they rob themselves of the ability to handle non-inferential KFF-cases. To see that KFF can happen non-inferentially, consider the following variation on the handout case.

Handout*: Subitizing is the ability to perceptually recognize the number of a small group of items without counting. Suppose Fred is a super-subitizer. He does not have to count items between 1 and 100 but can typically just tell that there are 52 people in the room when there are. This ability, like all belief-forming methods, is fallible. Suppose Fred can also tell non-inferentially by looking whether two groups of items have the same cardinality or not, again up to 100.
For instance, he can look at the room full of people and a stack of handouts and tell if there are more handouts than people. One day, Fred comes to know that his 100 handout copies are sufficient even though his subitizing was slightly off, and he 'perceived' 53 people in the room when there were only 52. If KFF can happen non-inferentially, as Handout* suggests, an inferentially restricted version of DJDD does not cover the right range of cases to count as explanatorily basic.

(2) Feit and Cullison claim that Ted is justified in believing that the 100 handouts are sufficient even if he is informed of the falsity of his 53-people belief. A critic may object that this claim assumes that the false 53-people belief is dispensable. For how could the false 53-people belief be essential for the justification if the justification is not destroyed by the removal of the belief? It thus seems that what is doing the justificatory work in Feit and Cullison's reading of the handout case is not the false 53-people belief but the true ≈53-people belief. The upshot of the objection is that the DJDD view is unable to explain the KFF/FIG difference because it conflates KFF (where the false premise is essential) with KDF (where it is not).

Knowledge, Falsehood, and Defeat 145

This objection assumes that if Ted's justification for the conclusion-belief survives defeat in the counterfactual scenario where he learns that his explicitly believed premise is false, then Handout is treated as a case of KDF as opposed to a case of KFF. But is this assumption correct? If the ≈53-people belief is doing the justificatory work in the counterfactual situation, is it also playing the justificatory role in the actual situation? The answer is clearly 'no.' The primary grounds for one's belief in the actual situation can be different from the back-up grounds in a counterfactual situation where the primary grounds are compromised. Just because Ted's justification survives defeat does not mean that the ground for the conclusion is the true ≈53-belief rather than the false 53-belief. Luzzi (2019, p. 29) reminds us that '[c]ounterfactual support should not be confused with actual basing.' Counterfactual support is propositional justification (having justification to believe that p), whereas actual basing is doxastic justification (justifiedly believing that p) (Silva & Oliveira, 2022).

The problem with this defense on behalf of Feit and Cullison is that the propositional/doxastic justification distinction works against, not for, DJDD. Feit and Cullison (2011, p. 284) explicitly state that condition (iii) of DJDD refers to doxastic justification. Yet for condition (iv) to be met in cases of KFF (as Luzzi proposes), it has to be understood along the lines of propositional justification.
Condition (iv) states that the subject’s justification is knowledge-grade if it survives defeat in the counterfactual situation where she learns that her belief basis is false. The kind of justification that is able to survive the defeat of the actual belief basis is obviously not of the doxastic kind; it must be of the propositional kind. Thus, the notions of justification in conditions (iii) and (iv) are different. DJDD rests on an equivocation in the use of the term ‘justification.’ At the end of the day, we have to choose between two unattractive options: either DJDD refers to propositional justification throughout, in which case it conflates KFF with KDF, or it refers to doxastic justification throughout, in which case it conflates KFF with FIG.

9.4  de Almeida's NDT

de Almeida (2017) agrees with Feit and Cullison that it is possible to know something on the basis of a false premise – provided the false premise is not the negation of a defeater. We saw that Feit and Cullison maintain that the false 53-people belief in Handout is not the negation of a defeater because if Ted were informed that there are not 53 people in the audience, he would still be justified in believing that the 100 handout copies are enough. The negation of the false premise is not strong enough to defeat the justification of the target belief.

According to de Almeida, the reason the false 53-people belief is not the negation of a defeater is not, as Feit and Cullison claim, that the ¬53-people belief lacks the defeating power to erode the justification of the conclusion below the threshold required for knowledge. Instead, the true ¬53-people proposition does not qualify as a defeater because it does not meet the conditions on defeat. On the standard definition, a defeater is a proposition that, if added to the subject's total evidence, prevents the target belief from being justified and/or known (cf. Section 9.2). de Almeida strengthens the conditions on defeat. For a proposition to qualify as a defeater, according to de Almeida (2017, p. 308), it is not enough that it be counter-evidence for p; it must be 'good evidence' for ¬p. A defeater for the proposition that p is a true proposition that, if added to S's total evidence, 'provides a justification' for S to believe that ¬p (de Almeida, 2017, p. 308). de Almeida calls this view the new defeasibility theory of knowledge (NDT, for short).

To see NDT at work, consider Handout. The true ¬53-people proposition qualifies as a defeater for the justification that the 100 handout copies are enough:

only if the proposition that there aren't 53 people in the audience justifies the belief that, for all [Ted] knows, the 100 copies [he's] carrying may not be sufficient for that audience. But that proposition is false. … [I]t's just not plausible to think that there is genuine defeat in the case. (de Almeida, 2017, p.
309) Given the strengthened condition of defeat, the truth that there are not 53 people in attendance does not qualify as a defeater because it does not justify Ted in believing the negation of the conclusion – it does not justify him in believing that there are not enough handouts.

The problem with de Almeida's explanation of the 'benign falsehood phenomenon' is that it makes it too easy to know. According to NDT, knowledge is compatible not only with undercutting and partial defeat, but also with gettierization. Let me explain. What de Almeida calls a 'defeater,' Pollock calls a 'rebutter.' A rebutter is a reason to think that the target belief that p is false. An undercutter is not evidence against p, but evidence that the belief is not based on a truth-indicator of p (Pollock, 1986, pp. 38–39). The consequence of defining defeaters as rebutters, as de Almeida does, is that undercutters no longer qualify as defeaters. And if an undercutter no longer qualifies as a defeater, then, given that indefeasibly justified true belief suffices for knowledge, there is no reason not to attribute knowledge in cases of undercutting defeat.
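The contrast between the two notions of defeat at issue can be put schematically. The notation below is mine, not de Almeida's: E stands for S's total evidence, and J(X, p) abbreviates 'X justifies S in believing p'.

```latex
% Standard definition (Section 9.2): a true proposition d defeats S's
% justification for p iff adding d to S's total evidence prevents the
% target belief from being justified:
\[
d \text{ defeats } p \quad\iff\quad d \text{ is true} \;\wedge\; \neg J\!\left(E \cup \{d\},\, p\right)
\]
% de Almeida's NDT: d must moreover be good evidence for the negation,
% i.e. adding d to the evidence must justify believing not-p:
\[
d \text{ defeats } p \quad\iff\quad d \text{ is true} \;\wedge\; J\!\left(E \cup \{d\},\, \neg p\right)
\]
```

On the second, stronger reading, the true proposition that there are not 53 people in attendance fails to defeat Ted's conclusion, since adding it to his evidence does not justify believing that the 100 copies are insufficient.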

Consider the following example of undercutting defeat. The true proposition There is a red light shining on the wall is an undercutting defeater for my perceptually based belief that the wall is red, and it prevents my belief from qualifying as knowledge because I am not in a position to rule out that the wall is really white. However, the proposition There is a red light shining on the wall does not meet de Almeida's definition of defeat, for it does not justify the belief that, for all I know, the wall is not red. The proposition that the wall is not red is only justified if I am in a position to eliminate the relevant alternative that the wall is red and that it is illuminated by a red light.16

Just as NDT cannot handle undercutting defeat, it cannot handle partial defeat. A partial defeater is a true proposition whose addition results in a loss of some, but not all, of the justification of the belief that p.17 If the belief that p barely meets the threshold for knowledge-grade justification, a partial defeater might be powerful enough to destroy the belief's positive epistemic status but be too weak to provide 'positive evidence' for the belief that not-p.18

Last but not least, de Almeida's NDT is too weak to eliminate Gettier cases. To see this, consider a typical FIG-case, such as Havit/Nogot. According to NDT, for Smith to not know that someone in the office owns a Ford (p), the following has to be the case: if Smith were informed that Nogot does not own a Ford, he would be justified in believing that no one in the office owns a Ford (¬p). But this is not the case. The negation of the falsehood is not positive evidence for the negation of the conclusion. Just because Nogot does not own a Ford does not mean that no one in the office owns a Ford. Contrary to what de Almeida (2017, p. 310) claims, NDT is unable to differentiate between KFF and FIG.
In neither case does the negation of the falsehood on which the justification is based qualify as positive evidence for the belief in the negation of the conclusion.

NDT mischaracterizes not only classical Gettier cases but also failed threat cases such as Goldman's (1976, pp. 772–773) Fake Barn. If Henry had sufficient reason to believe that there are fake barns in the area, he would no longer be justified in believing that what he is looking at is a genuine barn. Yet the true proposition that there are fake barns in the environment, by itself, does not justify the belief in the negation of the conclusion – it does not justify the belief that Henry is looking at a papier-mâché barn. In sum, by raising the bar for what counts as a defeater, NDT makes it too easy to know.

9.5  Klein on UFB

The defeasibility theory has it that a gettierized belief does not qualify as knowledge because the justification is not good enough to withstand the addition of true beliefs to the subject's body of evidence (cf. de Almeida & Fett, 2016). The problem with the defeasibility theory, according to Klein (2008, p. 38), is that it does not allow for KFF. KFF is said to be impossible because the justification for a belief based on a false premise is straightforwardly defeated by the negation of the false premise. Consider once again the handout case. Ted does not know that the 100 handout copies are enough because the justification for the belief – the proposition that there are 53 people in attendance – is defeated by the truth that the number of people in the room is not 53. If Ted were made aware of the fact that he miscounted the number of people in the room, his degree of justification would drop below the threshold required for knowledge.19

In an effort to countenance knowledge-yielding falsehoods within the defeasibility framework and to distinguish these falsehoods from FIG-cases, Klein supplements the defeasibility theory with an account of 'useful false beliefs' (UFB, for short). A UFB is a false belief that plays 'an essential role in both the justification and causal production' of knowledge (Klein, 2008, pp. 25–26). According to Klein (2008, pp. 48–50), a false belief q is useful in the sense of being the basis for a doxastically justified true belief p which can, barring gettierization, constitute knowledge if and only if the following conditions are met:

(i) S's belief that q is doxastically justified,
(ii) the belief that q is essential in the causal production of the belief that p,
(iii) q propositionally justifies p,
(iv) q relevantly entails some truth t,
(v) t propositionally justifies p, and
(vi) whatever doxastically justifies the belief that q for S also propositionally justifies the belief that t for S.

The basic idea is that a knowledge-yielding false premise (or UFB) must entail a true proposition that is propositionally justified by whatever justifies the false premise and which justifies the known conclusion.
This true proposition, which is doing the justificatory work, need not be believed by S. To illustrate how Klein's proposal is supposed to work, consider Handout. Ted is doxastically justified in believing the false proposition that there are 53 people in attendance. This proposition entails the truth that there are approximately 53 people in attendance, which, in turn, propositionally justifies the target proposition that the 100 handout copies are enough. Whatever doxastically justifies Ted in falsely believing that there are 53 people in attendance also propositionally justifies the true proposition that there are fewer than 100 people in attendance. This is why the false 53-people belief qualifies as a knowledge-generating UFB.

Given Klein's UFB-enriched defeasibility theory, the conditions of justification from truth are different from the conditions of justification from falsehood. When the premise q* is true, indefeasible justification requires that there be no undefeated defeater of the justification (i) of any of the propositions in the inferential path up to and including q* and (ii) of any proposition in the inferential path between q* and the target proposition p. Yet when the premise q is false, indefeasible justification requires only that there be no undefeated defeater of the justification (i) of any of the propositions in the inferential path up to and including t (the truth entailed by q) and (ii) of any proposition between t and the target proposition p (Klein, 2008, p. 50). Klein solves the problem UFBs pose for the original defeasibility theory by claiming that the inferential path leading up to and including q need not live up to the defeasibility standards. The defeasibility standards are applied to the perfect inferential path (from t to p) the subject does not take, not to the imperfect inferential path (from q to p) the subject actually takes. Klein writes:

a belief [p], causally based upon a false belief … is knowledge just in case there is some t [a truth entailed by the false belief] for which conditions [(i)-(iv)] are satisfied, and there is no genuine defeater of the propositional justification of any of the propositions in the evidential path to [p] that includes t. (Klein, 2008, p. 57)

Having presented Klein's account of UFB, I will make two points. First, the notion of a UFB does not neatly fit either of the two categories we have been discussing – KFF and KDF. A UFB combines aspects of both KFF and KDF. Second, the crux of Klein's position is that UFBs are indistinguishable from certain FIGs. Let me explain these points in turn.

(1) First, consider the difference between UFB and KDF. The hallmark of KDF is that the falsehood plays a causal role in bringing about the conclusion and that it contributes to the epistemic status of the conclusion but that its justificatory role is dispensable. UFBs are not dispensable in the same way. They are causally essential in so far as 'if the false belief were simply removed from the actual causal chain that resulted in knowledge, no causal chain resulting in the cognition would remain' (Klein, 2008, p. 41).
What is more, UFBs are epistemically important in two respects: the falsehood propositionally justifies the target belief p (by condition (iii)), and it entails the truth t, which also propositionally justifies the target belief p (by condition (v)).

Second, UFB is different not only from KDF but also from KFF. Although a UFB provides propositional justification for the target belief p (by condition (iii)), its contribution is not essential. The essential justificatory role is played by the truth t, which is entailed by the false belief q and which the subject need not have cognitive access to. If it were not for the propositional justification of the truth t, the conclusion p would not be (completely) propositionally justified and known. Klein is right in claiming that if the false belief q were removed from the actual causal chain that results in the conclusion, the conclusion p would not be generated. However, it is not the false belief's justificatory status that is essential for the generation of knowledge. What is doing the work here is the fact that the false belief entails a propositionally justified truth of which the subject may be ignorant. Thus, the notion of a UFB cuts across the KFF/KDF distinction.

(2) The major shortcoming of Klein's position is that the definition of a UFB is too weak to rule out FIG-cases. To drive this point home, compare a FIG-case such as Nogot/Havit with an alleged KFF-case such as Handout. In Havit/Nogot, the false belief that Nogot owns a Ford implies the true proposition that either Nogot owns a Ford or Havit owns a Ford, which propositionally justifies the conclusion that someone in the office owns a Ford. In Handout, the false belief that there are 53 people in attendance implies the true proposition that either there are 53 or 52 people in attendance, which propositionally justifies the conclusion that 100 handout copies are sufficient. Why should the disjunctive truth in Handout yield knowledge-grade justification but not the disjunctive truth in Nogot/Havit? Klein maintains that in Nogot/Havit:

the only evidence for the disjunction [either Nogot owns a Ford or Havit owns a Ford] is the false disjunct [Nogot owns a Ford]. In other words, although either Nogot owns a Ford or Havit owns a Ford propositionally justifies someone in S's class owns a Ford, there is a genuine defeater of that justificational path prior to the disjunction, and hence S does not know that someone in the class owns a Ford. (Klein, 2008, pp. 56–57)

The defeater in question is the truth that Nogot does not own a Ford. This truth defeats the justification for the disjunctive proposition. Thus, what sets Handout (KFF) apart from Nogot/Havit (FIG), according to Klein, is that there is no genuine defeater of the inferential path leading up to the disjunction that either there are 52 or 53 people in attendance.

Why does Klein hold that there is no genuine defeater of the inferential path leading up to the disjunction that either there are 52 or 53 people in attendance? Is the truth that there are not 53 people in attendance not such a defeater? No.
For remember that the path leading to the disjunction either there are 52 or 53 people may use only whatever doxastically justifies Ted in falsely believing that there are 53 people in attendance (maybe something like 'I counted 53, but there is a slight chance I counted the same person twice'). This is due to condition (vi) on UFB. So even if Ted were informed that there are not 53 people in attendance, he would still be justified in believing that either there are 52 or 53 people in attendance. The upshot is that in FIG-cases, but not in KFF-cases, the inferential path is defeated; or so Klein claims.

To see the problem with Klein's account of UFB, consider a variation of the Nogot/Havit case – let us call it Nogot/Havit* – where Nogot would not falsely claim to own a Ford unless Havit did own a Ford.20 On this version of the case, Smith's belief that someone in the office owns a Ford is reliably (safely) formed: the method could not easily have led to a false belief. Nevertheless, Smith's belief still fails to be knowledge because there is epistemic luck involved. Given that reliabilists define epistemic luck in terms of lack of modal stability, they have a hard time acknowledging cases of justified true belief that suffer from epistemic luck even when formed through a reliable method. Defeasibility theorists like Klein do not have this problem. The reason is that they define knowledge-undermining epistemic luck in terms of the presence of undefeated defeaters, as opposed to the absence of a modally robust truth-belief link.

In Nogot/Havit*, condition (vi) of UFB is satisfied: whatever doxastically justifies the false belief that Nogot owns a Ford also propositionally justifies the true proposition that either Nogot owns a Ford or Havit owns a Ford. Yet the truth that Nogot does not own a Ford is a defeater of the belief that either Nogot owns a Ford or Havit owns a Ford. The reason is that Smith is unaware of the link between Nogot's shamming and Havit's car ownership. If he were informed that Nogot does not own a Ford, he would cease to be justified in believing that someone in the office owns a Ford. In Handout, however, Ted is aware of the possibility of counting the same person twice and therefore recognizes that even if there are not 53 people in attendance, there could still be 52 people in attendance. This suggests that the difference between Handout (UFB) and Nogot/Havit* (FIG) has to do with whether the subject recognizes the connection between the disjuncts. The question of whether the inferential path leading up to the disjunction is defeated is a red herring. This then points to a serious problem in the account of UFB.
Unless the subject recognizes the connection between the disjuncts of the true proposition t that propositionally justifies the target proposition p, UFB is indistinguishable from certain kinds of FIG. And if UFB does require that the subject recognizes the connection between the disjuncts of t, then the falsehood q plays no epistemic role whatsoever. For then the subject is doxastically justified in believing the truth t and infers p from t.

9.6 Conclusion

Given that there are genuine cases of knowledge from falsehood (KFF), what distinguishes them from Gettier cases where the subject relies on reasoning from falsehood (FIG)? We saw that none of the three defeater accounts discussed in this chapter can properly explain the difference between KFF and FIG. Feit and Cullison must either deny animal knowledge or restrict their 'doesn't-justify-the-denial-of-a-defeater' view (DJDD) to inferential knowledge, in which case they could not account for non-inferential KFF. Moreover, DJDD conflates KFF either with KDF or with FIG. de Almeida's 'new defeasibility theory' (NDT) is also unable to differentiate between KFF and FIG. Finally, Klein's 'useful false beliefs' (UFB) are indistinguishable from certain kinds of FIG. A positive account of the difference between KFF and FIG is the topic of another paper.21

Notes

1. Among the proponents of counter-closure are Aristotle (1994, pp. 72a25–72a30), Descartes (1985, p. 15), Locke (1975, pp. 533–534), Kant (1998, pp. A300/B356–357), and Russell (1997, pp. 76–77). In recent times, counter-closure has been endorsed, among others, by Armstrong (1973, p. 152), Audi (2003, p. 164), Harman (1973, pp. 47, 120), Kripke (2011, p. 202), Lehrer (1974, p. 220), Nozick (1981, p. 231), and Williamson (2007, pp. 145–147). For the history of counter-closure, see Borges (2017, pp. 280–286), de Almeida (2017, pp. 292–293), Luzzi (2019, pp. 1–7), and Hawthorne and Rabinowitz (2017, p. 325 n1).
2. I use 'justification' to cover both internalist and externalist justification.
3. For more alleged cases of knowledge from falsehood, see Coffman (2008, pp. 190–191), Klein (2008, pp. 36–37), and Warfield (2005, pp. 407–408).
4. Warfield (2005, p. 412). I take 'epistemization' to mean 'justification.'
5. I use '≈' to signify approximation and '¬' to signify negation.
6. Among the proponents of the proxy-premise strategy for (alleged) cases of knowledge from falsehood are Ball and Blome-Tillmann (2014), Lee (2021), and Schnee (2015). Buford and Cloos (2018) provide a dilemma for those wanting to reject the possibility of KFF. Borges (2017, pp. 286–289) and Montminy (2014, p. 466) argue that Ted's true belief that 100 handout copies are enough depends neither evidentially nor causally on the false belief that there are 53 people in attendance.
7. The possibility of KDF is acknowledged, among others, by Goldman (1967, p. 368), Lehrer (1965, pp. 169–171), Saunders and Champawat (1964, p. 9), and Swain (1981, pp. 149–150).
8. In Bernecker (2022), I argue that it is a criterion of adequacy for any theory of knowledge that it solves the false lemma problem.
9. See Pollock and Cruz (1999, p. 37) and Pollock (1974, pp. 41–42). Pollock characterizes a defeater as a reason, not a proposition.
10. For example, Bergmann (2006, ch. 6), Grundmann (2009), and Plantinga (2000, pp. 359–366).
11. Feit and Cullison (2011, p. 303 n16). One reason for them rejecting the defeasibility label may be that their definition of a defeater only relates to the target proposition p, in an expression of the form 'S knows that p.' This contrasts with the way that Klein, for instance, speaks of defeat of the propositional justification that one member of an evidential chain provides for another.
12. Klein (1996, p. 106) discussed KFF-cases long before Warfield (2005) published his seminal paper. Klein's account of useful false beliefs was first presented at a conference in 1999 (see Klein, 2008, p. 25).
13. Unlike Gettier's own job/coin case, this case does not rest on a confusion of the referential and the attributive sense of the definite description 'the man who will get the job.' See Biro (2017).
14. For FIG-cases without false grounds, see Almeder (1973), Feldman (1974), Lehrer (1965, p. 170), and Swain (1972, pp. 429–430). Advocates of the no-essential-false-grounds approach are Feldman (2003, p. 37), Harman (1973, pp. 46–50, 120–124), Lehrer (1974, pp. 219–220), Levin (2006), and Lycan (2006, pp. 156–157).

15. Feit and Cullison (2011, p. 295). According to dogmatism, any reason r that justifies S in believing p provides S with (defeasible) justification for the general claim that any rebutting defeater d for p is misleading. And this claim implies that if r justifies believing p, r also justifies one in believing the claim that every rebutting defeater in class d is misleading. Since rebutting defeaters to p are reasons to think p is false, being justified in thinking all members of d are misleading is a reason to think d is false. Hence, given dogmatism, the truth of condition (iii) entails the falsity of condition (iv). I owe this point to Paul Silva.
16. Another problem with NDT is that many epistemologists think that there are higher-order defeaters and that they are distinct from undercutters (Greco, 2014; Huemer, 2011; Silva, 2018; Titelbaum, 2015). For instance, having evidence to think that it is irrational to believe p is said to defeat the rationality of believing p.
17. Consider the following example of a partial defeater, adapted from Thune (2010, pp. 357–358). I seem to remember from high school geography that Sofia is the capital of Bulgaria. In an effort to check my memory, I pull an atlas off the shelf, which lists Sofia as the capital of Bulgaria. Consequently, I am more strongly inclined to believe the relevant proposition. But suppose you (whom I trust) show me that the atlas is not up to date but a reprint of the original 1960 publication, and you point out that the capital of Bulgaria may have changed since then. In this case, your testimony functions as a partial defeater. It does not indicate that I am actually wrong but that I might be wrong.
18. I disagree with Plantinga (1993, p. 218 n4), who assumes that S's justified belief that p is defeated by S's justified belief that not-p if and only if S's belief that not-p is justified to at least the same degree as S's justified belief that p. See Casullo (2003, p. 44).
19. As was explained in Section 9.3, Feit and Cullison maintain that if Ted were informed that there are not 53 people in attendance, he would still be justified in believing that the 100 handout copies are enough. According to Klein (2008, p. 38; 2017, p. 54), this assessment is incompatible with the standard defeasibility theory.
20. For more examples of reliably formed beliefs that suffer from epistemic luck, see Bernecker (2011, p. 138), Coffman (2010, p. 246), Ginet (2010, p. 270), Goldberg (2015, pp. 277–278), Hiller and Neta (2007, pp. 307–308), Hiller (2013, p. 10), and Lackey (2006, p. 288).
21. For comments on an earlier draft, I am grateful to Luis Rosa, Paul Silva, and Wes Siscoe. I have also benefited from being a Senior Research Associate at the African Centre for Epistemology and Philosophy of Science, University of Johannesburg. Work on this chapter was supported by an Alexander von Humboldt Professorship Award.

References

Almeder, R. (1973). Defending Gettier's counterexamples. Australasian Journal of Philosophy, 53, 58–60.
Aristotle (1994). Posterior analytics. Translated with a commentary by J. Barnes (2nd ed.). Clarendon Press.
Armstrong, D. (1973). Belief, truth and knowledge. Cambridge University Press.
Arnold, A. (2013). Some evidence is false. Australasian Journal of Philosophy, 91, 165–172.

Audi, R. (2003). Epistemology: A contemporary introduction to the theory of knowledge. Routledge.
Balcerak Jackson, M., & Balcerak Jackson, B. (2013). Reasoning as a source of justification. Philosophical Studies, 164, 113–126.
Ball, B., & Blome-Tillmann, M. (2014). Counter closure and knowledge despite falsehood. Philosophical Quarterly, 64, 552–568.
Baumann, P. (2020). Close to truth. Philosophia, 48, 1769–1775.
Bergmann, M. (2006). Defeaters. In M. Bergmann (Ed.), Justification without awareness (pp. 153–177). Oxford University Press.
Bernecker, S. (2011). Keeping track of the Gettier problem. Pacific Philosophical Quarterly, 92, 127–152.
Bernecker, S. (2022). Knowledge from falsehood and truth-closeness. Philosophia, 50, 1623–1638.
Bernecker, S., & Grundmann, T. (2019). Knowledge from forgetting. Philosophy and Phenomenological Research, 98, 525–540.
Biro, J. (2017). Non-Pickwickian belief and 'The Gettier problem'. Logos and Episteme, 8, 47–69.
Borges, R. (2017). Inferential knowledge and the Gettier conjecture. In R. Borges, C. de Almeida, & P. D. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 273–291). Oxford University Press.
Buford, C., & Cloos, M. (2018). A dilemma for the knowledge despite falsehood strategy. Episteme, 15, 166–182.
Casullo, A. (2003). A priori justification. Oxford University Press.
Clark, M. (1963). Knowledge and grounds. A comment on Mr. Gettier's paper. Analysis, 24, 46–48.
Coffman, E. J. (2008). Warrant without truth? Synthese, 162, 173–194.
Coffman, E. J. (2010). Misleading dispositions and the value of knowledge. Journal of Philosophical Research, 35, 241–258.
de Almeida, C. (2017). Knowledge, benign falsehoods, and the Gettier problem. In R. Borges, C. de Almeida, & P. D. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 292–311). Oxford University Press.
de Almeida, C. (2019). Review of F. Luzzi, Knowledge from non-knowledge (Cambridge University Press). Notre Dame Philosophical Reviews. https://ndpr.nd.edu/reviews/knowledge-from-non-knowledge-inference-testimony-and-memory/
de Almeida, C., & Fett, J. R. (2016). Defeasibility and Gettierization: A reminder. Australasian Journal of Philosophy, 94, 152–169.
Descartes, R. (1985). Rules for the direction of the mind. In J. Cottingham, R. Stoothoff, & D. Murdoch (Eds.), The philosophical writings of Descartes (vol. 1). Cambridge University Press.
Feit, N., & Cullison, A. (2011). When does falsehood preclude knowledge? Pacific Philosophical Quarterly, 92, 283–304.
Feldman, R. (1974). An alleged defect in Gettier examples. Australasian Journal of Philosophy, 52, 68–69.
Feldman, R. (2003). Epistemology. Prentice Hall.
Fitelson, B. (2010). Strengthening the case for knowledge from falsehood. Analysis, 70, 666–669.

Fitelson, B. (2017). Closure, counter-closure, and inferential knowledge. In R. Borges, C. de Almeida, & P. D. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 312–323). Oxford University Press.
Gettier, E. (1963). Is justified true belief knowledge? Analysis, 23, 121–123.
Ginet, C. (2010). Causal theories of knowledge. In J. Dancy, E. Sosa, & M. Steup (Eds.), Blackwell companion to epistemology (2nd ed., pp. 268–272). Blackwell.
Goldberg, S. C. (2001). Testimonially based knowledge from false testimony. Philosophical Quarterly, 51, 512–526.
Goldberg, S. C. (2015). Epistemic entitlement and luck. Philosophy and Phenomenological Research, 91, 273–302.
Goldman, A. I. (1967). A causal theory of knowing. Journal of Philosophy, 64, 357–372.
Goldman, A. I. (1976). Discrimination and perceptual knowledge. Journal of Philosophy, 73, 771–791.
Greco, D. (2014). A puzzle about epistemic akrasia. Philosophical Studies, 167, 201–219.
Grundmann, T. (2009). Reliabilism and the problem of defeaters. Grazer Philosophische Studien, 79, 65–76.
Grundmann, T. (2020). Saving safety from counterexamples. Synthese, 197, 5161–5185.
Harman, G. (1973). Thought. Princeton University Press.
Hawthorne, J. (2004). Knowledge and lotteries. Oxford University Press.
Hawthorne, J., & Rabinowitz, D. (2017). Knowledge and false belief. In R. Borges, C. de Almeida, & P. D. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 325–344). Oxford University Press.
Hiller, A. (2013). Knowledge essentially based upon false belief. Logos and Episteme, 4, 7–19.
Hiller, A., & Neta, R. (2007). Safety and luck. Synthese, 158, 303–313.
Hilpinen, R. (1988). Knowledge and conditionals. Philosophical Perspectives, 2, 157–182.
Huemer, M. (2011). The puzzle of metacoherence. Philosophy and Phenomenological Research, 82, 1–21.
Kant, I. (1998). Critique of pure reason. Translated and edited by P. Guyer & A. Wood. Cambridge University Press.
Klein, P. D. (1996). Warrant, proper function, reliabilism, and defeasibility. In J. Kvanvig (Ed.), Warrant in contemporary epistemology: Essays in honor of Plantinga's theory of knowledge (pp. 97–130). Rowman & Littlefield.
Klein, P. D. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–61). Oxford University Press.
Klein, P. D. (2010). Peter Klein. In J. Dancy, E. Sosa, & M. Steup (Eds.), A companion to epistemology (2nd ed., pp. 156–163). Wiley-Blackwell.
Klein, P. D. (2017). The nature of knowledge. In R. Borges, C. de Almeida, & P. D. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 35–56). Oxford University Press.
Kripke, S. (2011). Philosophical troubles (vol. 1). Oxford University Press.
Lackey, J. (2006). Pritchard's epistemic luck. Philosophical Quarterly, 56, 284–289.

156 Sven Bernecker

Lee, K. Y. (2021). Reconsidering the alleged cases of knowledge from falsehood. Philosophical Investigations, 44, 151–162.
Lehrer, K. (1965). Knowledge, truth and evidence. Analysis, 25, 168–175.
Lehrer, K. (1974). Knowledge. Oxford University Press.
Levin, M. (2006). Gettier cases without false lemmas? Erkenntnis, 64, 381–392.
Littlejohn, C. (2015). Knowledge and awareness. Analysis, 75, 596–603.
Locke, J. (1975). An essay concerning human understanding (P. H. Nidditch, Ed.). Clarendon Press.
Luzzi, F. (2010). Counter-closure. Australasian Journal of Philosophy, 88, 673–683.
Luzzi, F. (2019). Knowledge from non-knowledge: Inference, testimony and memory. Cambridge University Press.
Lycan, W. G. (2006). On the Gettier problem problem. In S. Hetherington (Ed.), Epistemology futures (pp. 148–168). Oxford University Press.
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44, 463–475.
Murphy, P. (2013). Another blow to knowledge from knowledge. Logos and Episteme, 4, 311–317.
Nozick, R. (1981). Philosophical explanations. Oxford University Press.
Plantinga, A. (1993). Warrant: The current debate. Oxford University Press.
Plantinga, A. (2000). The nature of defeaters. In A. Plantinga, Warranted Christian belief (pp. 359–366). Oxford University Press.
Pollock, J. L. (1974). Knowledge and justification. Princeton University Press.
Pollock, J. L. (1986). Contemporary theories of knowledge. Rowman & Littlefield.
Pollock, J. L., & Cruz, J. (1999). Contemporary theories of knowledge (2nd ed.). Rowman & Littlefield.
Russell, B. (1997). The problems of philosophy (Introduction by J. Perry). Oxford University Press.
Saunders, J. T., & Champawat, N. (1964). Mr. Clark's definition of 'knowledge'. Analysis, 25, 8–9.
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12, 53–74.
Silva, P. (2018). Explaining enkratic asymmetries: Knowledge-first style. Philosophical Studies, 175, 2907–2930.
Silva, P., & Oliveira, L. R. G. (2022). Introduction. In P. Silva & L. R. G. Oliveira (Eds.), Propositional and doxastic justification: New essays on their nature and significance. Routledge.
Sorensen, R. (2016). Fugu for logicians. Philosophy and Phenomenological Research, 92, 131–144.
Steup, M. (1996). An introduction to contemporary epistemology. Prentice Hall.
Swain, M. (1972). An alternative analysis of knowing. Synthese, 23, 423–442.
Swain, M. (1981). Reasons and knowledge. Cornell University Press.
Sylvan, K. (2018). Knowledge as a non-normative relation. Philosophy and Phenomenological Research, 97, 190–222.
Thune, M. (2010). Partial defeaters and the epistemology of disagreement. Philosophical Quarterly, 60, 355–372.
Titelbaum, M. G. (2015). Rationality's fixed point (or: In defense of right reason). Oxford Studies in Epistemology, 5, 253–294.

Turri, J. (2011). Manifest failure: The Gettier problem solved. Philosophers' Imprint, 11, 1–11.
Turri, J. (2012). Gettier's wake. In S. Hetherington (Ed.), Epistemology: The key thinkers (pp. 214–229). Continuum.
Turri, J. (2019). Knowledge from falsehood: An experimental study. Thought, 8, 167–178.
Warfield, T. A. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.
Williamson, T. (2007). The philosophy of philosophy. Blackwell.

Part II

Beyond the Possibility of Knowledge from Non-Knowledge

Section III

Reasoning, Hinges, and Cornerstones

10 The Developmental Psychology of Sherlock Holmes: Counter-Closure Precedes Closure

Roy Sorensen

Knowledge by deduction is generally discussed prospectively. This perspective is natural for a lone reasoner, self-consciously spreading his knowledge from a premise to a conclusion. But a retrospective, social perspective is suggested by developmental psychology and the history of logic.1

10.1  The Cradle of Epistemology

A toddler hides his secrets much as a dog buries a bone. Dogs infer but lack the concept of inference. By age four, children reliably attribute false beliefs. They presuppose others have mental states. Yet they do not attribute purposeful thinking. A four-year-old who deduces the color of an object by disjunctive syllogism does not realize that others can use the same process of elimination to infer the color of an object that had never been seen. Reminding the child of premises available to others does not lead him to infer that others may infer the conclusion. There is plenty of inference but no meta-inference.

At six dawns awareness that one has been making inferences (Moshman, 2015, p. 76). Introspected certainty, effort, and control set the landscape for an inferential cogito: 'I infer, therefore, there are inferences' (Pillow, 2002, p. 789). The lag between first-person attribution of inference and third-person attribution is short. Deception improves. In addition to making their lies more plausible, children learn how to deceive by telling misleading truths.

The discovery of inference also puts children in a position to grade inferences. Children ages six to ten have more confidence in deductions than guesses. They justify their conclusions by citing relevant premises. Between eight and ten, children are more confident in deductions than inductions. At 13, the 'child' has the adult hierarchy of certainty: deduction, induction, informed guess, pure guess. Modality follows the same trajectory as inference. At six dawns awareness of the distinction between contingent and necessary truths.

DOI: 10.4324/9781003118701-16

By 11, children begin to grasp validity. They acknowledge that a good argument could have a false conclusion. At about 13, children achieve an adult level of meta-logical awareness (and so are accorded adult status in cultures that lack our interposition of 'adolescent' between child and adult). Further improvement depends on formal education rather than maturation. Logicians have the highest stage of development, stage 4, in which understanding of the consequence relation is explicit and systematic (Moshman, 2015, p. 78).

All of this progress is meta-logical rather than logical. As soon as children can be meaningfully measured for reasoning, they make the same basic inferences as adults (much as they can produce the same basic sentences as an adult). This all-or-nothing competence in logic is also predicted by W. V. O. Quine's (1970) theory of translation. Any translation that does not preserve classical tautologies is a mistranslation. Quine concludes classical logic is a pre-condition of intelligible speech, not a separate competence. Just as he attributes classical logic to the natives of exotic cultures, Quine is equally committed to attributing classical logic to children.

Psychologists of reasoning teach that children are rational and adults are irrational. The decline in rationality is driven by a publication bias toward surprising results. Typically, the standards of rationality are more lenient for children than adults. Developmental psychologists are assessing the competence of children. The distractions that precipitate adult performance errors are therefore avoided for most studies of young children. Some childish outperformance of adults survives imposition of uniform standards. Young children (and animals) are less susceptible to the sunk cost fallacy than older children and adults (Arkes & Ayton, 1999). Children under eight also outperform adults on some immediate inferences (Noveck, 2001). The children are deaf to distracting conversational implicatures and so do not mistake them for entailments. Cognitively overloading adults impairs their pragmatic ability and raises their performance to the level of seven-year-old children (De Neys & Schaeken, 2007).

A desire to keep secrets is best fulfilled by inferring what others will be in a position to know and then calculating the likelihood they will exploit those inferential opportunities. Young children presume all knowledge flows from an empirical source. They secure the ignorance of others by blocking each suspected source of knowledge. Babies first block perception. As they learn language, they block testimony (and later adulterate it with lies and misleading truths). When those targeted for ignorance still manage to learn, children are baffled. Knowledge by deduction emerges as a hypothesis that explains the anomalous knowledge of others and suggests how to prevent future knowledge.

Consider 12-year-old Irene Adler. She is surprised that her schoolmate Sherlock now knows her age. Did Irene's brother break his promise not to tell Sherlock her age? Before accusing anyone, Irene explores the possibility that she herself inadvertently revealed her age. With the hindsight afforded by her knowledge that Sherlock suddenly knows her age, Irene notices two clues. First, she and Sherlock are attending the seventh wedding anniversary of Irene's mother. Second, Irene just mentioned that her twin sister was five at the wedding. Sherlock could have just put five and seven together.

But did Sherlock actually know by deduction? His friend Watson participated in the same conversation. But Watson did not notice the implication. Watson understands deductions after they are performed publicly by others. Watson lacks Sherlock's ability to spontaneously deduce interesting conclusions. Passive understanding of deduction precedes active construction of deductive arguments.

Sherlock has a reputation for satisfying a sufficient condition for knowledge of a conclusion that Fred Dretske (1970) dubbed 'closure': if a subject S knows a premise P while also knowing that P entails a conclusion C, then S knows C. Triggering this sufficient condition does not invariably lead to an epistemic improvement. Sometimes Sherlock's deductions are sound but viciously circular. In other cases, the epistemic improvement is overdetermined by other sources of knowledge. For instance, the suspense needed for learning by deduction is spoiled by Sherlock's smarter brother Mycroft. When Mycroft announces the conclusion ahead of Sherlock, Sherlock thereby knows the answer by testimony and so Sherlock cannot learn it by deduction (or by any other means).

10.2  From Gainsaying to Argument

Sherlock is Watson's hero. But Irene Adler knows Sherlock frequently misses opportunities. There are too many implications to think about! Sherlock does not advertise how much time he wastes pursuing consequences that are not interesting. Often, Sherlock's zeal for deduction leads him to deductions that are symptoms of knowledge rather than causes. Sherlock puffs out syllogisms like smoke from a locomotive. The syllogisms are a sign of progress but not the cause.

Some say circular deductions are smoky re-assertions rather than arguments. Instead of overtly gainsaying, the maturing child covertly gainsays by restating the thesis in different words.2 Listeners who disagree with the 'conclusion' can recognize the circularity more easily than those who have the same perspective as the covert gainsayer. This advantage of actual occupation creates a division of labor in which a circular argument is grudgingly amended to yield a deduction that is informative to the hearer.

If Irene over-attributes knowledge by deduction, she will fail to protect herself against snoops and informants. The inference rule of counter-closure emerges as a checklist that exposes false tracing of knowledge to deduction. S knows C by deduction from P only if:

1 Knowledge of Premise: S knows P.
2 Knowledge of Entailment: S knows P entails C.
3 Knowledge Sustenance: S sustained that knowledge throughout the deduction.

If any of these pre-conditions fails, counter-closure eliminates deduction as the source of knowledge of C. Inquiry should switch to the hypothesis that knowledge comes from another source such as testimony, perception, or memory. The hypothesis that Sherlock learned by deduction can be investigated immediately and discreetly. The project is just a continuation of earlier efforts to control what others can infer from what is said or shown.
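Stated schematically, the contrast between the two principles can be put in epistemic-logic notation. The rendering below is my paraphrase of the definitions in the text; the deduction superscript is an informal gloss, not standard notation:

```latex
% Closure: a sufficient condition for knowing the conclusion.
\[
  \big(K_S P \wedge K_S(P \rightarrow C)\big) \rightarrow K_S C
\]
% Counter-closure: necessary conditions on knowing C *by deduction from P*.
\[
  K_S^{\mathrm{ded}(P)} C \rightarrow
  \big(K_S P \wedge K_S(P \rightarrow C) \wedge \mathrm{Sustained}_S(P,\, P \rightarrow C)\big)
\]
```

Read this way, closure licenses an attribution of knowledge, while counter-closure is applied eliminatively: if any conjunct on the right-hand side fails, deduction is ruled out as the source of S's knowledge of C.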

10.3  Self-Actualizing the Potential Knowledge of Others

Closure promises a side-benefit to those who apply counter-closure. If I know that you could have known p by deduction, I thereby actually gain knowledge by deduction. For instance, I initially know two members of a family of eight were born on the same day of the week because I know the family's twins were born on the same day. After I realize that you might have deduced a shared day of birth by the pigeonhole principle (eight people but only seven days of the week), I widen the basis of my knowledge. My knowledge could now survive the revelation that one twin was born before midnight and the other after midnight. Dwelling on your possible deduction has the byproduct of giving me actual knowledge by deduction.

On other occasions, I learn that you acquired knowledge by deduction without knowing the specific deduction you employed. Proof that there is some proof or other is itself proof. Knowledge by meta-deduction is itself knowledge by deduction.
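The pigeonhole inference in this example can be checked mechanically. A minimal sketch (the function name is mine): eight birthdays drawn from only seven weekdays must collide, because a tuple of eight entries can contain at most seven distinct values.

```python
from itertools import product

def has_shared_day(days):
    """True iff at least two entries in the tuple coincide."""
    return len(set(days)) < len(days)

# Exhaustive check of the pigeonhole principle on a small instance:
# with 4 people and only 3 weekdays, every one of the 3**4 = 81
# assignments contains a collision.
assert all(has_shared_day(d) for d in product(range(3), repeat=4))

# The eight-person case holds for the same reason: len(set(days)) is
# at most 7, which is less than 8, whatever the days turn out to be.
midnight_case = (0, 1, 2, 3, 4, 5, 6, 0)  # twins born on different days
assert has_shared_day(midnight_case)
```

Note that the collision is guaranteed by the arithmetic of seven weekdays alone, not by the premise that the twins share a birthday, which is why the knowledge survives the midnight revelation.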

10.4  Counter-Closure Corrects Empiricist Bias

Children prior to the age of ten exhibit empiricist bias. One symptom of the bias is hasty agnosticism. If they cannot find empirical evidence bearing on a statement, they infer that they are not in a position to know. For instance, eight-year-olds were asked about a hidden single-colored poker chip: Is it true that 'Either the chip in my hand is yellow or it's not yellow'? Since they could not see the chip, they answered, 'I don't know' (Osherson & Markman, 1974–1975). Empiricist bias is generalized to others. Young children believe others can be kept ignorant simply by removing all empirical means of learning the answer.

Security breaches eventually stimulate their suspicion that there is a non-empirical source of knowledge. The suspicion is also triggered by the speed and ease with which others acquire knowledge. This second stimulus corresponds to a second symptom of empiricist bias: inefficiency. Psychologists demonstrate this aspect of empiricist bias by posing problems that have a slow a posteriori solution and a quick a priori solution. Five-year-old children are oblivious to a priori solutions. When presented with a scene of boys and girls and asked 'Are there more boys than children?', they laboriously count the boys and girls. Children over the age of ten do not scrutinize the scene. They exploit the fact that any boy is a child and so answer no. With tutoring on the relation of class inclusion, a large percentage of the five-year-olds suddenly improve. They equal the performance of ten-year-olds. The enlightened five-year-olds are delighted to catch up. Children are competitive.

If peers are answering faster than permitted by empirical methods, ten-year-old children suspect that knowledge has been deduced. They get into the habit of checking whether a question can be answered deductively. These retrospective inquiries into what others may have learned by deduction set the stage for their own attempts to learn by deduction.
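The class-inclusion shortcut can be made explicit. A sketch with invented data (the particular names and counts are mine): because the boys are a subset of the children, the comparison is settled without inspecting the scene.

```python
# A hypothetical scene of boys and girls.
boys = {"Tom", "Sam", "Max"}
girls = {"Ann", "Bea"}
children = boys | girls

# The five-year-old's a posteriori method: count both groups.
assert not (len(boys) > len(children))

# The a priori method: any boy is a child, so the boys form a subset
# of the children, and a subset can never outnumber its superset.
assert boys <= children
# Hence the answer to 'Are there more boys than children?' is no,
# whatever the scene happens to contain.
```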

10.5  Counter-Closure Is Mildly Self-Defeating

Closure is a valid inference rule that yields meta-knowledge of deductive knowledge. Counter-closure yields meta-knowledge that others lack deductive knowledge. Despite paralleling the fertility of closure, counter-closure is invalid. Counter-closure underestimates how much can be learned by deduction. Indeed, each of counter-closure's three sub-requirements embodies a sub-underestimate of how much can be learned by deduction.

The Knowledge of Premise requirement is counter-exampled by Peter Klein's (2008) 'useful false beliefs'. A child who is told by his parents that Santa will give him a gift knows that someone will give him a gift. The conclusion is weaker than the premise. The mistaken extra content in the premise is an excrescence. Other errors are relevant but negligibly small. Still other errors are relevant and substantial but in a safe direction. Later, I introduce a new genre of counter-examples to the Knowledge of Premise requirement that does not involve useful errors. These counter-examples suggest that the process of deduction can justify the premise.

But for now, I redirect the forces Klein marshaled against Premise Knowledge to argue that Entailment Knowledge is counter-exampled by useful invalidities. One can learn by an invalid deduction because the inference rule errs only a little or in a safe direction or only in irrelevant circumstances. Ironically, counter-closure is itself an example of what it precludes – a useful invalidity.

Given counter-examples to (1) Premise Knowledge and (2) Entailment Knowledge, we should expect counter-examples to (3) Knowledge Sustenance. For Knowledge Sustenance presupposes that knowledge of the premise and its entailment of the conclusion need to be preserved. If knowledge was not needed to begin with, it need not be preserved. Knowledge of the premise and knowledge of the entailment would each be supererogatory. Loss of knowledge in the course of the deduction might be a loss of a luxury that was not needed to secure knowledge of the conclusion.

As an inference rule of epistemic logic, counter-closure has the same status as the following invalid but useful inference rules of epistemic logic:

KK: If one knows that p, then one knows that one knows that p.
Knowledge Agglomeration: If one knows that p and knows that q, then one knows that p and q.
Temporal Retention: If you knew and have not forgotten, then you now know.

These principles are blamed for causing paradoxes such as the sorites paradox, the preface paradox, and the surprise test paradox. The standard diagnosis is that the principles work so well (especially when 'know' is weakened to 'in a position to know') that we over-extend them to situations in which they fail.

Consider the surprising equality 0⁰ = 1. School children adopt the rule that multiplication is equivalent to repeated addition. The repetition rule was reinforced with experiments: adding three two-meter sticks yields a length of six meters. This was a precedent for adopting the rule that exponentiation is repeated multiplication. The rule generates the false expectation that 0⁰ = 0.

Saul Kripke (1982) asks how you know you are following the rule of addition rather than a rule such as quaddition (which only diverges from addition at some sum exceeding the largest of your past additions). The challenge is more persuasive when applied to exponentiation. Students who worry about whether they really follow the rule of exponentiation should reconsider their earlier confidence that they were multiplying. 'Multiplication is repeated addition' works when a fraction is the multiplicand and an integer is the multiplier. For instance, adding three half-meter sticks yields a length of 1.5 meters. But 'repeated addition' is an unintelligible command when the multiplier is a fraction, as in 1/2 × 1/3. One cannot add a half-meter stick to itself a third of a time.
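The schoolroom rules at issue can be written out as honest implementations (the function names are mine). Each works only where the repetition metaphor is intelligible, the empty product explains the surprising equality 0⁰ = 1, and Kripke's deviant rule is as easy to state as the standard one:

```python
def mul_by_repeated_addition(x, n):
    """'Multiplication is repeated addition': add n copies of x.
    Intelligible only when the multiplier n is a non-negative integer;
    one cannot add x to itself a third of a time."""
    if not (isinstance(n, int) and n >= 0):
        raise ValueError("repeated addition needs a non-negative integer multiplier")
    return sum(x for _ in range(n))

def pow_by_repeated_multiplication(x, n):
    """'Exponentiation is repeated multiplication': multiply n copies of x.
    The empty product is 1, so 0**0 == 1, against the expectation,
    generated by the repetition rule, that every power of 0 is 0."""
    if not (isinstance(n, int) and n >= 0):
        raise ValueError("repeated multiplication needs a non-negative integer exponent")
    result = 1
    for _ in range(n):
        result *= x
    return result

def quadd(x, y):
    """Kripke's quaddition: agrees with addition whenever both arguments
    are below 57, then diverges. Past performance alone cannot show
    which rule one has been following."""
    return x + y if x < 57 and y < 57 else 5

assert mul_by_repeated_addition(0.5, 3) == 1.5    # three half-meter sticks
assert pow_by_repeated_multiplication(0, 0) == 1  # the surprising equality
assert quadd(2, 3) == 2 + 3                       # indistinguishable so far
assert quadd(68, 57) == 5                         # quaddition, not addition
```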

10.6  Deductive Learning Despite Invalidity

Conscientious teachers try to switch their students to the valid rule for exponentiation (which models exponentiation as scaling rather than repetition). Yet these teachers do not re-grade past examinations that employed the invalid rule to get the true answer. Nor do the teachers refuse credit to students who, out of habit, persist with the invalid rule after they have been apprised of the correct rule. As long as the invalid rule is wielded in a hospitable environment, the teacher attributes knowledge to these creatures of habit. We should not abstain from an algorithm that fails only for questions that we are very unlikely to ask. But we should explore the limits of safe use.

Since counter-closure is a practical principle, its departures from validity are of practical significance. Specifically, secrets can be penetrated by deduction despite the discoverer lacking knowledge of his premise or knowledge of the entailment. The limits are also of theoretical significance. Rationalists and empiricists presuppose counter-closure. This leads rationalists to postulate extraordinary sources of knowledge. Empiricists debunk these exotic origins. The rationalists should respond by lowering their aspirations for the sources and raising their aspirations for the process of deduction. Closure is a sufficient condition for knowledge by deduction, not a necessary condition. Historically, knowledge by deduction has more depth than suggested by the flat blackboards of epistemic logicians. Just as others have argued that testimony and memory generate new knowledge, not just transmit it, I shall argue that deduction generates new knowledge.

10.7  The Platonic Emergence of Counter-Closure

In the Meno, a slave boy is discovered to have surprising knowledge of a geometrical theorem. How? Not by perception. That never works because there are no observable geometrical figures. Not by instruction from a teacher. Meno never educated his slave. Not from testimony. Sure, Socrates asked questions. But he never told the boy anything. The boy learned by deduction.

Counter-closure forbids Socrates from resting there. Deduction can only transmit knowledge from a premise! By a process of elimination, Socrates infers the boy had innate knowledge of a premise unavailable through perception, testimony, or a memory acquired after birth. Socrates concludes the memory was formed prior to the boy's terrestrial existence. This dialogue would have been more convincing, and amusing, if Socrates were ignorant of the theorem deduced by the boy. That would have immunized Socrates against the charge that he instilled rather than elicited knowledge (Weiss, 2001, pp. 14 and 77–126).

Counter-closure plausibly implies that if you lack any justification for the premises, deduction cannot make the heavy lift to knowledge of a conclusion. But counter-closure less plausibly implies that deduction cannot make the lighter lift from near knowledge of premises to knowledge of the conclusion. Ultimately, I argue that there are counter-examples of both sorts. But to isolate a clearer target for the counter-examples, I first show how their target became visible in the aftermath of Edmund Gettier's counter-example to Plato's definition of knowledge.

10.8  Gettier’s Footnote to Plato The definitions of philosophical terms (virtue, art, piety, justice) proposed in Plato’s dialogues are interesting. Almost none of Plato’s definitions achieve consensus. An important exception is the definition of knowledge stated in Meno 97e–98a. Socrates distinguishes knowledge from luckily true belief by requiring the belief to be tethered to the truth.3 Less metaphorically, knowledge is Justified True Belief. By 1963, the JTB definition had passed the test of time. Gettier (1963, footnote 1) doffs his cap to Plato. Most epistemologists follow suit. Alvin Plantinga, however, remarks on a dearth of explicit formulations of the definition in the history of philosophy. ‘According to the inherited lore of the epistemological tribe, the JTB [justified true belief] account enjoyed the status of epistemological orthodoxy until 1963, when it was shattered by Edmund Gettier…. Of course, there is an interesting historical irony here: it isn’t easy to find many really explicit statements of a JTB analysis of knowledge prior to Gettier. It is almost as if a distinguished critic created a tradition in the very act of destroying it’ (1993 pp. 6–7). Peeking through Plantinga’s wisecrack, I spy a performative interpretation of Georg Hegel’s adage that the owl of Minerva only flies at dusk. History enlightens retrospectively because the truth-makers for the past are accessible only in the future. A good critic avoids the straw man fallacy. An excellent critic improves on the original formulation. This optimized position becomes the definitive position. The critic aims to refute every variation of the position by refuting its strongest formulation. This exercise is simultaneously constructive and destructive. This is no more contradictory than demonstrating the power of an explosive by first maximizing the durability of the target (adding sandbags to the sacrificial bunker). 
Little wonder that our knowledge of past philosophies is often gleaned from adversaries.

10.9  The First Articulation of Counter-Closure

Edmund Gettier imagines two eccentric cases that fit JTB and yet fail to yield knowledge. Both feature beliefs that are coincidentally true. Both are unnatural inferences. Both violate H. P. Grice's maxim of quantity. Specifically, these under-inferences violate the upward submaxim 'Say as much as is appropriate'. Each of Gettier's curious cast of characters uses logic to gratuitously dilute what they justifiably believe. For instance, a man who justifiably believes Jones owns a Ford adds a random proposition to conclude 'Either Jones owns a Ford or Brown is in Barcelona'. In a signature double twist, 'Jones owns a Ford' is unluckily false but 'Brown is in Barcelona' is luckily true. So Gettier's 'random' reasoner winds up with justified true belief.

This random cluttering of one's reasoning makes the Gettier cases difficult to teach. Students fresh off the streets do not draw inferences merely because they can. Gettier's cases are more easily understood by logicians, computer programmers, and lawyers. They are already familiar with the dilutions needed to invoke a conditional permission that has a disjunctive antecedent.

Once students understand how Gettier's counter-examples show JTB is too broad, they repeat the history by seeking a fourth condition. The students welcome Michael Clark's (1963) extra clause: there must be no false steps leading up to the knower's conclusion. Clark is presupposing a counter-closure principle that is not limited to deduction: you cannot learn the conclusion from an argument unless you know the conjunction of the premises. Any false premise prevents knowledge of all the premises.

J. Saunders and N. Champawat (1964) propose a counter-example to this principle for induction. Consider an induction with many true premises and a single falsehood. Someone testing Surefire matches believes he has a sample of 1000 Surefire matches and has successfully ignited them all. The match tester reasons from the premise 'The sample has 1000 Surefire matches and all lit' to the conclusion that the next Surefire match will light. Unknown to him, the sample actually contained 1001 matches. Contrary to Clark's counter-closure principle, the false premise does not prevent the tester from knowing that the next Surefire match will light. All induction goes beyond the evidence. Once we grant that there can be knowledge by induction, we are committed to tolerating knowledge from premises that have a little less support than the reasoner thought.

Ted Warfield (2005, pp. 407–408) argues that counter-closure also fails for deduction: a lecturer carefully counts 53 members in his audience and concludes that his supply of 100 handouts is enough. There are actually 52 people because he double counted one individual. The error is too small to undercut the lecturer's knowledge. The error is in the safe direction. Therefore, the lecturer has deduced knowledge from an unknown, indeed false, premise.

Defenders of counter-closure suggest that the miscounting lecturer relied on a proxy premise, namely, 'There are approximately 53 members of the audience'. This defense backfires if the approximation figures as a deductive conclusion from the false lemma 'There are exactly 53 members of the audience'. For the inference from an exact falsehood to a true approximation would itself be a counter-example to counter-closure.
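Warfield's 'safe direction' point can be given a margin check. A sketch in my own framing: the lecturer's conclusion that 100 handouts suffice is insensitive to his one-person overcount, since every headcount between the true figure and the supply licenses the same true conclusion.

```python
HANDOUTS = 100
TRUE_AUDIENCE = 52   # the actual headcount
COUNTED = 53         # one member double-counted

def enough_handouts(count, supply=HANDOUTS):
    """The lecturer's deduction: the supply covers the counted audience."""
    return supply >= count

assert enough_handouts(COUNTED)        # inferred from the false premise
assert enough_handouts(TRUE_AUDIENCE)  # and true of the actual audience

# The error points in the safe direction: any overcount up to the size
# of the supply still yields the same true conclusion, so the small
# slip cannot mislead the lecturer about whether he has enough.
assert all(enough_handouts(c) for c in range(TRUE_AUDIENCE, HANDOUTS + 1))
```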

The defender of counter-closure must instead portray the approximation step as directly inferred. Furthermore, the inference must not be merely potential. It must actually occur despite the reasoner having no awareness of it. Warfield can supplement his hypothetical with details which make the existence of an unconscious inference implausible. If the lecturer were the sort of person who could just glance at a crowd and directly infer an accurate estimate, then he would not have bothered counting. Perhaps the lecturer could develop this crowd estimation skill with practice. But until then, the only path available to him is to first specifically count and then approximate on that basis.

Others improve upon Warfield's refutation of the Knowledge of Premise requirement (Luzzi, 2019, Chapter 2). I shall merely integrate these refinements into my resumed attack on the Knowledge of Entailment requirement.

10.10  Discovery Despite Invalid Inference

If 'useful falsehoods' are counter-examples to counter-closure, then so are 'useful invalidities'. For axioms and inference rules are interchangeable. This fungibility is dramatized by natural deduction systems that contain only inference rules. One can convert useful falsehoods by translation into a natural deduction system. The line between inference rule and premise is flexible enough to reconstruct many minor flaws in reasoning either as insignificant fallacies or as valid inferences from insignificantly false premises.

When standards of proof rose in the nineteenth century, David Hilbert corrected some slightly invalid proofs by Euclid. This tinkering did not make Hilbert the discoverer of the theorems. They remain Euclid's discoveries because the fallacies are not significant enough to have rendered Euclid ignorant of the conclusions (Sorensen, 2016, pp. 141–143). In contrast, Aristarchus' argument for the circumference of the earth is fatally invalid because it contains two errors that luckily cancel each other out. Johannes Kepler benefitted from a similar cancelation of errors in his calculation of Mars's orbit. Historians retract knowledge attributions for these deductions. If historians believe there is a selection effect for lucky cancelations, then they suspect a systematic over-attribution of knowledge to past thinkers. When a significant gap was exposed in Andrew Wiles's 1993 attempted proof of Fermat's Last Theorem, the date of Wiles's discovery was postdated to coincide with his correction in 1994.

A defender of counter-closure will defend the Knowledge of Entailment requirement against useful invalidities in the same way he defends the Knowledge of Premise requirement against useful falsehoods: with proxies. His proxy for Euclid will be a sound argument that was known by Euclid (or perhaps should have been known by Euclid). If there is no psychologically plausible proxy, the defender of counter-closure will deny there was knowledge by deduction. The defender of counter-closure then treats Euclid the same way historians treat Aristarchus and Kepler. That is too revisionary. The honor roll of discoverers cannot be rewritten every time we raise the standard of proof. Hilbert cannot replace Euclid simply because unforeseeable standards were first satisfied by Hilbert. In any case, Aristarchus and Kepler have lost priority: their slips are fatal by the standards of their own era.

There is historical precedent for allowing knowledge from false premises and from invalid inference rules. Inspired by the epistemic fertility of Kant's regulative ideals, Hans Vaihinger (1925) catalogued productive falsehoods in The Philosophy of 'As If'. False generalizations can have systematically true consequences and can generate them more efficiently and reliably than truths.4 Instead of defending your foundations as true, defend them as fertile. This is how Milton Friedman defends rational choice theory: 'theory is to be judged by its predictive power for the class of phenomena which it is intended to explain' (1953, p. 8). People do not really obey the axioms of the idealization. But if you pretend they do, you get reliable predictions about what they will choose.

10.11  Schopenhauer’s Derogation of Deduction Counter-closure’s pre-conditions for knowledge by deduction led Arthur Schopenhauer to deny that logic has practical value. Whereas the senses can provide knowledge, logic can only pass it on like an idle widow passing on money earned by a husband who worked himself to death. After all, validity is conditional; the conclusion is true if the premises are true. Consequently, ‘Reason is feminine in nature; it only gives after it has received. Of itself it has nothing but the empty forms of its operation’ (1819, p. 50). Schopenhauer is reacting against Gottfried Leibniz’s sweeping optimism about what can be proved deductively. For instance, Leibniz defended most arguments for God’s existence as perfectible as proofs. Schopenhauer was also reacting to Immanuel Kant’s appeals to the phenomenology of reasoning to refute hard determinism. Libertarians contend that the very process of deduction supports the conclusion that the thinker controls his thoughts. In contrast to mere association or triggering of memories, the agent takes the premise as a reason for the conclusion. He is thereby responsible for his inferences. The reasoner can be justifiably proud. This explains the temptation to rationalize association as inference. Conversely, embarrassing fallacies will be reclassified as beliefs that arose without inference.

172  Roy Sorensen

10.12  Knowledge from the Nature of Deduction

The libertarians are correct if deductions are evidence about the existence and nature of deduction:

Someone competently deduces a conclusion.
∴ It is possible that someone competently deduces a conclusion.

Someone who falls a little short of knowing the premise could close the gap by competently deducing the conclusion. Upon reaching the conclusion, he has picked up some justification from his own performance. This is enough to fill the quota of justification needed for knowledge. Deduction has provided knowledge from non-knowledge – without any falsehoods to distract us from the alchemical achievement.

More philosophically, someone who already built a strong but not yet decisive case against epiphenomenalism might come to know that epiphenomenalism is false by the control he exercised in deducing that epiphenomenalism is false:

If I competently deduce a conclusion, then some mental events are causes.
If some mental events are causes, epiphenomenalism is false.
I competently deduce a conclusion.
∴ Epiphenomenalism is false.

The deduction itself is evidence for the third premise. Similar arguments can be mounted against other doctrines that imply our impotence, such as fatalism.

This introspective objection to counter-closure is compatible with the proposition that deduction was only designed to transmit knowledge. Telephones were not designed to transmit any more information than the caller provided. Nevertheless, those receiving calls soon exploited the faintness of the transmission to infer that the caller was calling from a long distance. What was informative by accident inspired informativeness by design. Contemporary telephones are engineered to collect collateral information about the source of the call. Similarly, the objection to counter-closure is compatible with the proposition that there was only natural selection for the transmissive role of deduction.

10.13  Schopenhauer’s History of Logic When posted in India, the British judge and linguist William ‘Oriental’ Jones (1746–1794) made the eerie discovery that he could guess Sanskrit words from his knowledge of Greek and Latin. By verifying a conclusion, he gained evidence for his premises.

Bertrand Russell applies the same reversal to mathematics. Despite advocating logicism, Russell maintained that mathematicians justify their belief in axioms from their belief in the implied theorems (Godwyn & Irvine, 2003). Ideally, the justification would flow from premises to conclusion. But given our limits, conclusions justify premises.

Jones went on from language to logic. He reports that, in the Punjab and Persian provinces of India, historians believe that deductive logic began in India. Callisthenes, as Alexander the Great's historian, learned logic from the Brahmins. Callisthenes transmitted their theory of the syllogism back to his uncle Aristotle (Jones, 1794, vol. IV, p. 163). Jones trusts his Indian sources. He reports hearing several syllogisms perfectly marshaled in oral debate by the Brahmins. He reports reading such syllogisms in ancient Sanskrit.

This caught Schopenhauer's eye. He is generally eager to re-assign credit from Western philosophers to the sages of India. But in The World as Will and Representation, he assures readers that the sophistication of Aristotle's Greek predecessors makes the story about Callisthenes implausible (1819, p. 48). Schopenhauer further agrees with Kant: Aristotle perfected this process of reasoning from general to particular. There could be no further improvement in deduction (aside from technical finishes).

Nevertheless, Schopenhauer credits Francis Bacon with standing Aristotle on his head. Not content to reason from general to particular, Bacon reasons from particular to general. Induction often yields knowledge of the conclusion that is not present in the premises. Induction is generative, not just transmissive. Induction explains how science increases knowledge of the world.

Thanks to Aristotle and his pedantic polishers, the shiny finish of deduction is dazzling. Schopenhauer slings mud at deduction to reduce the glare.
The improved lighting makes discernible the generative advantage of induction. Now we understand why the medieval philosophers made no progress in understanding the world. The scholastics neglected induction in favor of Aristotle's syllogism!

Schopenhauer's history of logic is a four-note melody:

i. Deduction is reasoning from general to particular.
ii. Induction is reversed deduction.
iii. Induction is generative, while deduction is purely transmissive.
iv. Science progresses solely by induction.

Schopenhauer’s sublime and witty works are accessible to a large audience. Authors in this audience are in turn accessible to other authors with gifts of dissemination. This aesthetic contagion gave Schopenhauer an outsized influence. The four-note melody plays in the background of most nineteenth-­ century voices. There were some variations in the instrumentation.

Here is how 'the father of scientific history', Henry Thomas Buckle, renders the thesis that induction reverses deduction:

    … induction, proceeds from smaller to the greater; deduction from the greater to the smaller. Induction is from particulars to generals, and from the sense to the Ideas; deduction is from general to particulars, and from the ideas to the senses. By induction, we rise from the concrete to the abstract; by deduction we descend from the abstract to the concrete. (1861, p. 441)

Contrary to Buckle, one can deduce abstract entities from concrete entities: I exist, therefore, {I} exist. The argument is a priori, but the premise and conclusion are contingent; a synthetic a priori proof of an abstract entity! Buckle has strayed into an interesting shooting range for rationalists. But they cannot get into much of a skirmish with Buckle. Buckle did not think of himself as deviating substantively from Schopenhauer's reversal doctrine.

John Stuart Mill initially resisted Schopenhauer's conclusion that deduction is useless. But Mill eventually acquiesced to Schopenhauer's demotion of deduction. Indeed, Mill sharpened two prongs of Schopenhauer's rusty dilemma to get this pithy pair of alternatives: If the deduction is valid, it is circular. If it is invalid, the argument is fallacious. Either way, the deduction cannot expand knowledge. But instead of retiring syllogisms, Mill gave them various odd jobs and make-work under the job description 'systematizing what is already known'.

10.14  Mini-Meno

Imagine a descendant of Meno. Mini is a lazy but logical student back in 1822. She is one of the few students choosing to attend Arthur Schopenhauer's lectures rather than Georg Hegel's. She objects to Schopenhauer's principle:

D: All deductive arguments reason from general to particular.

Schopenhauer cites the doctrine that is now enshrined in the Oxford English Dictionary's definition of 'deduction': 'The process of deducing or drawing a conclusion from a principle already known or assumed; spec. in Logic, inference by reasoning from generals to particulars; opposed to induction'. When Mini asks for the definition of 'induction', Schopenhauer provides the definition now endorsed by the same dictionary: 'The process of inferring a general law or principle from the observation of particular instances (opposed to deduction)'. These synchronized definitions yield a tidy reversal pattern: Deduction is reasoning from general to particular, while induction is reasoning from particular to general.5

Having responded to Mini's request for evidence in favor of D, Schopenhauer asks for her evidence against D. Lazy Mini has not foreseen the battery of counter-examples to D that will eventually populate Brian Skyrms' Choice and Chance (1966, pp. 13–15). Mini's objection is instead based on her vague impression that the distinction between general and particular propositions is irrelevant to deduction. Consequently, her degree of justification never sufficed for knowledge that some deductive arguments do not reason from general to particular.

Nevertheless, Mini buttresses her belief with the following demonstration: ~D, therefore, ~D. Her demonstration exploits two features of the '~p, therefore, ~p' argument form. First, all instances are valid arguments. Second, the identity of the premise and the conclusion precludes any difference in generality. Consequently, her deduction cannot be proceeding from a general premise to a particular conclusion.

Her single premise argument is a new counter-example to counter-closure: Mini comes to know ~D by competently deducing it from ~D itself (where ~D was not known prior to competently deducing it).
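Mini's two observations about her argument form can be checked mechanically. The sketch below is our illustration, not Sorensen's (the function `valid` and its interface are invented for the purpose): it brute-forces truth tables, on which an argument form is valid just in case no assignment makes every premise true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion, variables):
    """Brute-force truth-table test of validity.

    `premises` and `conclusion` map an assignment (a dict from variable
    names to booleans) to a truth value.
    """
    for values in product([True, False], repeat=len(variables)):
        row = dict(zip(variables, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # found a counter-model
    return True

# Mini's demonstration: ~D, therefore, ~D. Premise and conclusion are
# the very same function, so no difference in generality is possible.
not_d = lambda row: not row["D"]
print(valid([not_d], not_d, ["D"]))  # True: every 'p, therefore, p' instance is valid

# Contrast a clearly invalid form: D, therefore, E.
print(valid([lambda r: r["D"]], lambda r: r["E"], ["D", "E"]))  # False
```

Because the same function object is passed as both premise and conclusion, the check also makes vivid why the two cannot differ in generality.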

10.15  Counter-Closure and Myths about Epistemic Priority

Since the premise and conclusion of Mini's argument are identical, it is impossible for the premise to be known before the conclusion or the conclusion before the premise. They are known simultaneously by virtue of the deduction as a whole. When epistemic improvement is by virtue of coherence, simultaneous improvement is the rule. When a conclusion fits the premises like the final piece of a jig-saw puzzle, all the pieces draw support from each other.

In the deduction, 'There are two tokens of this sentence type. ∴ There are two tokens of this sentence type', the utterance of the conclusion causes the premise to be known simultaneously with the conclusion (by virtue of being the first sentence token to reach the quota set by the two sentences). The premise was not known earlier or later than the conclusion.

Epistemic priority does not imply temporal priority for the same reason that causal priority does not imply temporal priority. Simultaneous causation and backward causation are logically possible. Some time travel stories illustrate the possibility of causal loops. A time traveler visiting a period prior to his origin draws conclusions before he knows the premises.

10.16  Supplementary Arguments

Lazy Mini goes on to debunk other myths about deduction. Some say that a valid argument must have a positive premise. Mini counters, 'Not all valid arguments have a positive premise, therefore, not all valid arguments have a positive premise'.

Mini has the experimentalist's interest in control. Thrifty arguments that compel the same proposition to serve as premise and conclusion eliminate all differences between the premise and the conclusion. Knowledge of this identity allows Mini to challenge any necessary condition for validity that requires a difference between the conclusion and premise.

Mini's identity strategy also side-steps demarcation issues. Consider the disjunction of a general proposition and a particular proposition. Is that disjunction general or particular? The question does not need to be answered.

The 'P, therefore, P' strategy also allows Mini to shorten some refutations of logical myths. For instance, some claim that a valid argument must synthesize information from separate premises. A common rebuttal shows the validity of immediate inferences with Venn diagrams. Here is a quicker demonstration: 'Some single premise arguments are valid, therefore, some single premise arguments are valid'.

10.17  Double Deduction

Some of the lessons we learn solely by the deductive process involve two competent deductions. Consider the myth that there is a unique validating form for any valid argument. Mini pluralistically counters:

Some arguments that instantiate two valid forms are arguments that instantiate two valid forms.
Therefore, some arguments that instantiate two valid forms are arguments that instantiate two valid forms.

In addition to being an instance of 'p, therefore, p', the argument is valid by conversion: Some A is B implies some B is A. Here Mini learns the conclusion by deducing the conclusion in distinct ways.

The skeptic may object that Mini must rely on memory. She must remember her first 'p, therefore, p' inference and then her second 'Some A is B, therefore, some B is A' inference. Some mathematicians take a whole semester to prove a theorem. By the time the professor finishes, the students have forgotten the premises. They have to repeat the course. They are like the painter of a bridge that is so large that the entrance needs re-painting by the time the exit has been painted. Nevertheless, mathematicians still say that large proofs are a priori, even proofs that require a division of labor between mathematicians or cognitive off-loading to computing devices (whose reliability is based on physics and engineering). They resist the conclusion that mathematics is partly historical or psychological.

Most theories of a priori reasoning defend the mathematical practice of not counting these dependencies as revealing that mathematics is a posteriori. Happily, Mini's argument is so short that she can keep both deductions in mind.

10.18  Deductions that Alter the Status Quo

There are some completed deductions. Therefore, there are some completed deductions.

Completing the deduction adds to the number of completed deductions. If that number had previously been zero, the completion of the deduction contributes decisively to its soundness.

One's own calculation can be evidence that one is a competent calculator. After the mathematician Stanislaw Ulam had brain surgery, his worried surgeon asked for the sum of 8 plus 13 (Ulam, 1976, p. 177). Ulam was embarrassed by the question. Alarmed by Ulam's discomforted silence, the surgeon asked for the square root of 20. Ulam answered: 'About 4.4'. The surgeon smiled. 'Isn't it?' asked Ulam. 'I don't know', confessed the relieved surgeon.

A performative verb is used to bring a state of affairs into existence. When a teacher says 'Class dismissed', class is thereby dismissed. Students cannot dismiss class in this way. One test of whether a verb V is performative is whether 'I hereby V' is felicitous. This inspires a proof:

Any verb that passes the hereby test is a performative verb.
'Deduce' is a verb that passes the hereby test.
I hereby deduce that 'deduce' is a performative verb.

The deduction creates a truth-maker that provides some ground for the second premise. So, the conclusion is made more probable by the deduction as a speech act. Learning by doing! The performative theory of Descartes' cogito has the same pragmatic rationale.

10.19  Preferred Derivations

Reflection on inference rules suggests a counter-example to counter-closure that is entirely rule driven:

Either there are Krauts or not.
Therefore, either there are Germans or not.

The reasoner finds the premise repugnant. He is also unsure whether this is a 'P, therefore, P' argument. So, he derives the conclusion solely with inference rules. The tautologous nature of the conclusion suffices to establish the validity of the argument.
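The point that a tautologous conclusion suffices for validity can also be checked semantically. In this sketch (ours, not the chapter's; `tautology` is an invented helper), the conclusion is true on every row of the truth table, so no assignment can make the premise true and the conclusion false.

```python
from itertools import product

def tautology(formula, variables):
    """True iff `formula` holds under every assignment of truth values."""
    return all(formula(dict(zip(variables, vals)))
               for vals in product([True, False], repeat=len(variables)))

# 'Either there are Germans or not' instantiates the law of excluded
# middle, hence holds on every row of the truth table.
excluded_middle = lambda row: row["G"] or not row["G"]
print(tautology(excluded_middle, ["G"]))  # True

# A contingent sentence, by contrast, fails on some row.
print(tautology(lambda row: row["G"], ["G"]))  # False
```

Since the conclusion can never be false, the argument is valid whatever the premise says.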

10.20  Endogenous Information

A genuine counter-example to counter-closure must have the reasoner believe the conclusion solely on the basis of inference from the premise. Thanks to the '~p, therefore, ~p' form of Mini's reasoning, there is nothing that sustains her belief in her conclusion beyond her deduction of the conclusion from the premise.

Admittedly, Mini must be aware of the nature of her deduction. Many exogenous facts about her deduction, such as its being done in Berlin, are excluded by the proviso 'solely by deduction'. But this information is endogenous to her deduction. It is not as if she reasoned, 'Some arguments are expressed in italics, therefore, some arguments are expressed in italics'. That inference relies on visual inspection of the sentence tokens expressing the argument – information exogenous to deduction.6 Mini's inference is more like 'Some arguments are composed solely of existential generalizations, therefore, some arguments are composed solely of existential generalizations'.

Is Mini too clever? Someone who deduces ~D, therefore, ~D might not realize that this refutes D. True enough. But counter-closure entails a universal preclusion: Anyone ignorant of the premise could not gain knowledge of the conclusion by deduction. Competent deduction encompasses recognition of the logical form of the premise and conclusion. This recognition sometimes suffices for knowledge of the conclusion. On other occasions, the recognition merely provides a clue:

Someone knows that a contingent truth can entail a necessary truth.
Therefore, a contingent truth can entail a necessary truth.

The argument begs the question. But it is a near miss of an argument that does not. As a salvageable error, the question-begging argument points the way to knowledge of the conclusion. An alert student will notice that if the conclusion is true, then it is a necessary truth.
If the contingent premise is true, then it illustrates the conclusion by the principle that knowledge entails truth.

Mini's original opposition to D may have been unjustified. She might have been a counter-suggestible student prone to reflexively gainsaying her teachers. This absence of initial justification is compatible with her learning all she needs during the process of deduction. To learn the conclusion, she needs only to recognize that her competent deduction entails the conclusion. This implies that counter-closure cannot be salvaged by lowering the requirement from knowledge to justification, as proposed by Luzzi (2019, sec. 4.3).

The refutation of justification counter-closure offers hope to those wishing to escape the infinite regress of justification. If the deductive process itself creates justification, anti-skeptics might avoid ordering from the unpalatable menu of unjustified justifiers, circular justifications, and epistemic anarchism.

Consider deductions that merely suppose rather than assert premises (Murphy, 2013). They justify the conclusion without any help from a justified premise. That is why reductio ad absurdum and conditional proof are such powerful replies to the skeptic. There is no asserted premise for the skeptic to challenge. If the skeptic instead resists the inference rule of reductio ad absurdum or conditional proof, he undermines his standing as a rational adversary. His challenges can be ignored as illogical.

This type of ignoring differs from the stonewalling of Thomas Reid (1764, vi, p. 5). Reid relies on the false meta-premise that all cogent arguments require premises. Reid also relies on the false meta-premise that the skeptic will not grant any premises. This over-simplifies the skeptic as a stingy interlocutor. Nimble skeptics concede as much common ground as possible – at least for the sake of argument. Instead of offering maximal resistance to your premises, they resist only as much as needed to undercut your knowledge claim. This makes them enlightening interlocutors rather than uncooperative conversationalists.

Counter-examples to counter-closure are good news for the Socratic method of inquiry. In the early dialogues, Socrates aims to relieve his ignorance by teasing out consequences. Elenchus purports to be 'a prolonged cross-examination' that refutes the opponent's original thesis by getting him to draw from it, by means of a series of questions and answers, a consequence that contradicts it.
This is a logically valid procedure, for it corresponds to the logical law 'if p implies not-p, then not-p is true; that is, p is false' (Hall, 2006, p. 57). Illustration: If 'There is no truth' implies that there is some truth, then 'There is some truth' is true and 'There is no truth' is false.

A fragment of Aristotle's Protrepticus shows how the method is self-supporting: 'If we ought to philosophise, then we ought to philosophise; and if we ought not to philosophise, then we ought to philosophise (i.e. in order to justify this view); in any case, therefore, we ought to philosophise'. This is a constructive dilemma in which the disjunction is a tautology – and so can be deleted from the argument without affecting validity (Kneale, 1957). One of the horns of the dilemma is also a tautology. So it can be deleted as well. The residue has the 'miraculous consequence' form 'If not-p, then p, therefore p'.

The Meno is a disruptive dialogue that puts Socrates on the defensive. If Socrates knows nothing, how can he recognize the correct answer even if it were presented? Socrates saves Socratic inquiry by steeply compromising on Socratic ignorance. Instead of everybody being ignorant, everybody is omniscient – though very forgetful. In an earlier existence, the slave boy dwelt among the forms, thereby knowing them by direct acquaintance. The trauma of birth caused amnesia. Socrates' role is to furnish smelling salts to awaken memories.

The details of the doctrine of reminiscence were rejected by Western philosophers – who lack India's enthusiasm for reincarnation. Nevertheless, the outline was preserved. In the mid-twentieth century, ordinary language philosophers attributed to speakers implicit knowledge of grammar that could be transformed into explicit analytic truths. Loyalty to counter-closure motivates the retention of the outline.

In axiom systems, theorems are justified by a combination of axioms and inference rules. There is a tradeoff: The more axioms one has, the fewer inference rules one needs. There must be some inference rules for there to be a theorem derivation. But there need not be any axioms, because they can be replaced with zero-premise inference rules. Natural deduction systems exploit the fact that inference rules can shoulder the entire burden of proof. Logical truths are those propositions that can be deduced from the empty set of propositions. This purely rule-based proof is an analogue of the knowledge produced solely by competent deduction.

Counter-closure has the allure of 'Garbage in, garbage out'. But the principle is too pessimistic (as it would falsely preclude recycling and error-correcting algorithms such as the Hamming code). Deduction can add justification beyond what is present in the premise. This permits a return to pre-Menoan innocence.
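The Hamming code really does defeat a strict 'garbage in, garbage out' principle: a corrupted input can be restored on the way out. Here is a minimal Hamming(7,4) sketch (our illustration, following the standard textbook bit layout; nothing here is from the chapter):

```python
def encode(d):
    """Four data bits -> seven-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Locate and repair a single flipped bit via the parity syndrome."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 1-indexed position of the error, 0 if none
    if pos:
        c[pos - 1] ^= 1
    return c

word = encode([1, 0, 1, 1])
garbled = list(word)
garbled[2] ^= 1                  # garbage in: one bit flipped in transit
assert correct(garbled) == word  # the error is located and repaired
```

Any single-bit corruption of the seven-bit word is detected and reversed, so the output can be better than the input.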

Notes

1. On March 19, 2021, an earlier version of this chapter was discussed by the expert contributors to this anthology. These stage 4 reasoners, under the supervision of the editor, added to the stock of objections and suggestions earlier amassed by the Logic and Metaphysics group at the University of St. Andrews in Scotland.
2. Any reason giving raises compliance for small requests. Large requests trigger scrutiny of the reason; merely re-wording the conclusion as a premise is less effective (Langer et al., 1978).
3. The definition appears more tentatively in Theaetetus 201. Robert K. Shope (1983, pp. 12–19) discusses the possibility that Plato proposes tethering as only a sufficient condition for knowledge rather than a definition.
4. Peter Klein (2008) regards knowledge based on falsehoods as second rate. Catherine Elgin (2019) objects that Klein's ranking is based on narrow criteria. A simple, fecund falsehood has more epistemic merit than a complicated, sterile truth.
5. Other dictionaries follow the lead of the Oxford English Dictionary – especially science dictionaries. D is an unkillable myth. If you have not been taught it, your students will try to teach it to you.

6. In an interesting contrast, some reliance on perception is permitted in the calculation of a priori warrant. Speech perception is needed to judge that a statement is a tautology.

References

Arkes, H. R., & Ayton, P. (1999). The sunk cost and Concorde effects: Are humans less rational than lower animals? Psychological Bulletin, 125(5), 591–600.
Buckle, H. T. (1861). History of civilization in England. D. Appleton.
Clark, M. (1963). Knowledge and grounds: A comment on Mr. Gettier's paper. Analysis, 24(2), 46–48.
De Neys, W., & Schaeken, W. (2007). When people are more logical under cognitive load: Dual task impact on scalar implicature. Experimental Psychology, 54, 128–133.
Dretske, F. (1970). Epistemic operators. The Journal of Philosophy, 67, 1007–1023.
Elgin, C. (2019). Epistemically useful falsehoods. In B. Fitelson (Ed.), Themes from Klein. Synthese Library.
Friedman, M. (1953). The methodology of positive economics. In Essays in positive economics. University of Chicago Press.
Gettier, E. L. (1963). Is justified true belief knowledge? Analysis, 23(6), 121–123.
Godwyn, M., & Irvine, A. (2003). Bertrand Russell's logicism. In N. Griffin (Ed.), The Cambridge companion to Bertrand Russell (pp. 171–202). Cambridge University Press.
Hall, R. (2006). Dialectic. In Encyclopedia of philosophy. Macmillan.
Jones, W. (1794). On the philosophy of the Asiatics. Asiatic Researches.
Klein, P. (2008). Useful false beliefs. In Q. Smith (Ed.), Epistemology: New essays (pp. 25–61). Oxford University Press.
Kneale, W. (1957). Aristotle and the consequentia mirabilis. The Journal of Hellenic Studies, 77(1), 62–66.
Kripke, S. (1982). Wittgenstein on rules and private language. Blackwell.
Langer, E., Blank, A., & Chanowitz, B. (1978). The mindlessness of ostensibly thoughtful action: The role of 'placebic' information in interpersonal interaction. Journal of Personality and Social Psychology, 36(6), 635–642.
Luzzi, F. (2019). Knowledge from non-knowledge: Inference, testimony and memory. Cambridge University Press.
Moshman, D. (2015). Epistemic cognition and psychological development. Psychology Press.
Murphy, P. (2013). Another blow to knowledge from knowledge. Logos & Episteme, 4, 311–317.
Noveck, I. A. (2001). When children are more logical than adults: Experimental investigations of scalar implicature. Cognition, 78(2), 165–188.
Osherson, D. N., & Markman, E. (1974–1975). Language and the ability to evaluate contradictions and tautologies. Cognition, 3(3), 213–226.
Pillow, B. H. (2002). Children's and adults' evaluation of the certainty of deductive inferences, inductive inferences, and guesses. Child Development, 73, 779–792.
Plantinga, A. (1993). Warrant: The current debate. Oxford University Press.
Quine, W. V. O. (1970). Philosophy of logic. Harvard University Press.
Reid, T. (1764). An inquiry into the human mind: On the principles of common sense (D. R. Brooks, Ed.). The Pennsylvania State University Press.
Saunders, J. T., & Champawat, N. (1964). Mr. Clark's definition of 'knowledge'. Analysis, 21(1), 8–9.
Schopenhauer, A. (1819). The world as will and representation (Vol. 2) (E. F. J. Payne, Trans.). The Falcon Wing Press.
Schopenhauer, A. (1851). Parerga and paralipomena (E. F. J. Payne, Trans.). Oxford Clarendon Press.
Shope, R. K. (1983). The analysis of knowledge. Princeton University Press.
Skyrms, B. (1966). Choice and chance. Dickenson Publishing.
Sorensen, R. (2016). Fugu for logicians. Philosophy and Phenomenological Research, 92(1), 131–144.
Ulam, S. (1976). Adventures of a mathematician. University of California Press.
Vaihinger, H. (1925). The philosophy of 'as if': A system of the theoretical, practical and religious fictions of mankind (C. K. Ogden, Trans.). Harcourt, Brace & Co.
Warfield, T. A. (2005). Knowledge from falsehood. Philosophical Perspectives, 19(1), 405–416.
Weiss, R. (2001). Virtue in the cave: Moral inquiry in Plato's Meno. Oxford University Press.

11  Inferential Knowledge, Counter Closure, and Cognition

Michael Blome-Tillmann and Brian Ball

11.1  Introduction

It has been argued that it is possible, in certain circumstances, for an agent to come to know something by inference from premises that are not themselves known—perhaps because one or more of them is false. For instance, according to Warfield (2005), one might come to know that one has sufficiently many handouts to distribute to the audience members at one's talk if one miscounts the latter slightly, provided the excess number of the former is sufficiently great—in such a case, he claims, we are able to obtain knowledge from falsehood (KFF). However, we have elsewhere upheld the principle of counter closure for knowledge (Ball & Blome-Tillmann, 2014), according to which, if a subject knows a proposition on the basis of competent deduction from one or more premises, the subject must also know those premises. In cases like that of Warfield's handouts, we do not dispute that the subject comes to know the conclusion—but we contend that this piece of knowledge is secured on the basis of inference from other tacitly, or subconsciously, known premises, rather than from those that are explicitly and consciously considered, including the crucial falsehood.

In this chapter, we further develop our view. We begin by relating it to the recent literature from cognitive psychology, showing that our appeal to tacit belief and implicit reasoning is quite uncontroversial. We then show how this empirically grounded perspective can be applied in certain problem cases, thereby responding to objections raised by Luzzi (2019). Next, we discuss an interesting example due to Sorensen (this volume). We conclude with some broader reflections on the debate.

DOI: 10.4324/9781003118701-17

11.2  Inference and Implicit Reasoning

What is inference? Recent theoretical characterizations vary widely. For instance, according to Ludwig and Munroe (2019, p. 20), inference involves "a transition from one set of propositional attitudes (e.g., beliefs, intentions, suppositions) to another"—though not every such transition is an inference; crucially, the transition must be—or at least be capable of being brought—under the agent's conscious control. Thus, on their view, inference is a personal-level act (involving the propositional attitudes of the subject) that is potentially conscious. By contrast, Quilty-Dunn and Mandelbaum (2018, 2019) allow that inference may be unconscious and even sub-personal (e.g., taking place within the visual system), but at the same time, they insist it is a formal, rule-governed transition. This further requirement strikes us as potentially disastrous: the central lesson of Goodman's (1955) new riddle is that no formal logic of induction is possible; yet we do not want to preclude that some inference is probabilistic in character!

We prefer a characterization of inference on which it is a transition between content-bearing states that is sensitive to rational relations between those contents, however that sensitivity is achieved. Thus, our view allows inference to be tacit/subconscious without insisting it is formal, and/or that it involves rule-following. Ultimately, though, it does not matter to us whether such transitions are properly called "inferences"—we can set such terminological issues to one side. What matters is that transitions of the kind in question (a) are widely appealed to in the cognitive sciences and (b) are what underpin knowledge despite falsehood (KDF) in our view. In the next section, we illustrate claim (b) in connection with some cases in the literature on knowledge from/despite falsehood, but we begin, in the current section, by providing evidence of claim (a).

As Rescorla (2019) notes, the notion of unconscious inference has been appealed to in perceptual psychology since it was introduced by Helmholtz in the late 19th century.
More recently, specifically Bayesian models of such inference have been widely deployed, not only in the science of perception, but also in relation to phenomena that are clearly cognitive in character (cf. Rescorla, 2019, p. 41). In short, Bayesian cognitive psychologists hypothesize tacit or subconscious reasoning.1

Another tradition of theorizing within recent cognitive psychology has been the two-systems approach, popularized as Thinking, Fast and Slow by Kahneman (2011). System 1 thinking is characterized as fast, automatic, and unconscious, while System 2 thinking is described as slow, voluntary, and conscious. Like Bayesianism, two-systems theory has been widely deployed: for instance, in debates on mindreading (Apperly & Butterfill, 2009), delusion (Bongiorno & Bortolotti, 2019), numerical cognition (Graziano, 2018), and elsewhere—with the obvious consequence that unconscious inference has been postulated in this wide range of domains.2

Of course, we are not committed to any particular theory of implicit reasoning—thus, we need not endorse either Bayesianism or two-systems theory specifically. We only rely on the more general claim that people engage in specifically cognitive processing that involves transitions between attitudinal states manifesting rational sensitivity to their informational content and that is nevertheless unconscious. And there is a broad consensus in the literature on the existence of implicit reasoning of the kind we rely on. For instance, prominent critics of two-systems theory, such as Kruglanski and Gigerenzer (2011), share the view that there is intuitive or implicit reasoning—they just account for it differently than two-systems theory does. According to them, there is a single, rule-governed system that underlies both intuitive and deliberate judgment. Our reliance on implicit or intuitive reasoning is thus far from controversial in the current literature in cognitive psychology.

This discussion of implicit reasoning allows us to substantiate the view we took previously, by specifying more clearly what we had in mind in saying that subjects in cases of KDF implicitly rely on known premises: their knowledge in these cases is supported, in our view, by implicit reasoning from tacitly known premises. In the next section, we respond to some recent objections to our view (due to Luzzi [2019]), thereby showing the advantage of our reliance on the existence of actual causal processes involving implicit beliefs (rather than dispositions toward possible beliefs and inferences).

11.3  Response to Luzzi

In his (2019) book, Knowledge from Non-Knowledge, Federico Luzzi discusses views, like ours, according to which there is KDF in cases such as that of Warfield's handouts, because the known conclusion is doxastically justified (or "epistemized," as Luzzi (2019, p. 10) puts it) by knowledge of a true "proxy premise" that is "in the neighbourhood"3 of the consciously and explicitly believed falsehood. The idea is that such views can navigate between the Scylla of Gettier cases (where we do not want to say that the conclusion ostensibly believed on the basis of a false premise is known) and the Charybdis of admitting that there is knowledge from falsehood, and that the principle of counter closure for knowledge is false. Luzzi, however, is skeptical: "the ultimate effectiveness of this strategy," he says, "remains dubious" (2019, p. 13). Unfortunately, although he notes that we are advocates of a view of this kind (2019, p. 13, fn. 3), he does not consider our published views explicitly in any detail, preferring to focus on Montminy's (2014) development of the position. This matters because our view was not only developed independently of Montminy's—despite Luzzi's (2019, p. 22, fn. 8) claim that we "endorse" Montminy's strategy, we do not cite, and had not read, Montminy's article at the time of writing our (2014) article—but also differs from it in at least one crucial respect. Whereas Montminy appeals to dispositional belief, we speak of tacit belief. Crucially, dispositional belief is (at least roughly) a disposition to consciously judge, whereas tacit belief is not a disposition of any kind4: that is, it is not a potentiality but an actuality; thus, it can be subconscious but causally active.5 As we will see, this allows us to respond to the worries Luzzi raises for the general approach.

To begin with, consider Warfield's Handout case of alleged KFF:

Handout: Counting with some care the number of people present at my talk, I reason: 'There are 53 people at my talk; therefore my 100 handout copies are sufficient'. My premise is false. There are 52 people in attendance—I double counted one person who changed seats during the count. And yet I know my conclusion.
(Warfield, 2005, pp. 407–408)

What is Montminy's account of this case? According to Montminy, Warfield's conclusion that he has enough handouts is based on a true, dispositionally known proposition—namely, the proposition that there are approximately 53 people at the talk. According to Luzzi (2019, pp. 17–18), however, that proxy proposition is essentially evidentially based on the false premise that there are exactly 53 people at the talk. Luzzi's response to Montminy's line of reasoning is thus straightforward: he points out that, if Warfield's belief that he has enough handouts is indirectly still based on the falsehood that there are exactly 53 people in attendance, then his belief still qualifies as knowledge from falsehood. The only difference is that, on Montminy's account, it isn't directly based on the relevant falsehood; but it remains based on it nevertheless.

As we showed in our original paper, this difficulty can be avoided fairly easily. Here is the view we endorsed in 2014.

11.3.1  Knowledge Despite Falsehood (KDF)

In apparent cases of KFF, there are two true propositions t1 and t2 such that:

1. t1 evidentially supports both p and t2 for S;
2. t2 is entailed by p;
3. S knows both t1 and t2;
4. S's belief that q is properly based on her knowledge that t2.

To see how this account handles the above case, note that the variables in Handout take the following values:

t1: The result of my (Warfield's) count was "53."
t2: There are 53 people at my talk, give or take a few.
p: There are 53 people in the room.
q: I have enough handouts.

In Warfield's example, t1 is clearly known by the subject; it also evidentially supports or justifies both t2 and p to at least some degree, and t2 is entailed by p. Finally, note that the truth t2 is a rough-and-ready approximation of the falsehood p and is, in addition, known in the examples. Moreover, since Warfield's belief that q is properly based on his knowledge that t2—namely, by competent deduction—it follows that the examples are not cases of knowledge from falsehood but rather cases of knowledge despite falsehood.

What is crucial about this account is that, contrary to Luzzi's assumption, we do not claim that Warfield's belief that t2 is derived from, or epistemically based upon, his false belief that p. Instead, we take, as Sorensen (this volume, n. 8) notes, the far more plausible view that Warfield's tacit belief that t2 is based on his knowledge that t1.6 Thus, Luzzi's objection that Warfield's belief is indirectly based on the falsehood p misfires as an objection to our view. In short, Montminy as construed by Luzzi must appeal to an "approximately" belief that is supported by a false belief, whereas we do not.

Luzzi also presents another objection to the KDF view, based on the following example (due to Fitelson, 2010):

Fancy Watch*: I have extreme confidence in the accuracy of my fancy watch. Having lost track of the time I look carefully at my watch. I reason: 'It is exactly 2:58 p.m.'. Having learned from my logic professor earlier that day that precision entails approximation, I conclude, without any loss of confidence in my premise: 'Therefore, it is approximately 2:58 p.m.'. I know my conclusion but as it happens it is exactly 2:56 p.m., not 2:58 p.m.
(Luzzi, 2019, p. 24)

First, here are the values for our propositions7:

t1: My watch reads "2:58."
t2: It is approximately 2:58 pm.
p: It is exactly 2:58 pm.
q: It is approximately 2:58 pm.
Luzzi thinks that the case is problematic for the defender of KDF because his belief that q is intuitively knowledge but is explicitly derived from, and thus based on, the false belief that p. Now, as we pointed out in our 2014 paper, philosophers' stipulations are sometimes incoherent or at least psychologically implausible. We think that this is such a case. According to Luzzi's stipulation, his belief that it is approximately 2:58 pm is caused solely by, and based solely on, the explicit derivation from the belief that it is exactly 2:58 pm. However, in standard cases of the kind suggested here, Luzzi believes and knows (at least tacitly) that it is approximately 2:58 pm, and he bases this belief (at least partly, and possibly tacitly) on his knowledge that t1—that is, on his knowledge that his watch reads "2:58." By contrast, if there are possible cases in which a subject has the combination of attitudes that Luzzi stipulates, that subject simply will not count as knowing his or her conclusion. This can be illustrated further by the counterfactual consideration that if, in Fancy Watch*, Luzzi were told that it is not exactly 2:58 pm, he would retain his belief that it is approximately 2:58 pm, despite the fact that what he thought would ground his belief has now been shown to be false.

Luzzi (2019, p. 25) suggests further that the defender of KDF must "attribute[…] to cognisers an unpalatable insensitivity to the character of their reasoning." We are not sure the insensitivity in question is unpalatable: Kornblith (2002, p. 111), for instance, emphasizes empirical findings to the effect that human subjects are unaware of significant causal factors influencing their beliefs, and we would be unsurprised to discover similar findings in relation to beliefs arising specifically through inference. But in any case, we need not deny that Luzzi in Fancy Watch* comes to believe t2 by deductive inference. What we deny is that he knows t2 in virtue of any such competent deduction. On our view, there is a tacitly known truth—namely, t1—that grounds Luzzi's knowledge that t2. That ground for Luzzi's belief that t2 is not just available; it is an actual ground. This is illustrated by the counterfactual (and causal) sensitivity of the belief that t2 to Luzzi's belief that t1, and its counterfactual insensitivity to his belief that p. In short, tacit beliefs do more psychological—and, therefore, epistemological—work than our opponents acknowledge.
In summary, defenders of KFF misidentify the causally active grounds for belief because they under-appreciate the role of subconscious psychological mechanisms and information processing. Perhaps they do so because, like some defenders of KDF, they speak of dispositional belief rather than tacit/subconscious belief.

The final objection of Luzzi's to be discussed here is based on a novel and somewhat complicated case. We quote it here in full (Luzzi, 2019, p. 20):

One Short: Let n be the smallest number such that observing n red balls drawn with replacement from a bag containing m balls allows one to know inductively that the next ball drawn will be red. In Gladys's office there is a large pile of visually indistinguishable bags, each known by Gladys to contain exactly m balls and each holding varying ratios of red balls to black balls. There are balls of no other color in the bags. Every morning at 10 am, Gladys selects a bag randomly, extracts one ball from that bag and places it back in that bag. If the ball was red, she then takes the whole bag and its contents and puts it on the table in the common room. If the ball was black, she keeps the bag in her office. Each evening, if she left a bag on the table, she takes it back to her office and places it randomly among the pile of bags. Everyone in Gladys's workplace, including Sam, is aware of the above. Around noon one day, Sam walks into the common room, sees the bag and extracts exactly n−1 balls with replacement, all of which are red. Knowing there to be m balls in the bag, but momentarily forgetting that Gladys has already extracted one, she reasons: (p) I have drawn a red ball from the bag n−1 times; so (q) the next drawn ball will be red.

As Luzzi points out, in this case Sam doesn't know her conclusion, since, by stipulation, she has drawn one ball too few to obtain inductive knowledge of q.8 If Sam had, on Luzzi's assumptions, made one more draw, her premise p would be strong enough to ground her knowledge that q. Luzzi now objects to the defenders of KDF that Sam in fact has dispositional knowledge of the potential proxy premise p′: a red ball has been drawn from the bag n times (n−1 times by me [Sam] and once by Gladys). Since Sam has this dispositional knowledge, Luzzi claims, defenders of KDF are committed to the implausible view that Sam knows q.

Our response should be obvious by now: Sam may have dispositional knowledge that p′ (that is, she may be disposed to know p′), but she doesn't have tacit knowledge that p′. As Luzzi (2019, p. 21) himself elegantly puts it, we need to distinguish between "a basis that one actually exploits in inferential reasoning and a basis that is merely available but which remains idle." In this case, the available basis isn't exploited but remains idle. The extra premise invoked may be dispositionally known, but it is not tacitly known—nor, therefore, is it reasoned from.
The subject in the case (Sam) knows the proposition that someone else drew one red ball, but she is not currently recalling it or basing her conclusion on it. Sure, she could recall it and make the inference, thereby securing knowledge of the conclusion (in the case where m = n), but she doesn't. So, in fact, her belief in the conclusion is not justified. In the counterfactual case where she recalls the additional premise and then infers, her belief is justified. As non-dispositionalists, we are under no pressure to admit she's justified in the actual case.

There is one further complication with Luzzi's One Short example. We take it that Luzzi assumes that, in his case, n ≠ m. But it is worth noting that, if n ≠ m, Luzzi is committed to the view that we can know lottery propositions (cp. Hawthorne, 2004; Williamson, 2000). We take this implication to be implausible—in fact, sufficiently so to dismiss the case outright, but an alternative version of the example might be designed that doesn't have this problem.

It is time to sum up. In this section, we have considered objections to KDF views, but found them wanting in relation to our own variant of the position, which relies on tacit, rather than dispositional, beliefs that serve as actual, and not merely potential, grounds for belief in the target propositions in the cases under consideration. This is what we should expect. In general, for a belief to be knowledge, there must be some good, "normal," explanation of its being held (cf. Ball, 2013; Peet & Pitcovski, 2018). That normal explanation will not appeal to the falsehood that is explicitly believed, but to the relevant truths that are tacitly believed—and, in our view, to implicit reasoning from them to the targets.

11.4  Response to Sorensen

Sorensen (this volume) considers a nice example in which, he suggests, an agent generates knowledge from non-knowledge through deduction. Mini thinks, but does not know, that not all deductive arguments reason from general to particular (in brief, ¬D), from which she infers that not all deductive arguments reason from general to particular (that is, once again, ¬D). The inference has a deductively valid form (P, therefore, P), and Sorensen suggests that Mini comes to know the conclusion on the basis of this inference. The example is especially nice, however, because the argument under consideration itself verifies the conclusion: that is, it is an instance of an argument that does not proceed from general to particular, and so it makes clear that not all arguments proceed in this way.

Unfortunately for Sorensen, however, this very feature of the argument undermines his case. To see this, consider Medi, who guesses that water is H2O, then argues on this basis to the conclusion that water is H2O. Pursuing this line of argument of course does nothing to justify her belief in the conclusion; her inference does not result in knowledge. Or, perhaps better, suppose Maxi reasons as follows: some argument proceeds from particular to general; therefore, some argument proceeds from particular to general. Whether the sole claim involved here is analyzed as particular or general, the argument does not proceed from particular to general, and Maxi cannot come to know the conclusion in the way Mini is said to by Sorensen. These examples suggest that what justifies Mini's belief that not all deductive arguments reason from general to particular is not her inference from her belief in this very premise, but rather the fact that she appreciates that the argument is itself deductive and does not reason from general to particular. Thus, she is able to come to see the truth of the conclusion by virtue of her awareness of a fact that verifies it.
Whether this process itself involves inference or insight is not entirely obvious. Does Mini simply come to see the truth of the conclusion as a result of going through her argument (rather than on the basis of its premise)? Or does she tacitly reason as follows: this argument is deductive but does not reason from general to particular; therefore, not all deductive arguments reason from general to particular? Either way, Mini does not come to know something merely by way of inference from an unknown premise.9

Sorensen says, "[a]dmittedly, Mini must be aware of the nature of her deduction. But this information is completely endogenous to her competent deduction" (this volume, p. 5). This is interesting. We agree that "[i]t is not as if [Mini] reasoned, 'Some arguments are expressed in italics, therefore, some arguments expressed in italics'" (this volume, p. 5). Reasoning is not necessarily expressed at all, let alone expressed in italics—and so one might engage in the very same reasoning with or without italics being in play. The awareness of the features of the argumentation that Mini displays is not like that—it is integral to her reasoning. And yet we think that the information in question in Mini's case is exogenous to the explicitly mentioned deduction itself: it is possible to engage in that deduction—to advance deductively from the premise of Mini's argument to its conclusion—without noticing the features of the argument that support Mini's knowledge.10 Accordingly, there are two cases to consider: in one of them, Mini does not base her belief in the conclusion on this information, and she fails to come to know her conclusion11; in the other, she does know the conclusion, but on the basis of the further information, not just the premise.

Indeed, we can see that, in the case where knowledge is achieved, the conclusion is not believed on the basis of (belief in) the premise, since merely entertaining the premise, or assuming it for the sake of argument, would suffice. This is perhaps particularly clear with the following, earlier example: it is possible for a conclusion to entail its premise; therefore, it is possible for a conclusion to entail its premise (Sorensen, 1991, p. 249). Here, considering the possibility that a conclusion entails its premise, or assuming it to obtain, together with an appreciation of the form of the argument, puts the subject in a position to know that the conclusion of the argument is (actually) true. But this knowledge cannot be justified on the basis of a belief in the premise—for there is no such belief in the case in question!

"One vague suspicion," says Sorensen (in his earlier work), "is that an argument [of the kind(s) with which we are concerned] causes belief in the conclusion without providing a reason for the conclusion" (1991, p. 252). This is at least related to our worry about alleged cases of knowledge from non-knowledge, which is that belief in the premises cannot justify belief in the conclusion. Our worry is perhaps especially clear in cases where a key premise is false: belief in a false premise, even if causally implicated in generating belief in the conclusion, cannot serve to justify that latter belief, since evidence is (and reasons are) factive. But equally, in our view, true belief that falls short of knowledge cannot serve as evidence that justifies belief in the conclusion, since E = K.

Sorensen denies the truth of the vaguely suspected claim on the grounds that the arguments in question are rationally persuasive. More specifically, he distinguishes ontic and propositional reasons: roughly, the former are things (the meanings of noun phrases) that are reasons, while the latter are propositions (the meanings of sentences); thus, a broken leg might be an ontic reason for a hospitalization, while that the leg is broken is a propositional reason for the same. Armed with this distinction, Sorensen claims that some arguments are ontic (rather than propositional) reasons for believing their conclusions—and that this is what makes them rationally persuasive.

We have no reason to deny any of this. It is clear that in the (good) case of Mini (in which she ends up with knowledge), her belief in the conclusion is justified because of—and in virtue of—her consideration of the argument. We think that, in attending to the argument, she comes to know certain truths about it—e.g., that it is formally valid and that it does not proceed from general to particular—and that this knowledge explains her knowledge of the conclusion. So we think the argument provides Mini with propositional reasons to believe her conclusion. This is consistent with—though does not entail—the argument being an ontic reason for her to believe the conclusion. (Our view does not imply that there are any ontic reasons, though neither does it imply that there aren't.) Crucially, it does not commit us to saying that the premise is Mini's reason for believing her conclusion, or that it justifies it. And since Sorensen does not claim that it is—he says the argument is a reason for believing the conclusion in Mini's case, not that the premise is—we do not find ourselves in conflict with his views on this point.

Zooming out a little, it seems to us that Sorensen's case of Mini (this volume), and some of his earlier (1991) examples, encourage us to consider the relationship between doxastic and propositional justification. In these terms, the central lesson he wishes us to draw, it seems, is that the former cannot be understood solely in terms of the latter. Whether this is because there are ontic reasons for belief (and these include arguments), or for more prosaic reasons,12 we agree.

11.5  Concluding Remarks

In our view, there is no KFF, though there are cases of KDF. We explain how belief in the conclusions of arguments can be justified in such cases by appealing to implicit reasoning from, and tacit knowledge of, alternative "proxy" premises that are true (Ball & Blome-Tillmann, 2014). As we showed in the first section of the current chapter, it is quite uncontroversial in cognitive psychology to think that such unconscious mental states and reasoning occur. It would, of course, be problematic to appeal to merely dispositional belief in the proxies (cf. Luzzi, 2019), i.e., a disposition to consciously judge these propositions true, in explaining the presence of the knowledge in these cases (as some have admittedly attempted—cf. Montminy, 2014). But, as we made clear in the second section, we do no such thing: our tacit mental states are causally involved in the implicit reasoning we posit and serve as actual, not merely available (hence potential, or possible), grounds for belief in the conclusion. Implicit cognition does real epistemological work, and appealing to it can help us to explain how we know what we do.

In the third section, we considered the ingenious case of Mini, who acquires the knowledge that not all deductive arguments proceed from general to particular by engaging in a process of reasoning that begins from her unjustified belief in this very proposition (Sorensen, this volume). We argued that Mini also knows some additional information which serves to justify her final belief. But we also suggested that the case may help us to see that doxastic justification cannot be reduced to propositional justification—though possibly not for the reasons that others have indicated (Sorensen, 1991); consideration of the cognitive processes involved is likely to be required. We conclude by briefly expanding on this point.

Sorensen (this volume) suggests that Mini is in something of the position of the slave boy in the Meno, generating knowledge out of her own ignorance through a process of reasoning—and he intimates that a commitment to the principle of counter closure, which we endorse, may explain (though not justify) the Platonic doctrine of recollection, according to which we have innate knowledge (e.g., of the axioms from which the boy can deduce the theorem he comes to recognize as true). But Sorensen takes the lesson of reflection on the Socratic method to be a different one: "Competent deduction can add justification beyond what is present in the premise" (this volume, p. 9).

We see no reason to abandon the principle of counter closure in this—though we welcome the encouragement to investigate deduction itself … and other cognitive processes. Socrates, of course, asks questions of the slave boy. The boy, in turn, considers these questions, advances answers to them, and, by recognizing dead ends as such, comes to arrive at some (mathematical) knowledge. It is through the process of pursuing this investigation—which begins with the mental state of wondering (what the answers to Socrates' questions are)—that he generates knowledge. But, of course, acknowledging this fact does not yield pressure to regard the knowledge in question as based on the wondering—which has, as its object, a question, not a proposition. Similarly, we see no reason to think that the cognitive processes involved in cases of KDF, significant as they may be in explaining the (doxastic) justification of the target belief, provide us with reason to regard that belief as "epistemized" by (belief in) the premise.


Notes

1. One particular version of Bayesianism that has been attracting attention recently is that of predictive processing (Clark, 2013; Hohwy, 2013). This particular theoretical approach, on which Bayesian reasoning is used specifically to minimize prediction error, has proven controversial—see, e.g., Cao (2020) for a recent critical discussion. But the more general Bayesian view, on which subjects are in states that can be characterized by probability distributions on variables, and updating from one time to another is undertaken in accordance with Bayes' rule, has been much more widely adopted.
2. Our preferred theoretical approach (cp. Evans & Stanovich, 2013) is one in which rapid autonomous processes (Type 1) are assumed to yield default responses unless intervened on by distinctive higher order reasoning processes (Type 2). Evans and Stanovich go on to give three kinds of evidence in favour of two types of reasoning (autonomous vs. decoupling/working memory): experimental manipulations, neuroscientific evidence, and individual differences. (They also contrast their preferred default-interventionist approach—shared with Kahneman—with parallel-competitive accounts.)
3. Luzzi (2019, p. 10).
4. Of course, it may ground various dispositions, including the disposition to judge. But this is not to say that it is a disposition.
5. For instance, it may figure in implicit reasoning.
6. We also think that Warfield's belief that q is, more or less directly, based on his knowledge that t1.
7. Note that in this case t2 = q.
8. Epistemicists about vagueness will have no qualms about granting such a stipulation in general, though even they might suspect that what the number in question is varies with context, or circumstance.
9. We might compare Descartes' cogito: when he forms the (justified) belief (and indeed knowledge) that he exists, does he infer this from the premise that he thinks? Some commentators deny that he does: perhaps it is a simple insight—after all, Descartes explicitly denies that the cogito is a syllogism. On the other hand, the inference from the claim that one thinks to the claim that one exists is modally, if not formally, valid: it is impossible for the premise to be true without the conclusion also being true, and since Descartes can see the truth of the premise, perhaps he likewise intuits (or singularly grasps) the validity (i.e., necessary truth-preservation) of the transition.
10. Sorensen appears, by contrast, to be suggesting that her awareness (of the facts that her argument form is valid and that it manifests the truth of the conclusion) is, in effect, built into the requirement that her deduction is competent. We do not think that can be right. In more usual cases of competent deduction, only the first of these conditions can be required, since the second of them is not met!
11. Sorensen says, "Thanks to the narrowness of Mini's reasoning, there is nothing that sustains her belief in her conclusion beyond her competent deduction of the conclusion from the premise" (manuscript: 5). We take this to be true in the case just considered—with negative consequences for the assessment of her as knowing the conclusion.
12. While the doxastic variety of justification entails the propositional (in our view), it does not follow that something (such as causation) can be added to this necessary condition to yield sufficient conditions (after all, there can be deviant causal chains).


References

Apperly, I. A., & Butterfill, S. A. (2009). Do humans have two systems to track beliefs and belief-like states? Psychological Review, 116(4), 953.
Ball, B. (2013). Knowledge is normal belief. Analysis, 73(1), 69–76.
Ball, B., & Blome-Tillmann, M. (2014). Counter closure and knowledge despite falsehood. The Philosophical Quarterly, 64(257), 552–568.
Bongiorno, F., & Bortolotti, L. (2019). The role of unconscious inference in models of delusion formation. In Inference and consciousness (pp. 74–96). Routledge.
Cao, R. (2020). New labels for old ideas: Predictive processing and the interpretation of neural signals. Review of Philosophy and Psychology, 11(3), 517–546.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8, 223–241.
Graziano, M. (2018). Dual-process theories of numerical cognition. Springer.
Hohwy, J. (2013). The predictive mind. Oxford University Press.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Kornblith, H. (2002). Knowledge and its place in nature. Oxford University Press.
Kruglanski, A. W., & Gigerenzer, G. (2011). Intuitive and deliberate judgements are based on common principles. Psychological Review, 118(1), 97–109.
Ludwig, K., & Munroe, W. (2019). Unconscious inference theories of cognitive achievement. In Inference and consciousness (pp. 15–39). Routledge.
Luzzi, F. (2019). Knowledge from non-knowledge. Cambridge University Press.
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44(3–4), 463–475.
Peet, A., & Pitcovski, E. (2018). Normal knowledge: Toward an explanation-based theory of knowledge. The Journal of Philosophy, 115(3), 141–157.
Quilty-Dunn, J., & Mandelbaum, E. (2018). Inferential transitions. Australasian Journal of Philosophy, 96(3), 532–547.
Quilty-Dunn, J., & Mandelbaum, E. (2019). Non-inferential transitions: Imagery and association. In Inference and consciousness (pp. 151–171). Routledge.
Rescorla, M. (2019). A realist perspective on Bayesian cognitive science. In Inference and consciousness (pp. 40–73). Routledge.
Sorensen, R. A. (1991). P, therefore, P′ without circularity. The Journal of Philosophy, 88(5), 245–266.
Sorensen, R. (this volume). Mini-Meno: How to get more out of your transmission.
Warfield, T. A. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.

12 Knowledge from Non-Knowledge in Wittgenstein’s On Certainty: A Dialogue

Michael Veber

DOI: 10.4324/9781003118701-18

A groggy hospital patient awakens to find someone seated in a chair next to his bed.

PATIENT: Man, I got a headache. What do they use for anesthesia in this place? Goldschläger? Wonder how my surgery went. At least there’s a TV in here. This should be a good game.
L.W.: This game proves its worth. That may be the cause of its being played, but it is not the ground. (474)1
P: That’d be my strategy too. Don’t even think of running the ball. The air game is gonna be key. I’m skeptical they can pull out a win but I’m glad I got somebody to watch it with. What’s your name anyway, buddy?
L.W.: My name is “L.W.” And if someone were to dispute it, I should straightaway make connexions with innumerable things that make it certain. (594)
P: I wasn’t planning on disputing it. But if you’re gonna be that way, I’m game. Tell you what: I don’t think anything’s certain. You may believe your name’s L.W. but there’s lots of ways you could be wrong. Maybe there was some sort of mix-up with your birth certificate and people have been calling you L.W. when really your name is Steve. Why couldn’t that happen? And how do you know it’s not happening now?
L.W.: “But I can still imagine someone making all these connexions, and none of them corresponding with reality. Why shouldn’t I be in a similar case?” If I imagine such a person I also imagine a reality, a world that surrounds him; and I imagine him as thinking and speaking in contradiction to this world. (595)
P: You’re begging the question there, L-Dub. The whole issue is whether the world I’m describing is in contradiction to this one or identical to it.
L.W.: It is part of the language-game with people’s names that everyone knows his name with the greatest certainty. (579)

P: Ah, so it’s not just that a world where your name’s not L.W. would be different from this world. It’s that anyone living in that world would be speaking a different language. But why? The language is way older than you. How could it be a rule of our language that your name’s L.W.?
L.W.: When we say “Certain propositions must be excluded from doubt,” it sounds as if I ought to put these propositions—for example, that I am called L.W.—into a logic-book. For if it belongs to the description of a language-game, it belongs to logic. But that I am called L.W. does not belong to any such description. The language-game that operates with people’s names can certainly exist even if I am mistaken about my name,—but it does presuppose that it is nonsensical to say that the majority of people are mistaken about their names. (628)
P: Now it sounds like you’re backpedaling. But let me see if I got this right. It’s not the particular fact—if it is a fact—that your name’s L.W. that’s a rule of our language—or “language game” as you like to call it. It’s this more general fact that most people are not mistaken about their names. But couldn’t there be a world where there are rampant undetected birth certificate mix-ups and it turns out most people are mistaken about their own names? And couldn’t people in that world speak the same language we do? So how can it be a rule of our language that most people are not wrong about their names?
L.W.: You must look at the practice of language, then you will see it. (501)
P: We’re talking about practice? Practice? Not a game? But alright, sure. The way we go about our business with language assumes we’re not wrong about our own names. But how does that answer my question? The fact that we assume it doesn’t make it true.
L.W.: “Do you know or do you only believe that your name is L.W.?” Is that a meaningful question? Do you know or do you only believe that what you are writing down now are German words? Do you only believe that “believe” has this meaning? What meaning? (486)
P: You’re expecting me to write this down? I just got out of surgery. But yeah, those sound like good questions to me. So what?
L.W.: For months I have lived at address A, I have read the name of the street and the number of the house countless times, have received countless letters here and given countless people the address. If I am wrong about it, the mistake is hardly less than if I were (wrongly) to believe I was writing Chinese and not German. (50)
P: Hold on. Languages are defined by vocabulary and syntax. Except in trick cases like “I am now speaking German,” the question of what language a statement is in is completely different from the question of whether it’s true. Being wrong about your name or address or something like that isn’t on the same level as being wrong about
what language you’re speaking. But even so, why couldn’t we be wrong about that too? Yeah, it seems like we’re speaking German to each other here. But suppose the doctors at this place secretly implanted Star Trek universal translator devices into our ear canals and we’re really speaking English but the device makes it sound to us like we are speaking German. How can you be certain that’s not happening?
L.W.: I have a right to say “I can’t be making a mistake about this” even if I am in error. (663) But that does not mean that I am infallible about it. (425)
P: You got a right to say what you want. Free country. But what difference does that make? I want to know what we should think. If you give me that it is possible your name isn’t L.W.—and it sounds like you are giving me that when you say you’re not infallible—why should you be certain even if you have a right to say you are? Sure, you gotta walk before you run and you gotta talk before you can think but that doesn’t mean—Oooh! You see that hit? He laid that safety out! And they were nowhere near the play. The set is muted but you can bet the announcers are going on about how wrong that was even though there won’t be a penalty because that’s legal. You see? Just because something is permitted by the rules of the game doesn’t mean it’s something you should do even though there is a sense in which you have a “right” to do it. The same goes for language-games and epistemology. Even if the rules of the language-game give you a right to claim to be certain about your name, that doesn’t mean you are justified in being certain about it. And even if playing the game requires you to be certain that your name’s L.W., that just pushes the problem back a step. “Am I epistemically justified in believing my name is L.W.?” becomes “Am I epistemically justified in doing what it takes to play this game?”
L.W.: I cannot be making a mistake about 12 × 12 being 144. And now one cannot contrast mathematical certainty with the relative uncertainty of empirical propositions. For the mathematical proposition has been obtained by a series of actions that are in no way different from the actions of the rest of our lives, and are in the same degree liable to forgetfulness, oversight and illusion. (651)
P: Nobody told me there was gonna be math. But I think I get what you’re driving at. Simple mathematical truths are supposed to be “analytic” in the sense that they’re made true by the meanings of the terms. And that’s supposed to be what makes them so obvious and certain. So here’s a case where you might say mere proficiency with the language—simply understanding the meanings of “12” and “times” and so on—in some sense commits you to treating “12 × 12 = 144” as absolutely certain. And then you wanna say that there are empirical propositions that work the same way?
So just understanding the language commits you to holding these things certain too?
L.W.: If the proposition 12 × 12 = 144 is exempt from doubt, then so too must non-mathematical propositions be. (653)
P: But why? There’s a big difference between the proposition that 12 × 12 = 144 and the proposition that your name’s L.W.
L.W.: The propositions of mathematics might be said to be fossilized.—The proposition “I am called ….” is not. But it too is regarded as incontrovertible by those who, like myself, have overwhelming evidence for it. And this not out of thoughtlessness. For, the evidence’s being overwhelming consists precisely in the fact that we do not need to give way before any contrary evidence. And so we have here a buttress similar to the one that makes the propositions of mathematics incontrovertible. (657)
P: What do you mean you don’t need to give way to any contrary evidence? Plenty of things could happen that’d make it rational for you to doubt that your name’s L.W. Suppose the Austrian Bureau of Birth Certificates calls to notify you of a mistake—
L.W.: The question “But mightn’t you be in the grip of a delusion now and perhaps later find this out?”—might also be raised as an objection to any proposition of the multiplication tables. (658)
P: Yep. That’s why we can’t be certain about that stuff either. Suppose the world’s experts come out in agreement that the propositions of basic arithmetic are false because of some fancy proof the rest of us don’t understand. That’d give us good reason to doubt the truths of arithmetic—even if there is some undetected error in the proof. So what’s the problem? I never liked the idea that mathematical propositions are true because of the meanings of the words. Take a sentence like “triangles have three sides.” The meanings of the words there determine what the sentence says. But that doesn’t make the sentence true. You need some kind of mathematical fact for that. Yeah, it’s hard to say what a mathematical fact is but that’s metaphysics and we’re doing epistemology here. I never bought the idea that anyone who understands the language has to accept obvious mathematical truths either. For any mathematical truth, there could be an expert mathematician, logician, or philosopher who rejects it for fancy theoretical reasons but speaks the language just as well as any of us.2 So even when we’re just thinking about mathematical truths, understanding the language—or being a player in the game—does not require acceptance of any particular proposition. Now suppose I’m wrong about all that and certain assumptions—mathematical and contingent—are built into the language and accepting them is required for playing the game. We still need to ask whether we can be sure those assumptions are true. The only way to answer that is to provide some compelling evidence for them.
The fundamental assumptions of the language need to be justified, just like everything else.
L.W.: Language did not emerge from some kind of ratiocination (475). Children do not learn that books exist, that armchairs exist, etc., etc.,—they learn to fetch books, sit in armchairs, etc. (476)
P: Kids don’t know jack. But I’ll give you that the way most people talk, including the way they learn language and teach it to kids, assumes things about what the world is like. That doesn’t mean anybody knows those assumptions are true.
L.W.: “So one must know that the objects whose names one teaches a child by an ostensive definition exist.”—Why must one know they do? Isn’t it enough that experience doesn’t later show the opposite? For why should the language-game rest on some kind of knowledge? (477)
P: On your view, I don’t see how the game could not rest on knowledge, unless you think we can’t know anything at all—and that doesn’t seem to be what you’re getting at. Lemme put it this way. If everything you say—and everything you think—rests on some fundamental assumptions and you don’t know whether those assumptions are true, how do you know that anything built upon them is? Especially when you admit your assumptions include things that might actually be false. Knowledge can’t come from ignora—Oh, look, a nurse. Hey Nurse M! Can you come in here and turn this TV up? We can’t hear the game. I’d do it but my hands are all bandaged up from the surgery. How’d that go by the way? I remember old sawbones saying he might have to amputate. The way you got these bandages, I can’t tell if he did or not.
NURSE M: The two of you, I am quite certain, have had enough television for the day. And the doctor, the one who performed your surgery, will arrive in a short amount of time, not an amount of time I can specify with much precision at the present moment but an amount of time I, nonetheless, am quite certain to be rather short, to discuss the surgical procedure with you.
P: Why can’t you just tell me? Weren’t you assisting at the operation?
M: Rather than have me tell you what the outcome of the surgery was, you must wait for the Doctor. But I assure you that, whatever its outcome, there is not now nor will there be in the future, any need to worry.
P: Easy for you to say, pal. You know you got two hands. Or you think you do anyway.
M: So far from its being true as you maintain, that we cannot be certain of such things or, as your companion has claimed repeatedly, that the sense in which we can be certain of such things precludes our knowing them, I can now provide a perfectly rigorous proof. Here, as I make a certain gesture with my right hand, is one hand. And here, as I make a similar gesture with my left hand, is another.3

P: Fascinating. Anything else?
M: Since I know I have two hands, I know also there are at least two external things. Beyond that, I can provide a list of truisms that I know with certainty. I know that right now there exists a live body, my own body. This body was born at some time in the past and it has existed ever since. There was a time in the past where this body of mine was much smaller than it is now.
P: I hear ya. Those honey buns from the vending machine are a killer.
M: I know that numerous physical objects other than my own body exist and are not merely presented in space but are also to be met with in space at various distances from each other. I know that this body of mine has never traveled beyond Earth’s atmosphere.4 I know that there is a door in that wall and—
L.W.: What is the proof that I know something? Most certainly not my saying I know it. (487)
P: Nailed it, L-Dub. My gripe exactly. You could be wrong about all that stuff, buddy. How do you know you’re not living in some kind of Matrix-type simulation on Mars lying prone and naked in a pod being fed hallucinatory experiences and fake memories?
M: Because I am, at present, and as you both can see, standing up. I am not seated in a chair or lying down in a pod. I am not naked but clothed. I am here on Earth speaking and not singing. I know you are there lying in that bed. I know that seated adjacent to you at a distance of—
L.W.: I know that a sick man is lying here? Nonsense! I am sitting at his bedside, I am looking attentively into his face.—So I don’t know then, that there is a sick man lying here? Neither the question nor the assertion makes sense. (10)
M: The assertion not only makes sense but is true. And you are both, I am quite certain, sick. Now, if you would please roll up your tweed, and by that I mean the jacket that currently adorns your arms, shoulders and torso, so I may administer your injection. It sounds as though you are rather overdue.
P: Hold on a second there buddy. I gotta hear this. He was just telling me none of that stuff you’re going on about is knowledge. You disagree, obviously. But I’m trying to figure out how we’re supposed to get knowledge from assumptions we don’t know are true. If you don’t know your foundation is any good how can you know what you put on top of it is?
L.W.: I should like to say: Moore does not know what he asserts he knows, but it stands fast for him, as also for me; regarding it as absolutely solid is part of our method of doubt and enquiry. (151)
P: Regarding it as solid don’t make it that way. If our method of enquiry is built on all this stuff we don’t know on the back end, I don’t see how knowledge can come out the other side unless it’s just some kind
of conditional knowledge. Like, you can’t know p but you can know that if your assumptions are correct, p. But lemme ask a different question. Our nurse here thinks he knows he has hands because he’s looking at them. So why is “I have two hands” or “I am not naked” part of our method of doubt and enquiry instead of something that—at least potentially—results from it? If knowledge is possible at all, why couldn’t somebody like me, who just got out of surgery, enquire into whether he has hands by ripping off his bandages and taking a look?
L.W.: My having two hands is, in normal circumstances, as certain as anything that I could produce in evidence for it. That is why I am not in a position to take the sight of my hand as evidence for it. (250)
P: Oh, now I get it. Mine ain’t normal circumstances because my hands might’ve got lopped off in surgery. So I can know I have hands by taking a look. But that doesn’t mean either of you two can.
L.W.: If I don’t know whether someone has two hands (say, whether they have been amputated or not) I shall believe his assurance that he has two hands, if he is trustworthy. And if he says he knows it, that can only signify to me that he has been able to make sure, and hence that his arms are e.g. not still concealed by coverings and bandages, etc. etc. (23)
P: So you can know I have hands if I’m a trustworthy guy who takes a peek and tells you. Got it. But let’s go back to this idea you can’t know you have hands. I don’t buy the argument you gave for that. In normal circumstances, nothing can be more certain for you than the proposition that you have hands. So what? Why does the evidence have to be more certain than the thing it’s evidence for? Sounds like you’re looking at justification in some sort of rigid bottom-up way. But plenty of people think beliefs can mutually support each other.5 If that’s right, I don’t see why an initially less certain belief couldn’t offer support to an initially more certain one as well as vice versa.
L.W.: When we first begin to believe anything, what we believe is not a single proposition, it is a whole system of propositions. (Light dawns gradually over the whole.) (141) It is not single axioms that strike me as obvious, it is a system in which consequences and premises give one another mutual support. (142)
P: Alright, so you think it’s not all bottom up. Terrific. But then I’m still wondering why you can’t get evidence for the existence of your hands by looking. Since you allow for mutual support, you can’t say it’s because nothing is more certain for you than the proposition that you have hands because that presupposes some kind of one-directional linear model. But even aside from that, there’s plenty of ways a less certain belief can support a more certain one. Lemme give you an example. Suppose for instance a friend calls you on the phone. You ask where he’s calling from and you clearly hear him say
“New York” followed by what sounded like “City.” The last word wasn’t perfectly clear because there was a bit of static on the line—not too much but a bit. You believe on pretty good grounds that your friend is in New York City. Based on that belief, you infer he is in the state of New York. Given what you heard on the phone, the conclusion you deduced is more certain than the premise. Evidence can be less certain than what it’s evidence for. And that means the fact—if it is a fact—that nothing is more certain for our nurse than his belief that he has hands does not mean nothing could serve as evidence for it.
L.W.: I have a telephone conversation with New York. My friend tells me that his young trees have buds of such and such a kind. I am now convinced that his tree is … Am I also convinced that the Earth exists? (208)
P: It would be weird to become convinced of the existence of the Earth like that. But I can imagine odd situations where that might happen. Still, even in normal circumstances, I don’t see why that conversation couldn’t provide you with some additional support for your belief that Earth exists. Even if you know Earth exists before you pick up the phone, that doesn’t mean you can’t get even more evidence for it by talking to your friend. “This confirms what I’ve known all along.” People say that kind of thing all the time.
L.W.: If a blind man were to ask me “Have you got two hands?” I should not make sure by looking. If I were to have any doubt of it, then I don’t know why I should trust my eyes. For why shouldn’t I test my eyes by looking to find out whether I see my two hands? What is to be tested by what? (125)
P: I don’t know why you’d bother looking if you already knew. But if you did want to look, I don’t see why you couldn’t get more evidence that you have hands than you had beforehand. I suppose it could also give you evidence your eyes work. Now, given the right sort of experience, your belief that you have hands and your belief that your eyes are working can mutually support each oth—
M: I am aware that, despite all I have said and despite the perfectly rigorous proofs I have offered, there will be some who think I have failed to provide a proof of the propositions that I have not only claimed to have proven but have in fact proven conclusively here today including especially the proposition that there are at least two external things, a proposition, you will recall, that I have proven from my two stated premises, namely, first, that here is a hand and, second, that here is another. Both of these premises are propositions for which I both hold conclusive evidence and, on the basis of that evidence, know to be true.
L.W.: Suppose I replaced Moore’s “I know” with “I am of the unshakeable conviction”? (86)

P: Good idea. Why be brief when you can say it with Moore?
M: The dissatisfaction felt in response to my perfectly rigorous
proof is, I believe, at least in part a product of the view, held by many including apparently Kant, that one cannot prove a proposition unless he can at the same time provide a proof of all the premises relied upon in the proof of that proposition. But I did not, nor did I intend, on this or any other occasion, to provide a proof of the premise that this, to my right, is a hand or that this, to my left, is another hand. In fact, I do not know how or whether one could provide a proof of such propositions. I do, however, know them to be true. I can know things which I cannot prove.6
P: Good point—if by “prove” you mean convince somebody else. L.W. says you can’t convince somebody that Earth exists by telling him there’s a tree outside the window. But the fact that you can’t convince somebody else of h by appeal to e doesn’t mean e isn’t good evidence for h and it doesn’t mean you don’t know h on the basis of e. Knowing a proposition is one thing, being able to convince somebody else of it is another. Now that you mention it, I wonder how much of L.W.’s epistemology rests on confusing those two things. Any enquiry has to take certain things for granted. And if you are taking it for granted, you won’t be in a position to persuade somebody who doubts it. But that doesn’t mean you don’t know it’s true. So if the idea is just that all enquiry involves assumptions we can’t prove, that doesn’t mean all enquiry rests on unknown—Oh, there you are Doc! I’ve been waiting to hear how my surgery went.
DOCTOR: I am sorry I could not be here sooner. With the big convention in town, we are overwhelmed with new patients. I do not know why they always hold that event so close to Christmas when they know that is the hospital’s busiest time. To make matters worse, it appears someone set the convention hotel on fire last night.
P: Don’t sweat it. We’ve been watching the game and discussing whether everything we believe rests on unknown assumptions. L.W. here—if that is in fact his name—thinks you don’t know that books and chairs exist. But then he wants to turn around and tell a kid to fetch his books and sit in a chair.
L.W.: If a child asked me whether the earth was already there before my birth, I should answer him that the earth did not begin only with my birth, but that it existed long, long before. And I should have the feeling of saying something funny. Rather as if a child had asked if such and such a mountain were higher than a tall house it had seen. In answering the question, I should have to be imparting a picture of the world to the person who asked it (233). This would happen through a kind of persuasion. (262)
P: See what I mean, Doc? Earth’s existence is assumed as part of his “picture of the world.” And he wants to persuade other people of
that picture. But by “persuade,” he doesn’t mean he’s gonna produce objectively good reasons for it—he can’t mean that because he thinks you can’t give objectively good reasons for that kind of thing. So he’s talking about prodding, cajoling or converting people into accepting the picture. A picture, mind you, he was not convinced of for any good reason but was himself only cajoled into. And it’s not just people we’re talking about here, Doc. He wants to do this to kids!
D: You need not worry about that. The state does not permit Mr. W to be around children anymore.
P: Well I don’t like it at all. If everything you believe rests on unknown assumptions, then how’s any of it knowledge?
D: I see no problem. Consider the kind of example you were discussing earlier where you are talking to a friend on the telephone.
P: Wait a minute. How’d you hear that? You weren’t in the room.
D: I was in the hallway and the door was hanging open. That reminds me. The doors in this facility are always supposed to be secured. I will need to have maintenance come in and investigate that one.
L.W.: If I want the door to turn, the hinges must stay put. (343)
D: You should not be turning any doors in this facility, Mr. W. You are to stay in your designated room. But I will have the maintenance staff assess the stability of the hinges, thank you.
L.W.: That is to say, the questions that we raise and our doubts depend on the fact that some propositions are exempt from doubt, are as it were like hinges on which those turn. (341)
P: I hear you L-Dub. You can’t check the hinges when you’re checking the catch. But that doesn’t mean you can’t ever check the hinges. And it doesn’t mean you can’t see they need replacing.
L.W.: The same proposition may get treated at one time as something to test by experience, at another as a rule of testing. (98)
P: Now you think our fundamental assumptions can be tested by experience? And not just in special circumstances? Then what’s your beef with the nurse?
D: Nurse? What nurse?
P: Nurse M. The guy who was just here. He left as you came in.
D: Oh, him. No, Mr. M. is not a nurse. He is another patient. We put him in that uniform because we ran out of gowns with the large influx of new people in need of treatment. But he should not be wandering the halls.
P: Probably shouldn’t be giving out injections either. But you wanted to ask me something about this phone call?
D: Yes. Suppose you have very good reason to think your friend will be visiting New York City. You receive a call and ask him where he is. He says “New York” and something that sounds like “City” but there’s some static. It is reasonable for you to believe that he is in
New York City. So you do. And from that you conclude your friend is in the state of New York.
P: That’s the story. What of it?
D: Now let us suppose your friend was driving in from the north and hit bad weather so he had to pull over a few hours outside of New York City.
P: Then why’d he tell me he was already there?
D: He did not. This is my variation on the example. He said, “I’m in New York and it’s shitty.” It only sounded like he said “New York City” because of the interference.
P: Didn’t know they still had pay phones in Poughkeepsie. What’s the point?
D: You concluded that your friend is in the state of New York from a justified belief that he is in New York City. You do not know that he is in New York City because he is not. But you do know he is in the state of New York. The example shows that knowledge can rest on false—and therefore unknown—assumptions.
P: Not so sure about that one, Doc. Sounds like a classic Gettier case. If that’s how it happens, why say I know he’s in the state of New York?
D: The kind of luck required for Gettierization is not present in this case. Given the circumstances and how you formed the belief, you could not have easily been wrong in thinking that your friend is in the state of New York. If he had been in Pennsylvania, for instance, he would not have said what he said and you would not have believed what you did. But there is no need to dwell on this example. Knowledge from false assumptions is commonplace in science. The NASA scientist’s knowledge of where the rocket will go is based upon calculations from classical mechanics. But classical mechanics is false. Ergo, knowledge can depend upon assumptions that are not known to be true.
P: You got it all wrong, Doc. For this to work, it needs to be that the unknown assumption is essential to the knowledge it begets. When you say classical mechanics is false, you mean it’s not a completely correct theory of the entire universe. But any NASA scientist who relies on classical mechanics to figure out where to point the rocket knows that as well as anybody—better even. In the sense in which it’s correct to say classical mechanics isn’t true, they don’t believe it is. And if the belief ain’t there, it ain’t essential. Now, on the other hand, you might think NASA scientists believe classical mechanics in that they believe it works well enough for the kinds of problems they deal in. But that belief is true. So any way you cut it, the scientists’ knowledge of where the rocket will end up is not based on any false assumptions.
D: But there were astronomers in the past who did believe classical mechanics was the completely correct account of the entire universe
and they employed that belief in forming other astronomical beliefs. Those astronomers knew things about the paths of the stars and planets.
P: But even in that case, the false belief was not essential. Back then they also believed—correctly—that classical mechanics has an excellent track record and is empirically adequate. It accounted for the observed phenomena up until that time. They could’ve arrived at the same predictions from that belief. So why is the extravagant belief that classical mechanics is the One True Theory of the Universe essential?
D: I no longer understand what you mean by “essential.” You seem here to be speaking in terms of propositional justification when the issue is one of doxastic justification.
P: Look Doc, this is all a red herring anyway. Suppose you’re right and these are cases of people getting knowledge out of ignorance. Still, the assumptions in play in these examples are things anyone can rationally assess. You can’t say the belief that my friend is in New York is a hinge upon which all my other beliefs turn or that it must be accepted by anyone who plays the language game or any of that other stuff. So I don’t see how this helps L.W.’s case.
D: Mr. W’s case is very severe and will require years of study and careful analysis. Yours, I believe, is less serious.
P: Well anyway, I see no reason to think all our knowledge depends on unknowable stuff. Even L.W. agrees we can treat something as a rule of testing in one context and then test it in another. So, even if they’re unknown at a given moment, our fundamental assumptions aren’t unknowable.
L.W.: It might be imagined that some propositions, of the form of empirical propositions, were hardened and functioned as channels for such empirical propositions as were not hardened but fluid; and that this relation altered with time, in that fluid propositions hardened, and hard ones became fluid. (96)
P: Now you’re talking. I could use a blast of hard fluids right now. Any of that Goldschläger left?
L.W.: The river-bed of thoughts may shift. But I distinguish between the movement of the waters on the river-bed and the shift of the bed itself; though there is not a sharp division of the one from the other. (97)
P: Sounds like he’s drifting off, Doc. May be time for another injection.
D: I believe he is trying to make it clear to you how, depending on context and circumstance, propositions can change their status. They can go from being things we submit to test to things we must assume to conduct our tests and vice versa. But underlying all that, there is always a deeper assumption that does not admit of rational assessment. And that is the sense in which all our knowledge rests on something we do not and cannot know.7

208  Michael Veber P:  What is this grand assumption and why can’t I rationally assess it? D: First you must appreciate how, in normal circumstances, someone

who denied or even expressed doubt that 2 and 2 are 4, or that he had two hands, or that the Earth has existed for more than 150 years would be challenging not only everything we believe but our entire system of belief formation. L.W.:  It strikes me as if someone who doubts the existence of the earth at that time is impugning the nature of all historical evidence. And I cannot say of this latter that it is definitely correct. (188) P:  Why not? L.W.:  “Do you know that the earth existed then?”—“Of course I know that. I have it from someone who certainly knows all about it.” (187) D:  Indeed, Mr. W. The absurdity of that answer reveals how someone who doubts that the earth existed 150 years ago is in effect challenging all of history and our methods of acquiring historical knowledge and even our concept of time itself. How are we to answer such a person? By appeal to a history book? M:  You can at once think of a vast list of propositions, a list to which we can add with great ease, whose falsehood is implied by the proposition that “Time is unreal.” The truth of any proposition on this list thus entails, and thereby enables me to know, that time is real. I had lunch and then I took a nap. After that, I went to the cinema and then I had tea.8 P:  Hey, look who’s back! Now, listen Doc. I agree that somebody raising questions like that might be challenging our fundamental beliefs about the nature of time and so forth. And if Nurse M here says he took a nap and then drank a cup of tea, that doesn’t answer the challenge. But, still, that doesn’t mean you can’t know that time is real because you know you took a nap before you had tea. People who can’t refute Zeno still know stuff moves because they see stuff moving all around them. Why shouldn’t something similar be true for knowing that time is real? I think you guys are still confusing knowledge and persuasion here. But I’m not gonna convince you of that. So I want to get back to the bit about the river-bed. 
There was supposed to be some deep underlying unknowable commitment that never goes away. But you never said what it was. D:  It is the proposition that we are not massively mistaken or deceived. Particular propositions like here is a hand, 2 and 2 are 4, earth is not less than 150 years old, and so on are just our particular ways of codifying or capturing that basic idea. In normal circumstances, there is no way to doubt those propositions without at the same time doubting our fundamental sources of evidence and methods of enquiry. But enquiry must proceed on the assumption that our most basic sources of evidence and methods of enquiry are generally reliable. And that is just another way to say we are not massively

mistaken or deceived. Since this assumption is fundamental to the very activity of enquiry, it cannot itself be enquired into. We must take it for granted. P:  I don’t buy it. For starters, why doesn’t the fact that we get along so well with our picture of the world serve as evidence for its fundamental assumptions—including the assumption that we are not massively deceived? And beyond that, there are abductive arguments against massive deception,9 there are semantic arguments against various skeptical hypotheses10 and in favor of the proposition that most of our beliefs are true,11 and on and on. Maybe you think none of those work. Maybe you’re right. But if you’ve taken the time to think about and criticize them, you’ve been investigating what you claim to be uninvestigatable. Of course, since we’re down on such a fundamental level, there will be circularity issues. But that doesn’t mean there’s no workaround. Maybe some kinds of circularity are okay.12 And if, at the end of the day, no one can give us any good evidence that we aren’t massively mistaken, I don’t see why radical skepticism isn’t a live option. Maybe nobody knows anything. L.W.:  If someone said to me that he doubted whether he had a body I should take him to be a half-wit. (257) D:  Mr. W, we spoke about this at our last session. You are not to refer to our patients in those terms. It is rude and offensive. P:  Not to mention the whole pot-kettle problem. But lemme put my point another way, Doc. These special hinge propositions are supposed to be unknowable in normal circumstances but knowable in extraordinary ones. Now I would’ve thought that if we know anything at all, we know that 2 and 2 make four, that we have bodies and so on. We don’t need extraordinary circumstances to know that stuff. It’s totally normal. 
L.W.:  It is queer: if I say, without any special occasion, “I know”—for example, “I know that I am now sitting in a chair,” this statement seems to me unjustified and presumptuous. But if I make the same statement where there is some need for it, then, although I am not a jot more certain of its truth, it seems to me to be perfectly justified and everyday. (553) D:  Yes, Mr. W. Unless there is some special circumstance requiring it, it is not normal human behavior to wander around announcing that you know you have two hands, that you know you have not been to the moon, that you know you have a body, as Mr. M is wont to do. P: Sounds like you guys are making some sort of speech act fallacy here.13 The fact that it would be weird to say you know something in normal circumstances doesn’t mean you don’t normally know it. The weirdness of the utterance might just be due to its being so obviously true that it “goes without saying”—as the saying goes. And speaking of fallacies, I don’t like the way you two keep appealing

to extraordinary circumstances to save your theory. If I say there are ordinary circumstances where people know things you say are unknowable, you’re gonna say the fact that it’s knowable makes the circumstances extraordinary. But how is that not just a textbook no true Scotsman? MR. R:  In order to define “the author of Waverley was Scotch,” we have still to take account of the third of our three propositions, namely, “Whoever wrote Waverley was Scotch.” This will be satisfied by merely adding that the c in question is to be Scotch.14 P:  Hey look everybody. It’s ol’ Rusty! How ya been buddy? Haven’t seen you since they shipped you over to the neuro-ward. Did your surgery go okay? I’m still waiting to hear back on mine. R:  I should say that what the physiologist sees when he looks at a brain is part of his own brain, not part of the brain he is examining.15 P:  Couldn’t agree more my friend. If you really need your head examined, you gotta do it yourself. D: Oh dear. It appears even more of the patients have escaped their rooms. Mr. R, I need you to remain seated in the chair while I attend to your friend. R:  A table or chair will be a series of classes of particulars, and therefore a logical fiction.16 D:  Where is security? R:  When you go and buy a chair, you buy not only the appearance which it presents to you at that moment, but also those other appearances that it is going to present when it gets home. If it were a phantom chair, it would not present any appearances when it got home and would not be the sort of thing you would want to buy.17 D:  I will need to hurry your bandage removal along so I may assess this situation in the hospital. Now where did I put those scissors? P:  Hey, what was this surgery for anyway? I don’t think you guys ever told me. D: The operation is a routine treatment for acute doxastic pseudo-anemia—a condition where the patient constantly expresses delusional doubts about the most ordinary everyday things. 
L.W.:  A doubt without end is not even a doubt. (625) D: Indeed. Rather than move you to the neurological ward, we performed a surgery that you were told could require amputation of your hands. The purpose of the surgery was to create a situation where it is possible to know you have hands by looking at them, if in fact you do. If your hands were removed, you would be able to know you do not have hands. Either way, the surgery is guaranteed to supply the patient with perceptually grounded knowledge and thus immediately cure all other skeptical delusions. L.W.:  If you do know that here is one hand, we’ll grant you all the rest. (1)

D:  I will begin cutting the bandages now. You must pay very close attention to what is revealed when they are removed. P:  Before you cut me loose Doc, lemme ask you something else. I don’t know whether I have hands or not. But your position is much different, right? D:  Yes. After all, I am the one who performed the operation. P:  And I imagine you do this kind of work all the time. D:  Your condition is somewhat rare.18 But it is a perfectly ordinary and safe procedure. P:  Now what if you peel off the bandages and see something different from what you expected? D:  What do you mean by “different”? P:  Suppose you expect to see two hands underneath the bandages but then you take them off and all you see is stumps. Or suppose you expect to see stumps and you see hands. What would you think then? D:  I would not know what to think. P:  You mean you would not change your mind and say “Ah, the surgery must’ve gone differently from how I remember it”? D:  No, I would not draw that conclusion. P:  Why not? D:  Because I cannot see how I could be wrong about that without being wrong about nearly everything else. Your operation is routine and very recent. If I remove the bandages and see the opposite of what I expect, my memory or my eyes or my mind itself must have malfunctioned. And if that is the case then we have, as Mr. W said earlier, impugned the very nature of evidence. P:  In other words, there’s a proposition about the outcome of my procedure that, for you, has the same status as the proposition that you have a body, that there is a window in this wall, that 2 and 2 make four and so on. D:  I suppose so. Now if you are through, I must remove your bandages so that I may attend to the ongoing situation in the hospital. P:  One more thing, Doc. According to you, I can know that I have hands but you can’t know that because, for you, that’s something that stands fast or acts as a hinge on which other things turn or whatever. D: Yes. 
P:  But if you can’t know it, you can’t know I know it either. D:  Come again? P:  If I know that you know that p I can deduce and thereby come to know that p myself. In other words, I can’t know that you know that p unless I can know it too. Now here’s the problem. Let’s say I do have hands under these wraps. In that case, according to you, I’m gonna know I have hands once I take a look. But also, according to you, the proposition that I have hands is for you one of these

unknowable hinges. Since you can’t know it, you can’t know I know it. Same thing happens if I don’t have hands under here. D:  It sounds as though this simple procedure may not have been sufficient to treat your condition. This was something I was concerned about. P:  You’re not listening Doc. Even if you’ve got the epistemology right—and, for the record, I don’t think you do—but, still, even if you do, there’s no way for you to tell whether this operation worked or not because there’s no way for you to know that I know something you can’t know. Come to think of it, it sounds like there’s no way for you to know whether your epistemology is right in the first place. In order to know that, you’d need to know that certain propositions that are not knowable by you are in fact known by others. But that doesn’t make any sense. If you know that they know it, you know it too. Also, lemme ask you this. Are statements of the form p is a hinge proposition themselves hinge propositions? Because if they’re not, then we could get evidence against them. And if we can get evidence that p isn’t hinge, why doesn’t that open the door to getting evidence for or against p? Now if p is hinge is itself hinge, then it’s not the kind of thing you can give anybody any good reason to believe and you have to admit you don’t have any good reason to believe it either. But what kind of philosophy is that? L.W.:  I do philosophy now like an old woman who is always mislaying something and having to look for it again: now her spectacles, now her keys. (532) D:  Mr. W., we will have none of that kind of— M:  I know that at some distance beyond the window before me is a large tree. I know that the distance from my body at which the window resides is less than the distance at which the tree is situated from that same body. D:  Thank you, Mr. M. Now please move away from the window. P:  Sounds like our nurse has become a bit unhinged, Doc. 
D: I must not delay in completing your procedure so that I may— Mr. W.! Why are you out of your chair? And where did you get that ladder? P:  I think the better question is why he thinks it’ll fit in the wastebasket. D:  I am sorry but your bandage removal will need to be rescheduled. It appears we have a severe emergency here. I must reach security immediately. P:  Cool. I’ll just finish watching the game. We can figure out whether I have hands later. L.W.:  I am sitting with a philosopher in the garden; he says again and again “I know that that’s a tree,” pointing to a tree that is near us. Someone else arrives and hears this, and I tell him: “This fellow isn’t insane. We are only doing philosophy.” (467) P:  Yeah, right. That’s what everybody in here says.


Notes
1. Throughout this dialogue, all of L.W.’s contributions are direct quotations from On Certainty, trans. Anscombe and von Wright (New York: Basil Blackwell 1969). Remarks are cited by paragraph number.
2. For a related discussion on whether understanding requires acceptance of simple logical truths, see Williamson and Boghossian, Debating the A priori (New York: Oxford University Press 2020).
3. Moore, “Proof of an External World”, repr. in his Philosophical Papers (London: George Allen & Unwin 1959), pp. 126–148.
4. Cf. Moore, “Certainty,” repr. in his Philosophical Papers (London: George Allen & Unwin 1959), p. x.
5. See, for instance, Haack, Evidence and Inquiry (New York: Blackwell 1995).
6. Cf. Moore, “Proof of an External World,” p. x.
7. The doctor is here endorsing the interpretation of Wittgenstein defended by Pritchard in Epistemic Angst (Princeton, NJ: Princeton University Press 2019).
8. Cf. Moore, “The Conception of Reality,” Proceedings of the Aristotelian Society 18 (1):101–120, 1918.
9. McCain, “In Defense of the Explanationist Response to Skepticism,” International Journal for the Study of Skepticism 9 (2):38–50, 2019.
10. Putnam, Reason, Truth and History (New York: Cambridge University Press 1981).
11. Davidson, “A Coherence Theory of Truth and Knowledge” in LePore, ed., Truth and Interpretation (Oxford: Blackwell 1986).
12. Among contemporary critics of skepticism, it is commonplace to proceed on the assumption that certain kinds of question-begging arguments against skepticism are permissible. For examples and a critique of this trend, see Veber, “Why Not Persuade the Skeptic? A Critique of Unambitious Epistemology,” International Journal for the Study of Skepticism 9 (4):314–338, 2019.
13. See Searle, Speech Acts (Cambridge: Cambridge University Press 1969), pp. 141–146.
14. This is quoted directly from Russell’s Introduction to Mathematical Philosophy, Chapter XVI (New York: Macmillan 1919).
15. This is quoted directly from Russell’s The Analysis of Matter (London: Keegan Paul 1954 [1927]), p. 383.
16. This is quoted directly from Russell’s The Philosophy of Logical Atomism (New York: Open Court 1998 [1940]) Lecture VIII, x.
17. Ibid.
18. Overall, 4.8% of respondents to the most recent PhilPapers survey claim to accept or lean toward external world skepticism. https://philpapers.org/surveys/results.pl

References
Davidson, D. (1986). A coherence theory of truth and knowledge. In E. LePore (Ed.), Truth and interpretation. Blackwell.
Haack, S. (1995). Evidence and inquiry. Blackwell.
McCain, K. (2019). In defense of the explanationist response to skepticism. International Journal for the Study of Skepticism, 9(2), 38–50.
Moore, G. E. (1918). The conception of reality. Proceedings of the Aristotelian Society, 18(1), 101–120.

Moore, G. E. (1959a). Certainty. In Philosophical papers. George Allen & Unwin.
Moore, G. E. (1959b). Proof of an external world. In Philosophical papers. George Allen & Unwin.
Pritchard, D. (2019). Epistemic angst. Princeton University Press.
Putnam, H. (1981). Reason, truth and history. Cambridge University Press.
Russell, B. (1954). The analysis of matter. Keegan Paul. (Original work published 1927.)
Russell, B. (1998). The philosophy of logical atomism. Open Court. (Original work published 1940.)
Searle, J. (1969). Speech acts. Cambridge University Press.
Veber, M. (2019). Why not persuade the skeptic? A critique of unambitious epistemology. International Journal for the Study of Skepticism, 9(4), 314–338.
Wittgenstein, L. (1969). On certainty (G. E. M. Anscombe & D. Paul, Trans.). Basil Blackwell.
Williamson, T., & Boghossian, P. (2020). Debating the a priori. Oxford University Press.

13 Vaults across Reasoning
Peter Murphy

The Knowledge from Knowledge (KFK) principle says (roughly) that someone inferentially knows p only if, and partly because, each premise from which they essentially inferred p is itself something that they know. Otherwise put, inferences that begin from states other than knowledge and end with knowledge of the conclusion are not possible – or, as I will put it, vaults-to-knowledge are not possible. In the last two decades, this principle has been subjected to a barrage of counterexamples where it is alleged that there is a vault-to-knowledge. Upon considering these cases and KFK, one might wonder whether another kind of vault can also show up in human reasoning. These other vaults are ones that involve an inference which ends in a conclusion-belief that falls short of knowledge, though that conclusion-belief is one step closer to being a knowledge state than the state from which it was inferred. I will call vaults of this second kind, vaults-to-sub-knowledge. One might also wonder about sequences of vaults of these kinds, by which I mean episodes of extended reasoning in which a succession of two or more inferences are made, each of which involves a different kind of vault. Call these extended vaults. How many vaults, and what kinds of vaults, can show up in an extended vault that ends with a vault-to-knowledge? What about for extended vaults that end with a vault-to-sub-knowledge?1

My main goal here is to provide a taxonomy of candidate vaults-to-knowledge, vaults-to-sub-knowledge, and extended vaults. I begin in Section 13.1 with four reasons for thinking that this taxonomical work is important and worthwhile. In Section 13.2, I discuss a key assumption that will inform my approach to identifying the candidate vaults. In Section 13.3, I identify seven simple vaults: four are vaults-to-knowledge and three are vaults-to-sub-knowledge. These are simple vaults because they hold across a single inference.
In Sections 13.4 and 13.5, I inventory the candidate extended vaults. And in Section 13.6, I consider an important issue about the relationship between simple vaults and extended vaults.

DOI: 10.4324/9781003118701-19
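The KFK principle can be displayed schematically. The regimentation below is a reading aid of mine, not the chapter's own notation, and it captures only the "only if" direction of the principle; the further "partly because" (explanatory) component resists a simple conditional.

```latex
% KFK, schematically: inferential knowledge requires known premises.
% K_S(q) abbreviates "S knows q"; p_1, ..., p_n are the premises from
% which S essentially inferred the conclusion c.
\[
  \underbrace{K_S(c)}_{\text{inferred conclusion known}}
  \;\Longrightarrow\;
  K_S(p_1) \wedge \cdots \wedge K_S(p_n)
\]
% A "vault-to-knowledge" is then a counterexample pattern: some
% premise-state fails to be knowledge, and yet K_S(c) holds.
```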


13.1 Importance

My goal is to provide a taxonomy of candidate vaults-to-knowledge, candidate vaults-to-sub-knowledge, and candidate extended vaults. However, I won’t be arguing that any of these vaults are genuine vaults; that is, I won’t try to show that any of these vaults are possible in a non-epistemological sense of possible. Rather I identify them as candidate vaults. As those familiar with the recent literature on KFK know, there is now a mature and nuanced debate about which (if any) vaults-to-knowledge are genuine.2 Rather than trying to adjudicate any of these debates, my goal is to broaden the current understanding of candidate vaults beyond the ones that are familiar from the recent literature. Doing this will have payoffs for those debates. In this section, I identify those payoffs as part of offering four specific reasons for thinking that this taxonomical work is important.

First, having the taxonomy will help us to avoid omissions. With the taxonomy at hand, we can be sure that a proposed epistemology of inference offers a verdict on each and every candidate vault, which is something that an epistemology of inference must do if it is to be comprehensive and to shed light on the epistemology of all inferred beliefs.3

Second, having the taxonomy will help us to avoid overgeneralizing. As we will see, the taxonomy reveals a very wide variety of vaults. Moreover, we will see reasons for thinking that quite different considerations bear on different kinds of vaults. Knowing these things can help us to avoid hastily accepting more vaults than our reasoning supports; equally, it can help us to avoid hastily rejecting more vaults than our reasoning supports.

Third, the taxonomy will help us to assess three related kinds of moves that might be made in debates over which vaults are genuine: simplicity arguments, overfitting charges, and debunking attempts. 
These moves arise when we take a less case-driven approach than the approach that is found in much of the recent literature on KFK. So while that work is often concerned with alleged counterexamples to KFK,4 suppose we take a different tack and consider a simplicity argument, which features this claim: all else equal, models of knowledge on which there are no genuine vaults-to-knowledge are much simpler – perhaps far simpler – than models on which there are genuine vaults-to-knowledge.5 This is because models of the second kind must tell us what it is that combined with a failure to know one’s premises is sufficient for, and partly explains, one’s knowledge of the relevant conclusion. By contrast, models that reject all vaults-to-knowledge just have to explain a unified phenomenon. They have to tell us why failure to know one’s premises is always sufficient for, and always helps to explain, failure to know one’s conclusion. Of course, differences in simplicity, just on their own, may not be determinative, but they should make us wonder whether the more complicated

models – especially those that recognize many different kinds of vaults – are guilty of overfitting.6 Roughly this is the charge that a model is too complicated because it was arrived at in a way that was not sufficiently sensitive to the presence of bad data, where bad data might include false judgments about cases. Having the taxonomy allows us to clearly identify each kind of vault that a given model accommodates. That, in turn, will position us to determine the extent to which that model might be guilty of overfitting. Having the taxonomy also positions us to assess attempts made by those who defend KFK to debunk judgments that certain cases involve a vault-to-knowledge. One way to carry out such a debunking is to identify a heuristic that helps to generate such judgments, and then show that in such instances, this heuristic is operating in an area where it should not be expected to be reliable.7 With the taxonomy at hand, we can clearly identify each set of judgments that needs to be debunked if some overarching model of inferential knowledge is to be undermined.

Fourth, and last, since KFK is silent on whether there are any vaults-to-sub-knowledge, even proponents of KFK have to rule on these vaults if they are to have a comprehensive epistemology of inference. That requires identifying and understanding all candidate vaults-to-sub-knowledge, which is something that I will begin to do in Section 13.3.

13.2  The Analyzability Assumption

I now turn to an assumption that I will be relying on to generate the taxonomy of vaults. The assumption is that satisfying the following four conditions is necessary and jointly sufficient for knowledge: justification, truth, a Gettier-proofing condition, and belief (hereafter JTNGB). In this section, I say more about this assumption, and what it means for the remainder of the chapter.

Modeling knowledge in terms of necessary and sufficient conditions makes the task at hand nicely tractable. In particular, necessary conditions on knowledge deliver a crisp inventory of simple vaults-to-knowledge. This is because, for each necessary condition on knowledge, there is a vault-to-knowledge in which the starting state does not meet that condition, but the conclusion state does. One thing to keep in mind is that these cases must not be ones where there is some other condition on knowledge that the starting state meets, but the conclusion state does not. For vaults-to-knowledge, this is automatically ruled out since the conclusion state must be a knowledge state. But I will also exclude cases like this from being candidate vaults-to-sub-knowledge on the grounds that those vaults are equivocal, since making the inference has also moved the person one step back from knowledge. Second, for the sake of simplicity, I will not impose a further restriction that would exclude all vaults in which, the vault condition aside, the satisfaction or

non-satisfaction of the other conditions on knowledge is just the same for the starting state as it is for the conclusion state. As we will see, some cases that have been proposed in the literature involve compound vaults, in which more than one condition on knowledge is not met by the starting state but is met by the conclusion state. I will classify these as instances of each of the vaults that they exemplify. I now turn to a natural worry about my assumption that knowledge is analyzable into these four conditions. What if knowledge is not analyzable into necessary and sufficient conditions, as Timothy Williamson has argued?8 Then aren’t we getting off on the wrong foot by adopting the model that I just sketched? A few points are relevant here. One point, which is widely recognized, including by Williamson himself, is that even models which are known to be mistaken can be used to advance our understanding and knowledge of a target subject matter.9 So error, even known error, in a model doesn’t entail that working with that model is a bad idea. Of course, this isn’t license for adopting any old model about one’s target. So, a second point: Williamson himself just holds that knowledge is not fully analyzable. In various places, he suggests that truth, belief, and some version of a safety condition are necessary for knowledge.10 For epistemologists who agree that knowledge is partly analyzable, much of what I will do in this chapter should be of interest. These epistemologists should be interested in whether there are vaults that allow an inferential transition from failing to meet one of their favored necessary conditions on knowledge with respect to one claim to meeting that same condition in a conclusion-belief that is inferred from that first claim. 
This allows people like Williamson to understand and probe questions like this one: is it possible to reason in an epistemically kosher way from an unsafe premise-belief to a safe conclusion-belief? In fact, very little in what follows requires that knowledge be fully analyzable. Third point: anyone is free to hypothesize that knowledge is fully analyzable as JTNGB in order to see what this implies about the various vaults that I will identify. These implications might then confirm, or disconfirm, that hypothesis. In this way, what follows might help us adjudicate the issue of whether knowledge is analyzable as JTNGB. Fourth point: we are free to set aside the issue of whether knowledge is fully analyzable and think about what follows as an inquiry into vaults across inferences to JTNGB states. Even if JTNGB states are not type-identical to knowledge states, the former might be interesting in their own right. And since certainly the extension of “knows” and the extension of “has a justified, true, non-Gettiered belief” overlap very considerably, studying vaults to the latter might provide us with a better grip on instances where someone vaults to the former. I now turn from defending this assumption to one last preliminary point. The vaults that I am interested in consist in inferential transitions

from a starting mental state to a concluding mental state. Though those transitions have to be executed by an inferential process, I am going to set aside issues about what conditions such a process must meet if the concluding state is to have some positive epistemic status, like being a justified belief or a knowledge state. I have in mind conditions such as these: the starting state must be about a proposition that either validly entails or strongly inductively supports the proposition that the concluding belief is about, the inference must be executed by a conditionally reliable belief-forming process, the person must have a background justified belief that the concluding state is well supported by the starting state, etc. Obviously for a given vault to be a genuine vault, any such process and background conditions must be met. My goal though is to identify candidate vaults, not genuine vaults; so I am not going to evaluate any proposed conditions of these kinds.
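The analyzability assumption of this section can be displayed as a schema. The notation is mine, introduced only for perspicuity (with G_S for "S's belief is Gettiered"), not the chapter's own formalism.

```latex
% JTNGB: the four conditions assumed individually necessary and
% jointly sufficient for knowledge in this chapter.
\[
  K_S(p) \;\Longleftrightarrow\;
  J_S(p) \,\wedge\, p \,\wedge\, \neg G_S(p) \,\wedge\, B_S(p)
\]
% Each necessary condition generates one candidate simple
% vault-to-knowledge: the starting state fails that condition,
% while the inferred conclusion-state satisfies all four.
```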

13.3  Simple Vaults

I now turn to cataloguing the candidate vaults-to-knowledge and vaults-to-sub-knowledge. Because my goal is to simply catalogue the candidate vaults, I will identify just one case to illustrate each vault-to-knowledge. I will not identify every kind of case that has been offered as examples of each kind of vault-to-knowledge. I am also going to set aside the various arguments, objections to those arguments, and other considerations that have been offered to adjudicate whether these cases involve genuine vaults. Recent attacks on KFK began in work by Ted Warfield and Peter Klein, who both argued against KFK by offering cases where someone begins from a belief that falls short of the knowledge mark because it is false, but then goes on to infer from this belief to a conclusion-belief which rises to the level of being knowledge. Here is one of Warfield’s cases: Counting with some care the number of people present at my talk, I reason: “There are 53 people at my talk; therefore my 100 handout copies are sufficient.” There are 52 people in attendance – I double counted one person who changed seats during the count. And yet I know my conclusion.11 To symbolize various vault patterns, I will use “→” to stand for an inference, where an inference is a particular psychological process that a person goes through as they transition from one mental state to another. This process is not one of mere association, nor is it any old instance in which one belief (or some other propositional attitude) causes some other belief. Rather it is one of inferring. There are interesting issues about the exact nature of this process, but I will not wade into those here.

220  Peter Murphy

Warfield’s case, and the other cases that he and Klein offer, involve an inferential vault from a false premise-belief about one claim to knowledge of a second claim. We can capture these vaults with this template:

K1: FBp → Kc (i.e. from a false belief that p, someone infers, and thereby comes to know c).12

This is our first candidate vault-to-knowledge. It involves an unequivocal improvement as the person moves from a premise-belief that is false to a conclusion-belief that is a knowledge state.

Let me flag two important points about K1, which also apply to the other simple vaults that I will identify in this section. First, K1 is itself just a template; it is silent on whether there are any possible vaults that fit it. So it needs to be distinguished from the claim that there are possible vaults that fit this template. Second, K1 covers inferences from a single premise to some conclusion. Once we understand these single-premise cases, we can go on to theorize about multi-premise cases. In this chapter, the focus will be on the simpler, single-premise cases.

Now let’s return to our candidate simple vaults. The same improvement that occurs in K1 might also occur in situations where the concluding belief fails to be a knowledge state. We can capture these cases with this template:

SK1: FBp → TBc (i.e. from a false belief in p, someone infers, and thereby comes to have a true belief that c; but they do not thereby come to know c).13

Though this won’t be germane for any of the purposes of this chapter, it is worth noting that there clearly are inferences that fit this template. On any reasonable inventory, SK1 needs to be classified as genuine.

In Murphy (2017), I argued for a second kind of vault-to-knowledge. There I offered a case where, I argued, someone begins from a belief that falls short of the knowledge mark because that belief is unjustified, but then infers from this belief to a conclusion-belief that is knowledge. Here is the case:

Fred is very busy. Each week, he has at least 20 scheduled meetings. Going only on his memory, he believes that he has a meeting with Mary next Wednesday at noon. However, Fred’s memories about the exact date and time of his meetings are often mistaken: when he has a meeting scheduled for a Tuesday, Wednesday, or Thursday, his memory is often off by a full day; and his memories about the exact time of his meetings are also frequently wrong. Fred knows these things about himself. But he also knows that he is very good at determining what week a scheduled meeting falls on when he infers this from one of his memory-based beliefs about that meeting’s exact date and time. In the present case, Fred infers from his memory-based belief that he has a meeting with Mary at noon next Wednesday to the conclusion-belief that he has a meeting with Mary sometime next week.14

I devoted much of that chapter to arguing that Fred’s premise-belief that he has a meeting with Mary at noon next Wednesday is unjustified, and that his conclusion-belief that he has a meeting with Mary sometime next week is justified. If these arguments work, then there are vaults-to-knowledge that fit this template:

K2: UJBp → Kc (i.e. from an unjustified belief that p, someone infers, and thereby comes to know, c).15

Just as there was a vault-to-sub-knowledge tied to K1, there is also a vault-to-sub-knowledge tied to K2. It involves a vault from an unjustified premise-belief to a justified conclusion-belief, where that conclusion-belief falls short of the knowledge mark. Any such vaults will take this form:

SK2: UJBp → JBc (i.e. from an unjustified belief in p, someone infers, and thereby comes to have a justified belief in c; but they do not thereby come to know c).16

Again, though it is not germane to anything that follows, it is quite controversial whether there are any inferences that fit the SK2 template.17

So far we have looked at candidate vaults that are focused on the truth and justification conditions on knowledge. What about candidate vaults for the other necessary conditions on knowledge? In my (2013), I offered a case in which, I argued, someone begins by assuming, rather than believing, some claim, and then infers from this to a belief that rises to the level of being a knowledge state. Here is that case:

Having no idea what day of the week Dana was born on, I assume that she was born on a Tuesday; I then infer that on this assumption, she was born on a weekday; then I end by concluding, and coming to believe, that if Dana was born on a Tuesday then she was born on a weekday.18

If my analysis of the case is correct, then there are vaults-to-knowledge of this form:

K3: NBp → Kc (i.e. from a state other than a belief about p, someone infers, and thereby comes to know, c).

Closely related is this template for a vault-to-sub-knowledge:

SK3: NBp → Bc (i.e. from some non-belief state in p, someone infers, and thereby comes to believe, c; but they do not thereby come to know c).19

As with SK1 and SK2, it won’t be germane to anything that follows here whether SK3 is genuine or not. Still, it looks like there are inferences which fit this template; and so any reasonable inventory of vaults should list SK3 as a genuine vault.

Last, Luzzi (2010) offers a case in which someone begins with a true justified belief which is Gettiered and therefore falls short of the knowledge mark, and then infers from this to a belief that rises to the level of knowledge. Here it is:

Unbeknownst to Ingrid, her new and only housemate Humphrey is something of an epistemic prankster. One evening, while Ingrid is in the kitchen cooking dinner, Humphrey mischievously decides to mislead her as to his whereabouts in the house. He therefore turns on the TV in the lounge so that she will believe, as she typically would, that he is in the lounge watching TV. However, also unbeknownst to her, Humphrey is agoraphobic, and hence he leaves the house under very few circumstances; any circumstance in which he would leave the house (e.g. because of a raging fire) is undoubtedly one in which Ingrid would be aware that he is leaving the house. Suppose that Humphrey subsequently momentarily forgets about his ploy (something quite out of character for him), and accidentally wanders for a few seconds back into the lounge. During that interval, Ingrid forms the belief that Humphrey is in the lounge on the basis of hearing the TV and by relying (whether implicitly or explicitly) on this inductive argument: “(A) The TV is on and I didn’t turn it on. (B) When this happens, Humphrey is almost always in the lounge. So (1) Humphrey is in the lounge.” She then carries out the following valid and sound deduction:

(1) Humphrey is in the lounge.
(2) If Humphrey is in the lounge, then Humphrey is in the house.
Therefore, (3) Humphrey is in the house.20

If Luzzi’s analysis of this case is correct, then there are vaults-to-knowledge that fit this template:

K4: GBp → Kc (i.e. from a Gettiered true, justified belief that p, someone infers, and thereby comes to know, c).

An important difference enters here when we try to identify a vault-to-sub-knowledge that stands to K4, as the earlier vaults-to-sub-knowledge stood to the earlier vaults-to-knowledge. When we try to do this, the result is:

SK4: GBp → NGBc (i.e. from a Gettiered belief in one claim, p, someone infers, and thereby comes to believe, without being Gettiered, c; but they do not thereby come to know c).

The problem is that the premise-belief can be Gettiered only if it meets the other three conditions on knowledge (since Gettiered beliefs must be true justified beliefs). But then, if the conclusion-belief is to fall short of the knowledge mark and yet not be Gettiered, it must fail to meet one of those other three conditions. It follows, then, that the vault is not unequivocal. Since SK4 is not a template that covers unequivocal vaults, I set it aside.

We have arrived at an inventory that contains seven simple vaults. Each of the four vaults-to-knowledge lines up with one of the necessary conditions on knowledge in the JTNGB assumption. In fact, the JTNGB assumption has probably structured the recent search for different kinds of counterexamples to KFK. The other three vaults in the inventory are candidate vaults-to-sub-knowledge. With this part of the inventory done, let’s consider what extended vaults can be put together from this inventory of simple vaults.
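Since the inventory will be recombined in the next two sections, it may help to record it explicitly. Here is a minimal Python sketch of the bookkeeping; the string labels are just informal shorthands for the conditions named in the templates above (FB = false belief, UJB = unjustified belief, NB = non-belief state, GB = Gettiered true justified belief, TB/JB/B = true/justified/mere belief, K = knowledge), not part of the chapter’s formal apparatus.

```python
# The seven simple vault templates of Section 13.3, recorded as
# (starting state, concluding state) pairs of informal state labels.
SIMPLE_VAULTS = {
    "K1":  ("FB",  "K"),   # false belief        -> knowledge
    "K2":  ("UJB", "K"),   # unjustified belief  -> knowledge
    "K3":  ("NB",  "K"),   # non-belief state    -> knowledge
    "K4":  ("GB",  "K"),   # Gettiered belief    -> knowledge
    "SK1": ("FB",  "TB"),  # false belief        -> true belief (not knowledge)
    "SK2": ("UJB", "JB"),  # unjustified belief  -> justified belief (not knowledge)
    "SK3": ("NB",  "B"),   # non-belief state    -> belief (not knowledge)
}

# Four vaults-to-knowledge and three vaults-to-sub-knowledge, as in the text.
to_knowledge = [name for name, (_, end) in SIMPLE_VAULTS.items() if end == "K"]
to_sub_knowledge = [name for name in SIMPLE_VAULTS if name not in to_knowledge]

assert to_knowledge == ["K1", "K2", "K3", "K4"]
assert to_sub_knowledge == ["SK1", "SK2", "SK3"]
assert len(SIMPLE_VAULTS) == 7
```

Nothing philosophical rides on this encoding; it simply fixes a vocabulary for tabulating the extended sequences below.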

13.4  Extended Vaults That End in Knowledge

Our next focus is on extended episodes of reasoning that involve multiple inferences and multiple vaults. Again, the focus is on extended vaults that have simple unequivocal vaults as their component vaults. Which simple vaults might show up in these extended sequences of vaults? And in what order might they show up?

Let’s begin by considering whether any simple vault, or any set of simple vaults, precludes any other simple vaults. This is important because if there are any inconsistent sets of vaults, then no extended sequences of vaults that involve all of the simple vaults in that set are possible, and so we can automatically rule these out as candidate vaults. There are no such sets. Even on the view that there are instances of all seven simple vaults, no contradiction can be derived. To say that there are instances of a vault is just to say that sometimes inferential reasoning allows a person to ratchet up on one of the four conditions as they move from their starting state to their conclusion state, where this can be done either in the context of coming to know a conclusion (as with the K vaults) or in a context where someone comes to believe, but not know, a conclusion (as with the SK vaults). But clearly none of these ratchets, or sets of these ratchets, precludes any others. So there are no inconsistent sets of vaults.

Still there are two kinds of constraints that exclude some extended vaults from candidacy. One is the restriction to unequivocal vaults: this excludes extended patterns of reasoning in which someone moves from a starting state that satisfies one of the four conditions to a conclusion state that does not satisfy that condition. And as we are about to see, there are also constraints that arise from the order in which the simple vaults are sequenced.

To see how these constraints work, let’s look in some detail at the extended sequences that end with a vault-to-knowledge that fits the K1 template. Recall that K1 vaults have this form: FBp → Kc. Any simple vault that might immediately precede FBp → Kc must have, as its ending state, a false belief, so that it can occupy the FBp slot in K1 – this is an ordering constraint. This immediately rules out the other vaults-to-knowledge in our inventory since their ending states are knowledge states, and the factivity of knowledge entails that they are not false beliefs. It also rules out SK1 since its ending state is a true belief (recall SK1 is FBp → TBc). That leaves SK2 and SK3.

Consider then a sequence of two inferences, in which an SK2 vault is followed by a K1 vault. A first formulation of this two-step sequence is UJBq → FBp → Kc. This won’t do though, since we need to make it explicit that the first vault in this sequence involves an improvement vis-à-vis justification (otherwise it isn’t an SK2 vault). So we need to move to:

EK11: UJBq → FJBp → Kc

Now let’s try to extend this sequence further by adding another vault to the front of it. This requires prefacing EK11 with a vault that ends in an unjustified belief. Only SK3 allows us to do this. Putting it in front of EK11 yields this extended vault:

EK12: NBr → UJBq → FJBp → Kc

This is as far as this sequence can be extended, since none of the vaults in our inventory ends in a non-belief state that can fit the NBr slot. What about the other sequence of two inferences, where SK3 is followed by K1? Recall that SK3 is NBp → FBc. This is the sequence:

EK13: NBq → FBp → Kc

If we try to extend this sequence further by putting another vault in front of it, we run into the same problem that we encountered with EK12 and the NBr slot. So EK13 cannot be extended any further. And so this is it for attempts to build sequences that end in a K1 vault. It has yielded three candidate extended vaults that end in knowledge: EK11, EK12, and EK13.

Similar exercises can be done with K2, K3, and K4 as the final vaults. Sparing you the details, these are the candidate extended sequences that end in K2:

EK21: FUJBq → TUJBp → Kc
EK22: NBr → FUJBq → TUJBp → Kc
EK23: NBq → UJBp → Kc

There are no extended vaults that end in a K3 vault, since K3 is NBp → Kc, which runs into the NB obstacle. These are the candidate extended sequences that end in K4:

EK41: FBq → TJGBp → Kc
EK42: FUJBr → FJBq → TJGBp → Kc
EK43: NBr → FBq → TJGBp → Kc
EK44: NBs → FUJBr → FJBq → TJGBp → Kc
EK45: UJBq → TJGBp → Kc
EK46: FUJBr → TUJBq → TJGBp → Kc
EK47: NBr → UJBq → TJGBp → Kc
EK48: NBs → FBr → UJBq → TJGBp → Kc

In total, there are 14 candidate extended vaults. Six are sequences of just two simple vaults, six are sequences of three vaults, and two are sequences of four vaults.
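These counts can be checked mechanically. Here is a small Python sketch that records each catalogued chain as a list of informal state labels (EK1_1 for EK11, and so on; the labels are shorthands for the templates above, not part of the chapter’s apparatus) and verifies the tallies just stated.

```python
from collections import Counter

# The 14 candidate extended vaults that end in knowledge (Section 13.4),
# each written as the chain of states it passes through.
EXTENDED_K = {
    "EK1_1": ["UJB", "FJB", "K"],
    "EK1_2": ["NB", "UJB", "FJB", "K"],
    "EK1_3": ["NB", "FB", "K"],
    "EK2_1": ["FUJB", "TUJB", "K"],
    "EK2_2": ["NB", "FUJB", "TUJB", "K"],
    "EK2_3": ["NB", "UJB", "K"],
    "EK4_1": ["FB", "TJGB", "K"],
    "EK4_2": ["FUJB", "FJB", "TJGB", "K"],
    "EK4_3": ["NB", "FB", "TJGB", "K"],
    "EK4_4": ["NB", "FUJB", "FJB", "TJGB", "K"],
    "EK4_5": ["UJB", "TJGB", "K"],
    "EK4_6": ["FUJB", "TUJB", "TJGB", "K"],
    "EK4_7": ["NB", "UJB", "TJGB", "K"],
    "EK4_8": ["NB", "FB", "UJB", "TJGB", "K"],
}

assert len(EXTENDED_K) == 14
# The number of simple vaults in a chain is the number of arrows,
# i.e. one less than the number of states it passes through.
lengths = Counter(len(states) - 1 for states in EXTENDED_K.values())
assert lengths == Counter({2: 6, 3: 6, 4: 2})  # six two-vault, six three-vault, two four-vault chains
```

This is bookkeeping only: it confirms that the catalogue contains fourteen chains with the six/six/two length distribution, not that any of them is a genuine vault.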

13.5  Extended Vaults That End in Sub-Knowledge

Now let’s turn to extended sequences that end in a state that falls short of knowledge. The building blocks for these sequences are limited to the three vaults-to-sub-knowledge. No vaults-to-knowledge can show up in these extended sequences, since the final state must fall short of knowledge and there can be no equivocal vaults on the way to the final state.

Let’s have a close look at sequences that end in SK1. Recall that vault is FBp → TBc. When we preface this vault with SK2, which is UJBp → JBc, we get:

ESK11: UJBq → JFBp → JTBc

ESK11 can be extended further by prefacing it with SK3, which recall is NBp → Bc. This yields:

ESK12: NBr → UJBq → JFBp → JTBc

Turning to extended sequences that end with SK1, the pattern in which SK3 precedes SK1 is this:

ESK13: NBq → FBp → TBc

Here again the NB obstacle imposes itself, so this pattern cannot be extended any further. I will just list the extended vaults that end with SK2:

ESK21: FUJBq → TUJBp → TJBc
ESK22: NBr → FUJBq → TUJBp → TJBc
ESK23: NBq → TUJBp → TJBc

And there are no extended vaults that end in SK3, since it is NBp → Bc, where the familiar NB obstacle imposes itself. So to our inventory of 14 candidate extended vaults that end in knowledge, we can add these six candidate extended vaults to sub-knowledge. This makes for a total of 20 candidate extended vaults.
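This tally can be recorded and checked in the same bookkeeping style. A brief Python sketch (state labels are informal shorthands for the templates above, with ESK1_1 for ESK11, and so on):

```python
# The six candidate extended vaults that end in sub-knowledge
# (Section 13.5), each written as the chain of states it passes through.
EXTENDED_SK = {
    "ESK1_1": ["UJB", "JFB", "JTB"],
    "ESK1_2": ["NB", "UJB", "JFB", "JTB"],
    "ESK1_3": ["NB", "FB", "TB"],
    "ESK2_1": ["FUJB", "TUJB", "TJB"],
    "ESK2_2": ["NB", "FUJB", "TUJB", "TJB"],
    "ESK2_3": ["NB", "TUJB", "TJB"],
}

assert len(EXTENDED_SK) == 6
# None of these chains terminates in knowledge ("K").
assert all(chain[-1] != "K" for chain in EXTENDED_SK.values())
# Together with the 14 knowledge-ending chains of Section 13.4,
# the full inventory contains 20 candidate extended vaults.
assert len(EXTENDED_SK) + 14 == 20
```

Again, this confirms only the arithmetic of the catalogue; whether any of the twenty candidates is a genuine vault is the substantive question taken up next.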

13.6  From Simple Vaults to Extended Vaults

A cluster of related questions has been looming. How much, epistemologically speaking, can inference do for us? Is there a limit to the number of conditions on knowledge that it can help us to vault? Are there cases in which someone infers from a supposition, which is neither believed, nor true, nor justified, nor perhaps even non-Gettiered, to knowledge of a conclusion, and hence to a state that meets all four of these conditions?21 And if this can happen across a single inference, why can’t each of these four simple vaults be made, one at a time, across a sequence of four inferences, as captured in the two most extended candidate patterns (namely, EK44 and EK48)?

The answers to these questions are obviously determined by which extended vaults are genuine. But perhaps which extended vaults are genuine is determined by which simple vaults are genuine. The claim that they are is equivalent to this principle:

Universal Vault Building (UVB): For any set of simple vaults, if all of the vaults in that set are genuine vaults, then each candidate extended vault which is made up of only those simple vaults is also a genuine vault.22

According to UVB, accepting a set of simple vaults requires one, on pain of inconsistency, to accept all of the extended vaults that have those (and only those) simple vaults as their component vaults.

UVB has important implications for the question about how much inference can do for us. If UVB is true, then successfully establishing

that some simple vaults are genuine is enough to establish that all of the extended vaults that are made up of those simple vaults are also genuine. Things cut both ways though. UVB might also be used to show that some simple vaults are not genuine. For if UVB is true, then arguments that some extended vault is not genuine also count against the conjunction of the simple vaults that figure into that extended vault – at least one of those simple vaults must not be a genuine vault. On the other hand, if UVB is false, then things are more complicated: establishing that a number of simple vaults are genuine does not suffice to show that any extended vaults are genuine; and showing that some extended vault is not genuine does not impugn any of the simple vaults that figure into that extended vault.23

UVB is false. To see this, remember that for a simple vault to be genuine, there just has to be at least one possible instance of that vault; but that clearly does not require that any of those possible instances figure into an extended vault. For this reason, some simple vaults might be genuine vaults, and yet none of the candidate extended vaults assembled from those vaults be genuine. In fact, one can consistently accept all seven simple vaults and reject all twenty candidate extended vaults.

Though there is no logical inconsistency in accepting a set of simple vaults, but rejecting some (or all) of the extended vaults that are composed of those simple vaults, what could the rationale be for holding this combination of assessments? For example, take an epistemology of inference that includes two simple vaults, K1 and SK2, but does not include the extended vault in which SK2 is followed by K1.24 If the denial of UVB is added to this package, two pressing questions arise. One is this question: for any instance of SK2 that this view certifies as genuine, why can’t it be followed by a genuine K1 vault? What obstacle precludes that extended vault from being genuine? The second is a similar question: for any instance of K1 that this view certifies as genuine, why can’t it be preceded by a genuine SK2 vault? What obstacle precludes the possibility of this being a genuine extended vault? An epistemology of inference that denies UVB must identify these obstacles and explain how they preclude the assembly of some extended vaults from the stock of simple vaults that have been deemed genuine. Further work along these lines will help us to adjudicate UVB, so that we can determine which package deals of simple vaults and extended vaults are candidate package deals.

This is just one example of where the inquiry might go next. This path ahead, and other paths ahead, are better lit, though, now that we have a taxonomy of candidate vaults. So while there is more work to be done to fill out this part of the epistemology of inference, we are now better equipped to do that work.

I hope to have brought some order to the issue of what kinds of inferential vaults are possible. My main contribution is the taxonomy of candidate vaults that I provided. By using that taxonomy, we can ensure

that a proposed epistemology of inference delivers a verdict on each kind of candidate vault and that the supporting arguments for those verdicts are not guilty of overgeneralizing. The taxonomy will also help us assess a cluster of moves that might be made to bolster views that say many (or all) of the candidate vaults are not genuine. These include simplicity arguments, as well as overfitting charges and debunking arguments. And last, the taxonomy will help us be more aware of vaults-to-sub-knowledge and extended vaults, so that these vaults are also integrated into our epistemologies of inference.

Notes

1. Where a vault-to-knowledge involves a move from not knowing to knowing by way of making an inference, the denial of a closure principle for knowledge allows for a move in the reverse direction, from knowing to not knowing by way of making an inference that is valid and otherwise epistemically good. The conjunction of KFK and a closure principle for knowledge prohibits both of these moves. For reasons against a closure principle for knowledge that are based on claims about extended reasoning, see Lasonen-Aarnio (2008).
2. See Section 13.3 for references to work by opponents of KFK. Defenders of KFK include Borges (2020), Montminy (2014), and Schnee (2015).
3. Though the exact ambit of inferred beliefs is contentious, epistemologists continue to produce new and interesting arguments for the view that even simple perceptual beliefs about one’s immediate surroundings, like your belief that there is a book in front of you now, are inferential. See McGrath (2017) for arguments that such a belief can be justified (or be an item of knowledge) only if it is inferred from a justified belief (or knowledge) about what a book looks like.
4. I will review some of these cases in Section 13.3.
5. I restrict this claim to vaults-to-knowledge because, as we will see below, the case for there being some vaults-to-sub-knowledge is very strong.
6. For more on this general methodological point, see Hitchcock and Sober (2004); for an application to epistemology, see Weinberg (2017).
7. See Williamson (2020).
8. See Williamson (2000), especially Chapters 1 and 3.
9. See Williamson (2017).
10. See Williamson (2000).
11. Warfield (2005, pp. 407–408). See Klein (2008) for other cases.
12. One might wonder whether the starting state in K1 meets the non-Gettiered condition. The starting state in Warfield’s case is not Gettiered, since it is not a JTB that has any of the familiar fluky Gettierizing properties.
13. This can be refined as follows. If SK1, or any other vault, is to be a vault-to-sub-knowledge, the concluding state needs to fall short of the knowledge mark. For SK1, this means that the concluding state needs to be filled out as either a Gettiered belief or as an unjustified belief. But for it to be a Gettiered belief and still be an unequivocal vault, the starting state will also have to be a Gettiered belief. However, this is not possible since the starting state is a false belief and false beliefs cannot be Gettiered. The starting state can be filled in as an unjustified belief though. Doing so yields FUJBp → TUJBc. This refinement is not necessary for any of my purposes; so I will work with the simpler template that is in the text.

14. Murphy (2017, pp. 6–7).
15. Two things are worth noting. First, if I am right that Fred’s premise-belief is both unjustified and false, and that his conclusion-belief is a knowledge state, then this case involves two vaults, one from an unjustified belief to a justified belief and the other from a false belief to a true belief. This would make this an instance of both K2 and K1. Second, K2 is to be understood in a theory-neutral way, which means understanding “unjustified” in K2 as simply a placeholder for the condition, whatever it might turn out to be, which is necessary for knowledge and which, in conjunction with true belief and a Gettier-proofing condition, is sufficient for knowledge. Consequently “unjustified” in K2 may not be an internalist notion.
16. Like SK1, SK2 can be refined. To be a template for a vault-to-sub-knowledge, the concluding state in SK2 needs to fall short of the knowledge mark by failing to meet a condition on knowledge. For SK2, this means that the concluding state, JBc, must be filled out as either a false belief or a Gettiered belief. However, it cannot be filled out as Gettiered and still be an unequivocal vault, since the starting state is not a belief and therefore cannot be filled out as Gettiered. But the starting state can be filled in as a false belief, which yields FUJNGBp → FJNGBc. Since this refinement is not necessary for any of my purposes, I will work with the simpler template that is in the text.
17. See Luzzi (2019, pp. 103–108) for reasons to think that Fred’s premise-belief is not justified.
18. Murphy (2013, p. 312).
19. SK3 can also be refined. To be a vault-to-sub-knowledge, the concluding state in it needs to be filled in to guarantee that it falls short of being a knowledge state. For SK3, this means that the concluding state, Bc, is filled out as either a false belief, an unjustified belief, or a Gettiered belief. It can’t be filled out as Gettiered and SK3 still be an unequivocal vault, since the starting state is not a belief and therefore cannot be filled out as Gettiered. However, the concluding state can be filled out as either a false belief or an unjustified belief, as long as the starting state is filled out as either false or unjustified. It can have a false proposition as its object, which means that at least FNBp → FUJBc. It is less clear that it can be filled in as being unjustified in the same sense that beliefs are. Here too though these refinements are not necessary for any of my purposes; so I will work with the simpler template in the text.
20. Luzzi (2010, pp. 674–675).
21. See the earlier case of an alleged genuine vault that fits the K3 template.
22. Something close to the converse of UVB is true, though somewhat trivially. This is the claim that if an extended vault is genuine, then each simple vault that figures into it must also be genuine. This follows from the simple fact that each of the simple vaults has at least one genuine instance, namely, the instance that is in that genuine extended vault.
23. If UVB is false, it does not follow that simple and extended vaults have to be evaluated entirely separately. That only follows if the converse of UVB is also false. See the previous footnote for an argument that the converse of UVB is true.
24. This is vault EK11, which is UJBq → FJBp → Kc.

References

Borges, R. (2020). Knowledge from knowledge. American Philosophical Quarterly, 57, 283–298.
Hitchcock, C., & Sober, E. (2004). Prediction versus accommodation and the risk of overfitting. The British Journal for the Philosophy of Science, 55, 1–34.
Klein, P. (2008). Useful false beliefs. In Q. Smith (Ed.), New essays in epistemology (pp. 25–61). Oxford University Press.
Lasonen-Aarnio, M. (2008). Single premise deduction and risk. Philosophical Studies, 141, 157–173.
Luzzi, F. (2010). Counter-closure. Australasian Journal of Philosophy, 88, 673–683.
Luzzi, F. (2019). Knowledge from non-knowledge: Inference, testimony, and knowledge. Cambridge University Press.
McGrath, M. (2017). Knowing what things look like. Philosophical Review, 126, 1–41.
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44, 463–475.
Murphy, P. (2013). Another blow to knowledge from knowledge. Logos and Episteme, 4, 311–317.
Murphy, P. (2017). Justified belief from unjustified belief. Pacific Philosophical Quarterly, 98, 602–617.
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12, 53–74.
Warfield, T. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.
Weinberg, J. (2017). Knowledge, noise, and curve-fitting: A methodological argument for JTB? In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 253–272). Oxford University Press.
Williamson, T. (2000). Knowledge and its limits. Oxford University Press.
Williamson, T. (2017). Model-building in philosophy. In R. Blackford & D. Broderick (Eds.), Philosophy’s future: The problem of philosophical progress (pp. 159–171). Wiley.
Williamson, T. (2020). Suppose and tell. Oxford University Press.

14 Entitlement, Leaching, and Counter-Closure

Federico Luzzi

14.1 Introduction

Crispin Wright (2004, 2014) has articulated and defended the view that by incorporating non-evidential entitlements into our theory of knowledge, a novel and satisfactory response to key skeptical challenges comes into view. Crucial to his position is the thesis that various regions of thought are underpinned by ‘cornerstone’ propositions. According to Wright, cornerstone propositions (such as ‘There is an external world’) are those for which warrant is antecedently required in order for non-cornerstone or ‘ordinary’ beliefs in that region (such as ‘I have hands’) to enjoy the epistemic support of experiential evidence (such as the appearance as of a hand).

Wright’s view has attracted substantial discussion and criticism. Here I focus on one of these criticisms, the so-called leaching problem. I argue that Wright’s and his critics’ diagnosis of this problem as relying essentially on a Closure principle is incorrect, and that a Counter-Closure-style principle is instead at issue. This observation, coupled with the problematic nature of Wright’s most recent proposed solution to leaching, suggests an alternative response. It also prompts reconsideration of the relationship between the leaching problem and the alchemy problem, which Wright (incorrectly) conceives of as ‘dual’ problems admitting of a unique solution.

The chapter proceeds as follows: in Section 14.2, I provide an overview of Wright’s position, including a presentation of the elements of his view required for my critique. In Section 14.3, I describe the problems of alchemy and leaching as understood by Wright and his critics and present Wright’s proposed solution to these problems.
Section 14.4 is the key argumentative section: there I argue (i) that Wright’s solution to the leaching problem is not ultimately satisfactory; (ii) that the leaching problem has been mischaracterized by Wright and his critics as turning essentially on a Closure principle, and that it turns instead on a Counter-Closure-style principle; and (iii) that once this point is recognized, a better solution to leaching is achievable, one that (a) flatly rejects the Counter-Closure-style principle that essentially drives leaching and that (b) motivates this rejection by way of independently plausible considerations in the extant literature on Counter-Closure.

DOI: 10.4324/9781003118701-20

14.2 Wright’s Cornerstone-Based Epistemology: An Overview

The following is an arguably plausible epistemic point: we can claim warrant for the routine beliefs we hold in various regions of thought (e.g. that there is a tree in the garden, that Sarah is feeling happy, that the next penguin I observe will be flightless) only if certain foundational propositions enjoy good epistemic standing: the latter category includes propositions such as the proposition that there is an external world at all; that other minds exist; and that regularly observed correlations will extend into the future. Thus, for instance, if one lacked warrant for the belief that an external world exists, it would be wrong to maintain that one’s belief based on casual observation that there is a tree in the garden is warranted.

Following Crispin Wright, let’s call these foundational propositions cornerstones. We can construe cornerstones in a given region as propositions such that lack of warrant for them would also result in a lack of warrant for any belief in the run-of-the-mill propositions of that region.1 In addition to the examples mentioned, cornerstones include among their ranks the denials of scenarios of radical deception (e.g. that I am not a bodiless brain-in-a-vat whose experiences in virtually no way reflect the actual external environment; that I’m not the victim of a powerful evil demon who systematically deceives me), the proposition that the world did not come into existence just a few moments ago replete with apparent evidence of a much longer history, and the proposition that speakers can generally be trusted to speak truly.

Cornerstone propositions thus defined not only bear the particular relation to routine beliefs just noted but also play an enabling role in allowing the evidence we typically adduce for those beliefs to constitute evidence for those beliefs.
For example, a lack of warrant for the proposition that I am not systematically deceived by an evil demon regarding the external world plausibly implies a lack of warrant for the belief that there is a tree in the garden, and it also disables the evidential support of the appearance as of a tree for my belief that there is a tree: it seems plausible that without some kind of assurance (warrant) that I am not so radically deceived, it is illegitimate for me to take that appearance as providing sufficient (or indeed any) support for my belief that there is a tree, when things would look exactly the same if the skeptical hypothesis were true. Mutatis mutandis, the same goes for the other cornerstones mentioned.

Wright helpfully illustrates this situation using the ‘I-II-III’ argumentative template, an example of which is this:

I My experience is as of hands held up in front of me
II I have hands
Therefore,
III I am not a handless brain-in-a-vat

(I) is a proposition that according to intuition ought to support the everyday proposition (II). Proposition (III) is a cornerstone of the region. That there is something unsatisfactory about this argument is a widely acknowledged point. Wright’s diagnosis of the defect of this argument is that (I) cannot provide warrant for (II) unless (III) is antecedently warranted: in the absence of prior warrant for thinking that I am not handlessly envatted, an appearance as of hands cannot provide evidential warrant for the belief that I have hands.

This point plays a key role in Wright’s reconstruction of the external-world skeptical argument. He diagnoses the skeptic as drawing essentially from this observation in order to mount their threat, as follows: since (I) can constitute evidence for (II) only if one enjoys antecedent warrant for (III), and since proposition (III) can only be warranted by accumulation of evidence of the kind provided by (I) for (II), then there is no non-circular way of obtaining warrant for (III). Therefore, warrant for (III) is not legitimately achievable and, owing to the cornerstone nature of (III), warrant for (II), and indeed for all other everyday propositions in the region of thought, is not to be had.

Wright’s response to this construal of the skeptical argument is partially concessive. It involves agreeing with the skeptic’s first point that (III) cannot be warranted evidentially on the basis of reasoning from type-II propositions. But crucially, Wright argues that the skeptic’s transition from the impossibility of evidential warrant for (III) to the impossibility of warrant in general for (III) is overly quick.
This transition can be resisted—as Wright does—if the case can be made for the existence of non-evidential warrant for type-III propositions. If type-III propositions enjoy non-evidential warrant that is in place independently of and, crucially, antecedently to any accumulation of evidential warrant for type-II propositions on the basis of type-I propositions, then the skeptical conclusion is blocked. These non-evidential warrants can enable type-I propositions to provide evidential warrant for type-II propositions, thus rescuing our external-world warranted beliefs and knowledge from the skeptical threat.

But what lends legitimacy to the idea that such non-evidential warrants are indeed in place? In other words, how can Wright avoid the charge that the inclusion of non-evidential warrants into his epistemology is the mere product of philosophical wishful thinking? Certainly, it would be nice to believe that non-evidential warrants can fill precisely the anti-skeptical brief outlined by Wright. But an argument is clearly needed. Here, Wright (2004) appeals to several considerations to defend four kinds of entitlement, two of which are salient for our purposes.

It will be useful to start by considering how Wright handles the case of inductive skepticism, which challenges us to find a non-circular justification for the cornerstone proposition that nature is uniform—or at least, uniform enough to allow for ‘All observed Fs are Gs’ to constitute evidence for ‘All Fs are Gs’, where F and G are observable natural properties and when observations are numerous and relevant. The skeptical problem of induction, in a nutshell, is that evidence for this cornerstone seems only accruable via induction. And without antecedent warrant for the proposition that nature is uniform, no legitimate inductive argument purporting to provide warrant for the uniformity of nature can be mounted. Thus, legitimately gaining warrant for the proposition that nature is uniform seems impossible; and since this proposition is a cornerstone, it also seems impossible to gain warrant for our beliefs in generalizations such as ‘All Fs are Gs’.

Wright’s solution is to appeal to the idea, inspired by Reichenbachian considerations, that trusting that nature is uniform is a dominant strategy (2004, 2014). If we are interested in gaining true and reasonably held beliefs about the world, induction is our best bet. Sure, nature might actually turn out not to be uniform—in which case our inductively obtained beliefs are likely to be false more often than not.
But the alternative—not to trust inductive reasoning—leaves us in a position no better: that of having no true beliefs (if we decide not to reason inductively at all) or of having unreasonably held beliefs (if we decide to reason inductively without trusting the procedure that generates those beliefs), regardless of whether nature is uniform.2 In other words, the strategy of trusting that nature is uniform is in all circumstances no worse and in some circumstances better than the strategy of not doing so and is thereby dominant. When trusting a cornerstone is a dominant strategy and we have no reason to doubt the cornerstone, then according to Wright we enjoy non-evidential warrant for the cornerstone.

Next, let’s consider a second type of argument in favor of entitlement, which Wright relies on to address Cartesian skepticism about the external world. Here, Wright’s rationale for trusting the cornerstones, which in this case correspond to the denials of large-scale skeptical hypotheses, is slightly different. The Cartesian skeptical worry can be generated by observing that it seems impossible to acquire any evidence for the belief that we are not victims of a suitably described skeptical scenario according to which we are cognitively detached from our environment (such as a vivid dream, handless envatment or deception on the part of an all-powerful demon). Any purported process of evidence-acquisition of this kind (e.g. careful visual inspection of one’s hands or a failed attempt at pinching oneself awake) will be hostage to the concern that the execution of the process itself is part of the skeptical scenario. So the process of establishing, for instance, that one is not a handless brain-in-a-vat can be successful only if one has antecedent warrant for thinking one is not so envatted—which is precisely what our process of evidence acquisition was trying to establish. Thus, evidence against handless envatment seems beyond our reach. From this point, the skeptic concludes that we lack warrant for thinking we are not so envatted, and thus that we can never acquire warrant for the everyday beliefs about our surroundings for which non-envatment constitutes, on Wright’s picture, a cornerstone.

Wright’s response, again, is to claim that we enjoy non-evidential warrants for the denials of skeptical hypotheses. He notes that in virtually all our cognitive projects we rely on what he calls ‘authenticity conditions’, defined as ‘any condition doubt about which would rationally require doubt about the efficacy of the proposed method of executing the project, or about the significance of its result, irrespective of what that result might be’ (2014, p. 215).3 For example, if I aim to establish the number of people on my bus by counting them, my authenticity conditions include that my eyesight is reliable. If I then wanted to be certain that my eyesight was reliable, I would engage in a different cognitive project whose authenticity conditions might include that the optician is trustworthy, that their test lenses are accurately calibrated, etc.
However, according to Wright, the existence of authenticity conditions which have not been checked does not engender widespread skepticism regarding the outputs of the associated cognitive project, because the warrant enjoyed by these outputs need not (and does not) rest on having thoroughly checked the authenticity conditions of that project; instead cognitive projects are always undertaken with a level of risk:

[…] we should view each and every cognitive project as irreducibly involving elements of adventure—I have, as it were, to take a risk on the reliability of my senses, the conduciveness of the circumstances, etc., much as I take a risk on the continuing reliability of the steering, and the stability of the road surface every time I ride my bicycle. For as soon as I grant that I ought—ideally—to check the [authenticity conditions] of a project, even in a context in which there is no particular reason for concern about them, then I should agree pari passu that I ought in turn to check the [authenticity conditions] of the check—which is one more project after all—and so on indefinitely, unless at some point I can foresee arriving at [authenticity conditions] all of which are somehow safer than those of the initial project. (2004, pp. 190–191)

Our non-evidential entitlements for cornerstones, then, rest on the following idea:

If a cognitive project is indispensable, or anyway sufficiently valuable to us—in particular, if its failure would at least be no worse than the costs of not executing it, and its success would be better—and if the attempt to vindicate (some of) its [authenticity conditions] would raise [authenticity conditions] of its own of no more secure an antecedent status, and so on ad infinitum, then we are entitled to—may help ourselves to, take for granted—the original [authenticity conditions] without specific evidence in their favour. (p. 192)

In other words, all cognitive projects that involve investigation of our environment rely on authenticity conditions. The skeptic observes that these ultimately rest on cornerstones that include the denial of skeptical hypotheses, and from our inability to acquire evidence in favor of their denials, the skeptic draws the conclusion that we lack warrant for them. Wright, instead, uses precisely these grounds to conclude that we must enjoy non-evidential warrant to accept them.

‘Accept’ here is crucial. For Wright prudently shies away from the overly bold claim that subjects have warrant for believing that cornerstones are true, opting instead for the weaker epistemic attitude of acceptance, which is consonant with the ideas that (i) belief is tied intrinsically to truth and (ii) only evidence bears on truth, whereas (iii) warrant for cornerstones is non-evidential. Thus, Wright’s distinctive move to conceive warrant as coming in either of two kinds—evidential and non-evidential—is mirrored in the kind of epistemic attitude properly taken toward the propositions enjoying either kind of warrant. In other words, Wright thinks that the warrant in favor of everyday, type-II propositions is evidential and that the accompanying appropriate epistemic attitude is one of belief.
The kind of considerations Wright offers in favor of non-evidential warrant for cornerstones suggests to him, by contrast, that non-evidential warrant licenses us in merely accepting or trusting such propositions, where this attitude implies acting in many ways as if they are true, without outright believing them to be true.

One point that will be important in what follows is that Wright’s concession that accepting the denials of skeptical hypotheses carries some risk is not restricted only to these cornerstones. Very plausibly, it also encompasses the acceptance of other cornerstones, including (crucially, for reasons that will become clear) the cornerstone that nature is uniform: for just as when embarking on a perceptual cognitive project, one must take a risk on the reliability of one’s senses and the cooperation of the environment—that is, just as one hasn’t checked these authenticity conditions and to a good degree takes them on trust—so when embarking on inductive reasoning one takes a risk on the uniformity of nature. Even though trusting that nature is uniform may be a dominant strategy in our endeavor to gain knowledge of the world, that point alone does not provide evidence for its uniformity. If it did, there would be no reason for Wright to claim that our warrant for the uniformity of nature is non-evidential. Thus, our trusting that nature is uniform similarly entails taking on a risk.4

14.3 Threats to Epistemic Asymmetries across Knowable Entailment: Alchemy and Leaching

Wright’s cornerstone-based epistemology has attracted several criticisms. Among these, two influential objections trade on the epistemic asymmetries across knowable entailment that at first blush result from his view. To see this, let us focus our attention on everyday (type-II) and cornerstone (type-III) propositions. As mentioned, on Wright’s view, we can believe everyday propositions and we enjoy evidential warrant for them (stemming from type-I propositions). By contrast, we lack evidential warrant for cornerstones; nonetheless we have non-evidential warrant which allows us to accept and trust them, where acceptance and trust are epistemic attitudes weaker than belief.

Wright’s elegantly concessive response to the skeptic apparently relies on this difference in our attitudinal stance toward everyday and cornerstone propositions. For if we thought it theoretically legitimate, contra Wright’s stated view, to extend the attitude we hold toward everyday propositions to cornerstone propositions, so that we not only accept but also legitimately believe cornerstones (where legitimate belief requires evidential warrant), then Wright would no longer be in a position to concede to the skeptic the point that we cannot acquire evidentially warranted belief in the denials of skeptical hypotheses. On the other hand, if we were to equalize in the other direction, by saying that we have merely non-evidential warrant both for cornerstone propositions and everyday propositions, then Wright’s overall view would be robbed of its anti-skeptical force: for it would be unclear what help it would be to vindicate our right merely to accept or trust everyday propositions in the face of the skeptical challenge, which is commonly understood as urging us to articulate a defense of our right to believe them.
The problem for Wright is that an equivalence of attitude toward both kinds of proposition seems to be forced on his position by the very plausible principle of closure for evidential warrant. We can understand this principle as follows:

ClosureEW: Necessarily, if S has an evidentially warranted belief that p, and S competently deduces q from p, then S acquires evidentially warranted belief that q.5,6

ClosureEW enjoys a great deal of plausibility, underwriting competent deduction’s ability to extend our body of evidentially warranted beliefs across entailment. While it is not without its detractors, it is fair to say that rejecting this principle is seen by most epistemologists as a substantial and indeed unaffordable cost.7

How does ClosureEW restore the symmetry in attitudes across type-II propositions and cornerstone propositions, a symmetry which Wright’s position ideally would avoid? Take for example the everyday proposition that I have hands, and the cornerstone that I am not a handless brain-in-a-vat. Wright’s picture is supposed to vindicate, in the face of the skeptical challenge, my evidential warrant for the belief that I have hands. But as Martin Davies (2004) first pressed, suppose I now competently deduce, from my belief that I have hands, the cornerstone proposition that I am not a handless brain-in-a-vat. ClosureEW says that I have now acquired evidential warrant for the cornerstone—which is precisely what Wright’s view denied I could do. This is known as the problem of alchemy: ClosureEW—problematically for Wright—turns ‘the lead of rational trust into the gold of justified belief’ (Davies, 2004, p. 220), thus restoring the symmetry in attitude—evidential warrant—for both my belief that I have hands and the cornerstone that I am not handlessly envatted.

The symmetry, it might seem, can also be restored by ClosureEW in the other direction, to pose a related issue for Wright.8 The problem, as Wright himself describes it, stems from the following kind of consideration:

The general picture is that the cornerstones which sceptical doubt assails are to be held in place as things one may warrantedly trust without evidence. Thus at the foundation of all our cognitive procedures lie things we merely implicitly trust and take for granted, even though their being entitlements ensures that it is not irrational to do so.
But in that case, what prevents this ‘merely taken for granted’ character from leaching upwards from the foundations, as it were like rising damp, to contaminate the products of genuine cognitive investigation? (2004, p. 207)

The leaching problem, as it has come to be known, is attributed by Wright to Stephen Schiffer (177, fn8) and characterized by way of an observation of Sebastiano Moruzzi’s: that the risk one takes in accepting the cornerstone ‘seeps upwards from the foundations’ to affect everyday, type-II propositions. Wright (2004, p. 209) claims that the problem is best expressed as the incompatibility of these three claims:

a If we run a risk in accepting [cornerstone] C, then we run a risk in accepting [everyday proposition] p
b We run a risk in accepting C
c p is known

To see the putative connection with ClosureEW, let’s (plausibly, and in line with Wright9) interpret ‘running a risk in accepting’ as ‘lacking evidential warrant for’10 and work with this version of the argument:

d If S lacks evidential warrant for cornerstone C, then S lacks evidential warrant for everyday proposition p in that region
e S lacks evidential warrant for cornerstone C
f S has evidential warrant for p

Clearly (d) and (e) serve jointly as grounds for rejecting (f).11 With this in mind, it seems that claim (d) turns on ClosureEW. Suppose S deduces ‘I am not a handless brain-in-a-vat’ from the belief ‘I have hands’. By ClosureEW, the impossibility of warrant for cornerstone propositions results in the impossibility of warrant for everyday propositions.12

In sum, while alchemy forces Wright’s position to deliver too much, leaching forces it to deliver too little. While I will argue otherwise, it seems—and Wright initially agrees13—that both problems turn essentially on ClosureEW. It is thus unsurprising that Wright’s initial response to both problems is precisely to deny that ClosureEW holds universally, and to replace it with a version of Closure according to which warrant tout court (evidential or non-evidential)—but not evidential warrant specifically—is closed over competent deduction.
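For readers who like to see the logical skeleton plainly: the joint inconsistency of (d), (e), and (f) is a simple matter of modus ponens. The following formal sketch (in Lean; the propositional variables Ec and Ep, read as ‘S has evidential warrant for C’ and ‘S has evidential warrant for p’, are my own illustrative labels, and the rendering deliberately abstracts away from everything epistemically interesting about warrant):

```
-- A propositional rendering of the leaching triad:
--   Ec : S has evidential warrant for cornerstone C
--   Ep : S has evidential warrant for everyday proposition p
-- (d) ¬Ec → ¬Ep, (e) ¬Ec, and (f) Ep are jointly inconsistent:
-- applying (d) to (e) yields ¬Ep, which contradicts (f).
theorem leaching_triad_inconsistent (Ec Ep : Prop)
    (d : ¬Ec → ¬Ep) (e : ¬Ec) (f : Ep) : False :=
  d e f
```

The philosophical work, of course, lies not in this trivial derivation but in deciding which of the three claims to give up.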
Wright’s (2004) initial rejection of ClosureEW as a solution to both problems apparently allows him to maintain the consistency of evidential warrant for everyday propositions and mere acceptance of the cornerstones they entail.14 The rejection of ClosureEW is for Wright motivated independently of leaching, given his view of the architecture of our warrants: since warrant for the cornerstones must be in place before everyday propositions of the relevant region can enjoy evidential warrant, any deductive inference from an everyday proposition to an entailed cornerstone will fail to transmit that evidential warrant to the cornerstone. Wright’s view is that leaching and alchemy rely on ClosureEW in precisely those instances in which we have good theoretical reason to believe ClosureEW fails. So both problems are apparently resolved.

In more recent work, however, Wright’s initial view that both problems can be handled by denying ClosureEW has shifted, in the light of criticism brought by Aidan McGlynn (2014). In a nutshell, McGlynn observes that the awkward alchemical result which Wright’s rejection of ClosureEW was meant to avoid can also be generated via the combined work of two seemingly undeniable principles for evidential warrant:

Closure-disjunctionEW: Necessarily, if S has an evidentially warranted belief that p, and S competently deduces from p the disjunction p or q, then S acquires evidentially warranted belief for the disjunction.

Closure-equivalenceEW: Necessarily, if S has an evidentially warranted belief that p, and S competently deduces from p a proposition q that is equivalent to p, then S acquires evidentially warranted belief for q.

How do these two principles combine to deliver the unwelcome result that we have evidential warrant for cornerstones? Let p be the everyday proposition ‘I have hands’ and q be the cornerstone ‘I am not a handless brain-in-a-vat’. Assuming, as Wright maintains, that I have evidential warrant for ‘I have hands’, then by Closure-disjunctionEW I can obtain evidentially warranted belief that ‘Either I have hands or I am not a handless brain-in-a-vat’. Now all we need to do is note that ‘Either I have hands or I am not a handless brain-in-a-vat’ is equivalent to ‘I am not a handless brain-in-a-vat’. (This follows from the general principle that if p entails q, then the disjunction p or q is equivalent to q.) So by Closure-equivalenceEW, I obtain evidentially warranted belief that I am not a handless brain-in-a-vat—precisely the result Wright wishes to avoid.15

Moreover, Wright accepts that the independent considerations which motivated his rejection of ClosureEW do not motivate a denial of either of the two principles above: for, in defense of Closure-disjunctionEW, it seems exceedingly plausible that the very evidential warrant one has for p must at the same time constitute evidential warrant for the disjunction p or q; similarly for Closure-equivalenceEW. Confronted by this issue, Wright (2014) has more recently backtracked on his rejection of ClosureEW.
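The parenthetical general principle invoked in McGlynn’s derivation is an elementary logical fact, and can be checked mechanically. Here is a minimal verification (in Lean; p and q are bare propositional variables, so this captures only the propositional step of the argument, not anything about warrant transmission):

```
-- If p entails q, then (p ∨ q) is logically equivalent to q.
-- This licenses the step from ‘I have hands or I am not a
-- handless brain-in-a-vat’ to ‘I am not a handless brain-in-a-vat’.
theorem disj_absorb (p q : Prop) (h : p → q) : (p ∨ q) ↔ q :=
  ⟨fun hpq => hpq.elim h id, Or.inr⟩
```

What is controversial in McGlynn’s argument is therefore not the logic but the two epistemic closure principles that carry warrant across these logically guaranteed transitions.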
To address alchemy and leaching, he has retreated instead to claiming that while ClosureEW holds universally, so that one can indeed acquire warrant for a cornerstone by competent deduction from an everyday proposition in the relevant region of thought, the warrant for the cornerstone so obtained cannot be first-time warrant, since the cornerstone must be warranted (by entitlement to trust) before any everyday proposition can be warranted. While accepting ClosureEW, Wright now denies this restricted principle:

ClosureFTW: Necessarily, if S has an evidentially warranted belief that p, and S competently deduces q from p, and S lacks evidential warrant for q, then S acquires a first-time warranted belief that q.

In other words, according to Wright, competent deduction of a cornerstone proposition from an evidentially warranted ordinary proposition cannot provide a first-time warrant. Moreover, Wright maintains, the evidential warrant one obtains via this deduction does not enhance one’s epistemic position vis-à-vis the cornerstone. The prior entitlement to trust sets an upper bound on the confidence in the cornerstone yielded by deduction from an everyday proposition. So while a form of alchemy is admitted, the positive epistemic status for the cornerstone which we gain via deduction from an everyday proposition is no more valuable, epistemically, than the positive epistemic status we already had for the cornerstone prior to drawing the inference.

So much for alchemy. How does Wright’s evolved position handle the leaching problem? Recall that leaching could be expressed by the following incompatible triad:

d If S lacks evidential warrant for cornerstone C, then S lacks evidential warrant for everyday proposition p in that region
e S lacks evidential warrant for cornerstone C
f S has evidential warrant for p

where (d) was underpinned by ClosureEW. Instead of denying (d) as he had originally done, Wright’s solution to alchemy now also allows him to resolve leaching ‘since we no longer have the assumption in place that there can be no evidential warrant for cornerstones’ (2014, p. 235). Wright thus denies (e) and allows that we do have evidential warrant for cornerstones. The rejection of (e) is made possible by the point that the evidential warrant enjoyed by cornerstones is both non-enhancing and preceded by entitlement to trust them.

14.4 Leaching Reconsidered

I will set aside discussion of the alchemy problem for the moment and focus solely on leaching. My contribution to the debate consists in three related points. First, Wright’s solution to leaching is not ultimately satisfactory, because for all but a very few epistemic subjects, it will simply not be true that they have evidential warrant for cornerstones. The second point is diagnostic: the leaching problem has been mischaracterized by Wright (and, following Wright, his critics) as turning essentially on a Closure principle. Instead, it turns on a Counter-Closure-style principle. Third, once this point is recognized, a new and improved solution to leaching comes into view, one that (i) rejects the Counter-Closure-style principle that essentially drives leaching and that (ii) motivates this rejection by way of independently plausible considerations in the extant literature on Counter-Closure. I argue for each claim in the following three sub-sections.

14.4.1 Wright’s Solution to Leaching Is Unsatisfactory

As we have seen, Wright’s more recent proposed solution to the leaching problem is to deny (e) and to claim that S has evidential warrant for cornerstones, albeit to a degree capped by the prior non-evidential warrant in place. But this solution is not ultimately satisfactory. The reason is that while on Wright’s revised view it is certainly possible for subjects to acquire evidential warrant for cornerstones, the vast majority of epistemic subjects do not actually acquire such evidence. In particular, they do not entertain scenarios of radical deception, they do not note entailment relations between the ordinary propositions they believe and the denials of these scenarios, and thus do not run through deductive inferences from propositions like ‘I have hands’ to ‘I am not a handless brain-in-a-vat’, or from ‘There is a cup on the desk’ to ‘I am not the victim of an evil demon intent on systematically deceiving me’; so they do not in fact gain evidential warrant for cornerstones. Only epistemologists are likely to even notice these entailment relations, and only a small subset of these—those who genuinely believe that such inferences confer evidential warrant (e.g. dogmatists and Wright) and who are concerned about defending their corpus of beliefs from skeptical threat in real life—will actually run through these inferences and thereby (if Wright is correct) gain evidential warrant for cornerstones.

This is crucial, because while evidential warrant wards off epistemic risk, the mere possibility of acquiring evidential warrant does not. The whole point of leaching is that, absent evidential warrant for the cornerstone, we run an epistemic risk incompatible with evidential warrant, and this risk seeps up to infect our everyday beliefs. Wright’s view leaves most epistemic subjects in exactly this position: believing everyday propositions with no evidential warrant for cornerstones. Importantly, it will not do for Wright to respond ‘The risk is easily avoided if the subject draws these inferences’, if in fact the subject does not draw these inferences. Although one could easily go next door and check whether the window in the bedroom is closed, insofar as one doesn’t check, one runs a risk in believing without evidence that it is closed.
Similarly, one’s easy access to a seatbelt does nothing to mitigate the risk of injury if one doesn’t actually wear the seatbelt. Wright’s position, then, leaves nearly all epistemic subjects vulnerable to leaching. In terms of the formal presentation of the problem above, Wright’s proposed rejection of (e) is only plausible for a very small subset of epistemic subjects. Most epistemic subjects do not acquire evidential warrant for cornerstones, and so (e) can’t be universally rejected. The leaching problem still affects the vast majority of epistemic subjects: they lack evidential warrant for cornerstones and thereby run an epistemic risk with respect to cornerstones—a risk which (d) says must also affect the superstructure of everyday beliefs.

One might attempt to neutralize the objection by observing that it was always a key component of Wright’s epistemology that we get some epistemic goods for free, in virtue of a ‘welfare state epistemology’ that does not require subjects to do any epistemic work. Admittedly, this is a linchpin of Wright’s view. However, the generously provided epistemic goods on Wright’s stated view are restricted to non-evidential warrants for cornerstones. Wright’s idea is that non-evidential warrant is unearned and that it doesn’t reflect any epistemic work or achievement on the subject’s part. By contrast, to say that the evidential warrant for cornerstones required to mitigate epistemic risk is also unearned constitutes a very substantial (and not independently plausible) departure from Wright’s desired framework.

14.4.2 Leaching Rests Essentially on a Counter-Closure-Style Principle

Having argued that Wright’s recent solution to leaching is not compelling, I now move on to my second point, which is diagnostic in nature: while the leaching problem might initially appear to turn essentially on ClosureEW—as the appearance of (d) in the formalization of the incompatible trio of claims suggests—it doesn’t. Instead, the leaching problem of ‘seeping’ entitlement from cornerstone to ordinary propositions is better understood as primarily turning on a Counter-Closure-style principle. My claim is that the leaching worry properly understood is fundamentally not the worry hitherto described by Wright and his critics.

To explain the misdiagnosis, consider first the relationship between Closure principles and Counter-Closure principles. Assuming that the subject competently carries out the relevant deduction and that the reasoning is of a single-premise variety, Closure principles generally understood (about knowledge, warrant, justified belief, conclusive evidence, etc.) entail that there is a lower bound on the epistemic status of the conclusion, imposed by the epistemic status of the premise: if one knows (has warranted/justified belief in/conclusive evidence for) the premise, one cannot end up with a conclusion that enjoys a lesser epistemic status. This guarantees knowledge of (warranted/justified belief in/conclusive evidence for) the conclusion.
In other words, Closure principles are expressions, in a deductive setting, of the broad thought that when they are correctly carried out, epistemic transitions do not downgrade epistemic status: if what we start with (the premise) has a valuable epistemic property φ, so will the downstream product of that transition (the conclusion). Closure principles assure us that when we extend our body of beliefs competently, the epistemic qualities enjoyed by our original beliefs utilized in the extension carry over to the new beliefs. Counter-Closure principles can be roughly understood as reverses of their Closure counterparts. They, too, govern deductive inferences and can be understood most easily when applied to a simple, single-premise deduction that is competently performed. In such an inference from premise p to conclusion q, Counter-Closure principles for a particular epistemically relevant condition state that the conclusion meets that condition only if the premise from which it is drawn meets that condition, too. For example, a simple Counter-Closure principle for knowledge will claim that the subject knows the conclusion q only if the subject

knows the premise p (assuming other routes to belief in the conclusion are screened off). Suppose, for instance, that Robin believes on the basis of wishful thinking that there are exactly three beers in the fridge. From this belief alone, Robin infers that there is an odd number of beers in the fridge. The intuition that Robin does not know that there is an odd number of beers in the fridge provides initial support for Counter-Closure for knowledge, since this principle can neatly explain this verdict, by appeal to the fact that Robin doesn't know their premise, that there are three beers in the fridge, in the first place. Just as Closure principles can be formulated for epistemically relevant conditions other than knowledge, so Counter-Closure principles for other such conditions can be described. For example, a Counter-Closure principle for doxastically justified belief will claim that the conclusion of a single-premise competently performed deduction is believed in a doxastically justified way only if the premise is. A Counter-Closure principle for warranted belief will claim that the conclusion of a single-premise competently performed deduction is warranted only if the premise is. And so on, for other conditions.16 Counter-Closure principles, generally understood, maintain that there is an upper bound on the epistemic status of the conclusion, imposed by the epistemic status of the premise: if one does not have knowledge of (warranted/justified belief in/conclusive evidence for) one's premise, one cannot wind up with a conclusion that enjoys a higher epistemic status. Counter-Closure principles are expressions, in a deductive setting, of the broad thought that (even) when they are correctly carried out, epistemic transitions do not upgrade epistemic status: if the downstream product of the inference (the conclusion) has valuable epistemic property φ, so too must the premise.
When it comes to extending our body of beliefs, Counter-Closure principles guarantee that any new belief that enjoys key epistemic qualities must be borne of prior beliefs which also enjoy those qualities. With these contrasts in mind, consider again a typical (I)-(II)-(III) trio of the Wrightean variety, where (III) is a cornerstone, (II) is an everyday proposition, and (I) is the evidence in favor of (II):

(I) My experience is as of hands held up in front of me
(II) I have hands
(III) I am not a handless brain-in-a-vat

On Wright's picture, (III) is supposed to be an epistemic foundation of our architecture of knowledge, and it is this feature that makes the metaphor of 'upwards seepage' appropriate as an illustrative characterization of the leaching problem. After all, it is precisely because of (III)'s foundational role that the skeptical point that we lack warrant for (III) constitutes a threat to the integrity of the whole epistemic structure.

Moreover—and as an expression of its foundational role—the cornerstone enables the transition from (I) to (II), so that lack of warrant for the cornerstone makes this transition epistemically illegitimate. It is because of this enabling role that, as Wright maintains, in order to make the transition from (I) to (II) we antecedently require some form of warrant for (III). This suggests that while (III) can sometimes be made to figure as the product of an inference from (II)—for those subjects epistemically savvy enough to notice that everyday external-world propositions entail the cornerstones of external-world inquiry—this is not the way it invariably figures (on both Wright's and the skeptic's views) in our epistemic architecture. Cornerstones are invariably found as preconditions for the transition from (I) to (II). They are in a roughly similar position to the one held by the tacit premise that one is in the northern hemisphere, in an inferential transition from one's belief that it's July to one's belief that it's summer: one needs some antecedent warrant for the proposition that one is in the northern hemisphere if one is to take its being July as evidence that it's summer. The proposition that one is in the northern hemisphere is not (at least, not typically) the ultimate conclusion of a two-step inferential transition, from its being July, to its being summer, to one's being in the northern hemisphere. Clinching support for the view that leaching can't be understood as essentially relying on ClosureEW (as Wright and his critics have done) is that the leaching worry arises even in contexts in which type-II propositions do not entail the cornerstone in the relevant region of thought. Consider the case Wright himself uses to illustrate inductive skepticism, which he takes his view to be able to handle (2004, pp. 184–188).
According to Wright, the relevant cornerstone in this region of thought is ‘Nature is uniform’, a statement expressing the broad idea, applicable to a multitude of observed regularities, that such observations track genuine natural regularities that extend to non-observed contexts. For Wright, non-evidential warrant to trust this cornerstone is needed to rationalize the transition from a type-I proposition of the form ‘All observed Fs are Gs’ to a type-II proposition of the form ‘All Fs are Gs’, where the former expresses a particular sample of observations of objects/events exhibiting specific properties F and G, and the latter a regularity or law of nature that extends beyond the observed sample, but that does not entail the cornerstone ‘Nature is uniform’.17 Wright’s view applied to this context is that we have evidential warrant for the belief ‘All Fs are Gs’, but that we are merely entitled to trust ‘Nature is uniform’. My point is that the leaching concern of ‘upward seepage’ arises just as strongly here as it does in the external-world case discussed earlier, with Moruzzi’s observation holding equal force: since I run a risk in accepting that nature is uniform, I run a risk (incompatible with warrant) in accepting that all Fs are Gs. The question ‘How can I still have

warrant for "All Fs are Gs" compatibly with the risk I take in trusting "Nature is uniform"?' is just as compelling here as the question of how I can still have warrant for my belief that I have hands when I am taking a risk in trusting that I am not a handless brain-in-a-vat. However, and crucially, the leaching worry in this context simply does not rely on ClosureEW. Recall the formal reconstruction of leaching, as the incompatibility of these three claims:

(d) If S lacks evidential warrant for cornerstone C, then S lacks evidential warrant for everyday proposition p in that region
(e) S lacks evidential warrant for cornerstone C
(f) S has evidential warrant for p

And now let's observe how the reconstruction works in our specific case, substituting 'Nature is uniform' for C and 'All Fs are Gs' for p:

(dNU) If S lacks evidential warrant for the cornerstone that nature is uniform, then S lacks evidential warrant for the everyday proposition that all Fs are Gs.
(eNU) S lacks evidential warrant for the cornerstone that nature is uniform.
(fNU) S has evidential warrant for the proposition that all Fs are Gs.

First, notice that if (dNU) and (eNU) are true, then (fNU) is false—so a genuine concern of leaching is indeed expressed by this trio, just as a leaching worry was expressed by the general argument (d)-(f) above. The crucial point of difference, however, is that (dNU) is not an instance of ClosureEW: the proposition that all Fs are Gs does not entail the cornerstone that nature is uniform. At best it provides only extremely limited evidence for this very broad claim, which in any case cannot be validated deductively. The problem, then, is that in his diagnosis of leaching Wright moves too quickly to interpret (d) as essentially an expression of ClosureEW, thereby taking engagement with this principle to be sufficient in addressing the leaching worry.
But because the leaching worry does not rely essentially on ClosureEW—as (dNU)-(fNU) illustrate—he cannot be taken to have addressed the leaching worry to its full extent. Wright's misdiagnosis results from his interpretation of (d) as an instance of a Closure principle. This interpretation is not always inaccurate: many cornerstones are in fact entailed by everyday propositions, and a leaching worry can thus be generated by the relevant Closure principle of which (d) is often an instance. But (d) is not always an instance of Closure, since not all cornerstones are entailed by everyday propositions in their region. Instead, (d) is always an instance of a Counter-Closure-style principle, regardless of whether the cornerstone at issue is entailed by everyday

propositions in the region. This stems from the fact that ordinary propositions depend for their warrant on cornerstone propositions: it is the warranted status of cornerstones that allows, on Wright's picture, ordinary propositions to enjoy warrant. The principle inevitably instantiated by (d) can thus be formulated:

Counter-Closure*EW: Necessarily, a proposition p enjoys evidential warrant only if any proposition q on which p depends for its own warrant is itself evidentially warranted.

For our purposes, the relevant instances of Counter-Closure*EW are those where p corresponds to an everyday proposition and q to a cornerstone proposition. In this case, the principle claims that the evidential warrant for the everyday proposition can only obtain if there is evidential warrant for the cornerstone—i.e., for the proposition on which p's evidential warrant depends. Read contrapositively: necessarily, if the cornerstone is not evidentially warranted, then neither are the everyday propositions which depend on that cornerstone for their warrant. To be clear: my thesis is that when the cornerstone is entailed by everyday propositions, the relevant instance of (d) ('If S lacks evidential warrant for cornerstone C, then S lacks evidential warrant for everyday proposition p in that region') will undoubtedly be an instance of ClosureEW, and that everything Wright and his critics have said about leaching is relevant to these cases (although ultimately, as argued in the previous section, Wright's solution is problematic). But crucially, and simply in virtue of the foundational role played by cornerstones, the relevant instance of (d) will also be an instance of Counter-Closure*EW even when cornerstones are entailed by the everyday propositions they sustain.
Additionally, when the cornerstone at issue is not entailed by the everyday propositions of the region, the relevant instance of (d) will be an instance of Counter-Closure*EW but not of ClosureEW. That is why characterizing the leaching problem as essentially turning on ClosureEW, as Wright has done, involves a significant omission. In some cases, the leaching problem turns on ClosureEW; in others, it doesn't. And in all cases, it relies on Counter-Closure*EW. Here's another way of putting the point. ClosureEW allows one to use one's warrant for ordinary propositions to gain warrant for deductively correlated cornerstones. By contrast, Counter-Closure*EW is a principle that imposes a condition on the warrant for ordinary propositions, namely: that the cornerstones on which ordinary propositions depend be themselves warranted. It is clear that the latter, and not the former, is at issue in leaching, since (i) leaching arises even in situations where ClosureEW cannot be applied, and (ii) Counter-Closure*EW, unlike ClosureEW, is able to provide a link between the warrant one enjoys for ordinary propositions and the antecedent warrant for cornerstones, as leaching demands.18
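The structural contrast between the two principles, and the inconsistency of the trio, can be displayed schematically. The symbolization below is an informal gloss (writing EW(x) for 'S has evidential warrant for x'), not notation Wright himself uses:

```latex
% Closure_EW: warrant travels down competent deductions
% (here p \vdash q abbreviates: p entails q and S competently deduces q from p)
\mathrm{Closure}_{EW}:\quad \bigl(EW(p) \land (p \vdash q)\bigr) \rightarrow EW(q)

% Counter-Closure*_EW: warrant for p requires warrant for whatever p depends on
% for its warrant (including the enabling dependence of ordinary propositions
% on cornerstones)
\mathrm{Counter\text{-}Closure}^{*}_{EW}:\quad EW(p) \rightarrow EW(q)
  \quad \text{whenever } p \text{ depends on } q \text{ for its warrant}

% The leaching trio, with C a cornerstone and p an everyday proposition:
(d)\ \neg EW(C) \rightarrow \neg EW(p) \qquad (e)\ \neg EW(C) \qquad (f)\ EW(p)

% (d) and (e) yield \neg EW(p) by modus ponens, contradicting (f).
% (d) is the contrapositive of an instance of Counter-Closure*_EW in every case,
% but an instance of Closure_EW only when p entails C.
```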

At this stage, I should highlight that I have loosened some of the bolts when formulating Counter-Closure*EW, which is not a strict reverse of ClosureEW. There are two dimensions along which the jurisdiction of this principle is broader than that of ClosureEW and of other Counter-Closure-style principles I have discussed in prior work (and which motivate the distinguishing mark '*' in its name).19 However, I do not think either of these makes the principle less compelling. The first broadening is that Counter-Closure*EW talks generally of dependence of warrant and is not restricted solely to cases in which p is deductively inferred from q: the relation of warrant-dependence should be read widely enough to include not only cases where p is deduced from q, but also cases where p is inferred from q via ampliative inference and—crucially—cases where q plays an enabling role in the realization of p's evidential warrant, as cornerstones do for everyday propositions. The second (related) difference is that Counter-Closure*EW is not restricted to cases in which the warrant for the proposition at issue depends on the warrant of only one other proposition. There may in fact be more than one proposition to which p owes its evidential warrant, as for example when a proposition p is evidentially warranted via the joint work of two propositions q and r from which it is inferred, where each of q and r provides a necessary but not sufficient contribution to p's overall evidential warrant. Here, too, I think Counter-Closure*EW enjoys at least first-blush plausibility: unless q and r are evidentially warranted, it seems hard to see how a subject could wind up with an evidentially warranted belief that p.

14.4.3  How Wright Could Resolve Leaching

How might Wright address the leaching problem, when it is diagnosed along the lines I suggested?
I submit that leaching should still be viewed as arising from the incompatibility of the by-now familiar trio:

(d) If S lacks evidential warrant for cornerstone C, then S lacks evidential warrant for everyday proposition p in that region
(e) S lacks evidential warrant for cornerstone C
(f) S has evidential warrant for p

However, we should resist the temptation to read (d) as essentially an instance of ClosureEW, because the leaching worry still arises when p doesn't entail C, and thus when (d) is not an instance of ClosureEW. Recognizing instead that (d) is always an instance of Counter-Closure*EW and that Wright's own evolved proposal of denying (e) is unsatisfactory suggests exploring a solution to leaching that confronts Counter-Closure*EW head-on and finds grounds to reject it, despite its prima facie plausibility.

Some of the considerations I offered in previous work, suitably adjusted, can offer such grounds. In prior work (Luzzi, 2019), I examined the plausibility of several Counter-Closure-style principles that tied the knowability of one's deductively drawn conclusion to the premise's epistemically relevant properties, such as knowledge, doxastic justification, and truth:

Counter-ClosureK: Necessarily, if S believes q solely on the basis of competent deduction from p and S knows q, then S knows p.
Counter-ClosureDJB: Necessarily, if S believes q solely on the basis of competent deduction from p and S knows q, then S has a doxastically justified belief that p.
Counter-ClosureT: Necessarily, if S believes q solely on the basis of competent deduction from p and S knows q, then p is true.

I argued that the plausibility of each of these (and similar) principles turns on one's prior understanding of the conditions under which believing the premise p is permissible. This is because, in general, impermissibly believing the premise of a single-premise deductive inference will always spoil the knowability of the conclusion of that inference—roughly put, if one has no business believing p, then any q one deduces from p will inevitably be a proposition which one has no business believing and therefore cannot know on the basis of that deduction. The plausibility of Counter-Closure principles for specific epistemically relevant properties, then, will depend on whether one's theoretical commitments allow one to permissibly believe the premise p even when it lacks that property: if they do allow this, then one should take the Counter-Closure principle tied to that property to be false; if they don't allow this, then one should take the Counter-Closure principle tied to that property as universally true.
For example, if one's epistemology allows for the possibility of S's holding a false belief that p in an epistemically permissible fashion (as many do), then counterexamples to Counter-ClosureT can be devised. Such examples will be ones where S believes a false proposition p in a permissible way and deduces from p a true proposition q that enjoys a myriad of good epistemic properties, arguably sufficient to amount to knowledge.20 But if one's epistemology rejects the possibility of holding a false belief in an epistemically permissible fashion, then no persuasive counterexample to Counter-ClosureT can be devised and the principle should be deemed correct by that epistemic view. Mutatis mutandis, similar considerations hold for Counter-ClosureK, Counter-ClosureDJB, and other Counter-Closure principles. To apply this thought to our context, we must once again broaden out from the single-premise deductive focus of Counter-Closure principles to encompass the more general notion of epistemic dependence. Additionally, if this idea can be usefully applied to Wright's epistemology,

we must also broaden out from the stricter notion of belief to allow for the relevant attitude to take either the form of belief or that of acceptance. Once these two broadenings are allowed, the lesson takes this specific form in the Wrightean context: in situations where p and q are such that the warrant S has for p depends on q's being warranted (as is the case when p is an ordinary proposition and q is a cornerstone), S can know p just in case belief that/acceptance of q is epistemically permissible. The key question for Wright, then, is this: is it epistemically permissible to accept a cornerstone C when C is not evidentially warranted? The answer here is clearly affirmative. It is an essential and distinctive component of Wright's overall view that accepting a cornerstone does not require evidential warrant. The boundaries of permissible acceptance for Wright outstrip what one's evidence indicates. We can view Wright's arguments in favor of strategic entitlements and entitlements of cognitive project as rationalizing precisely this non-evidential acceptance of cornerstones. Since it is epistemically permissible on Wright's view to accept cornerstones without evidence, by the point made above drawn from general considerations of Counter-Closure principles, it is to be expected that the lack of evidential warrant for cornerstones does not pose an obstacle to the knowability of a proposition p which depends on the cornerstone for its warrant. In other words, this Counter-Closure-style principle, which entails that lack of evidential warrant for a cornerstone makes everyday propositions unknowable, should be deemed false by Wright's view:

Counter-Closure**EW: Necessarily, a proposition p can be known only if any proposition q on which p depends for its own warrant is evidentially warranted.
Now, since knowing an ordinary proposition p entails having evidential warrant for it, the principle examined earlier, which essentially drives leaching, should also be expected to fail. Recall the principle in question:

Counter-Closure*EW: Necessarily, a proposition p enjoys evidential warrant only if any proposition q on which p depends for its own warrant is itself evidentially warranted.

To see why Wright should deem Counter-Closure*EW false, consider that the falsehood of Counter-Closure**EW means that there are possible circumstances where an everyday proposition p is known, but a relevant cornerstone q is not evidentially warranted. Such circumstances are also ones where p is evidentially warranted (since knowledge entails evidential warrant) but q is not evidentially warranted—circumstances which demonstrate Counter-Closure*EW to be false.
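The inference just given can be compressed into a three-line derivation. Again this is an informal gloss, with K(x) for 'S knows x' and EW(x) for 'S has evidential warrant for x':

```latex
% 1. Failure of Counter-Closure**_EW: possibly, p is known while the
%    cornerstone q on which p depends is not evidentially warranted
\Diamond\bigl(K(p) \land \neg EW(q)\bigr)

% 2. Knowledge entails evidential warrant
K(p) \rightarrow EW(p)

% 3. From 1 and 2: possibly, EW(p) holds while EW(q) fails --- a direct
%    counterexample to Counter-Closure*_EW, i.e. to EW(p) \rightarrow EW(q)
\Diamond\bigl(EW(p) \land \neg EW(q)\bigr)
```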

Let's take stock. The leaching worry, once again, was expressed as the incompatibility of these claims:

(d) If S lacks evidential warrant for cornerstone C, then S lacks evidential warrant for everyday proposition p in that region
(e) S lacks evidential warrant for cornerstone C
(f) S has evidential warrant for p

Wright's first attempted solution rejected (d) on the grounds that it was an instance of the putatively false principle ClosureEW. Wright's more recent attempted solution accepts (d) and rejects (e) instead. I have argued that (d) is not essentially an instance of ClosureEW, that rejecting (e) is ultimately problematic, and that the leaching problem can be resolved by rejecting (d) on the grounds that it invariably instantiates Counter-Closure*EW, a principle Wright's view should deem to be false, and whose falsehood is well supported by independent considerations regarding Counter-Closure-style principles generally. Resolving the leaching worry in this way ultimately means accepting (e) and (f)—accepting, that is, that evidential warrant for ordinary propositions can happily coexist with lack of evidential warrant for the cornerstone on which the former propositions depend for their warrant. It also means that, insofar as evidence eliminates epistemic risk, accepting our evidentially unwarranted cornerstones carries some risk. However, this risk is something that (Wright can say) should be lived with, insofar as it does not prevent an epistemic subject from permissibly accepting the cornerstones, with Wright's arguments for strategic and cognitive-project entitlements grounding this permissibility. Accepting cornerstones is legitimate even though doing so carries some risk, and this risk does not transfer to ordinary propositions in the relevant region of thought, since Counter-Closure*EW is false in the relevant circumstances. Or so, I believe, Wright can legitimately argue.

14.5  Concluding Remarks

Where does this leave the relation between leaching and alchemy? As we saw, alchemy relies essentially on ClosureEW: its unwelcome result—evidentially warranted belief in cornerstones—only arises if one notices that an everyday proposition entails a cornerstone and carries out the relevant deduction. By contrast, leaching does not essentially rely on a potential attempted extension of our body of beliefs via ClosureEW (although in the limited circumstances in which an everyday proposition entails a cornerstone, such a deduction can be used to press the leaching problem in a particularly vivid way). Instead, the leaching worry tries to impress upon us the point that, because of the foundational role of cornerstones and the attitude of mere acceptance they elicit, anyone

who believes any ordinary proposition is already running the epistemic risk characteristic of leaching. The unwelcome result, in other words, is allegedly already there in the routine epistemic activity of ordinary subjects, prior to and independently of their drawing any deductive inference from an everyday proposition to a cornerstone. For this reason, to conceive of leaching and alchemy as 'dual' problems—as Wright does—is to misunderstand their true nature, and to overlook the necessity of distinct solutions to each problem. Wright's best reply to the leaching problem, I have argued, is to address head-on and flatly deny the claim expressed by Counter-Closure*EW: that lack of evidential warrant for the foundational cornerstones translates into lack of evidential warrant for everyday propositions. What of alchemy? As we have seen, Wright maintains that alchemy can occur, but that when it does, the evidential warrant one obtains in favor of a cornerstone via deduction from an everyday proposition does not exceed the antecedent non-evidential warrant one already had for the cornerstone. I will not assess this solution here but will limit myself to noting that the solution to leaching I have proposed is perfectly compatible with Wright's solution to alchemy, resulting in an overall stable response to the two problems.21

Notes

1. Strictly speaking, Wright describes cornerstones as propositions such that: if one lacked warrant for them, one would lack the higher-order warrant to claim warranted belief in everyday propositions. However, Wright himself (2004) thinks that the lack of warrant to claim warranted belief in everyday propositions stems from the fact that such beliefs would be unsupported by evidence if the cornerstone were not warranted. Since everyday beliefs are only warranted evidentially, it is clear that the higher-order lack of warrant for claiming warranted belief in everyday propositions, produced by lack of warrant for cornerstones, is due to the lack of first-order warrant for those everyday propositions. Additionally, the construal of cornerstones I have offered—in terms of lacking warrant for everyday propositions, rather than in terms of lacking warrant to claim warrant for them—allows us to see Wright's view as a response to first-order skepticism, in keeping with the way most epistemologists interpret skeptical challenges.
2. It has, however, been argued that this dominance-based argument depends on allocating primary value to having true beliefs. If one were to primarily value, for example, avoiding false beliefs, then the strategy Wright advocates would no longer be dominant. See N. J. L. L. Pedersen (2009) and N. J. Pedersen (2020).
3. Wright (2004) originally talks of 'presuppositions' but later (2014) clarifies that 'authenticity condition' better captures the intended notion.
4. The point is particularly compelling for the Reichenbachian considerations on which Wright draws in order to motivate the dominance-based argument for trusting in the uniformity of nature (Reichenbach, 1938). In the example discussed by Wright, Crusoe is stranded on an island with no food other than a strange-looking fruit whose edibility is in question. It is a dominant

strategy for Crusoe to trust that it is edible, since not doing so will lead to death by starvation, and doing so may lead to survival. It is undeniable that Crusoe's strategy of trusting that the fruit is edible carries risk—not just risk of poisoning, but also—more relevantly—the epistemic risk that his trust in the fruit's edibility will not be borne out by the facts.
5. This formulation will do for our purposes, but it is worth observing that formulations of this principle vary across authors, and that the question of which formulation of this principle is most plausible is the subject of some debate.
6. Note that Wright likes to reserve the term 'Closure' for principles that are silent on the way in which warrant for the entailed q is achieved, and the term 'Transmission' for principles, like ClosureEW, which specify that the warrant for the entailed q is achieved by deductive inference from the entailing proposition p. Nothing much hinges on this difference in terminological use. My rationale for labelling ClosureEW as a Closure principle is to set up a comparison with Counter-Closure-style principles.
7. For notable exceptions, see Dretske (1969, 1970, 1971), Nozick (1981) and more recently Alspector-Kelly (2019).
8. I will ultimately disagree with the claim that ClosureEW plays an essential role in generating the leaching worry. I use this claim here only for illustrative purposes, following Wright's characterization of the leaching worry.
9. The epistemic risk Wright is concerned with here is incompatible with evidence (Wright, 2004, 2014).
10. I am not denying that risk is incompatible with some small amount of evidential warrant. I wish to deny merely that risk is incompatible with evidential warrant sufficient for warranted belief. 'Evidential warrant' should be read throughout with this qualification, which I omit for simplicity.
11.
The observant reader will notice that (f) reinterprets Wright's (c) by adverting to warrant rather than knowledge. This slight adjustment, made for expository purposes, is entirely legitimate: in more recent work of his (Moretti & Wright, 2022) it is clear that Wright views the pressure (a) and (b) place on 'S knows p' as indirect, i.e., as stemming from the pressure they directly place on 'S has evidential warrant for p'.
12. Wright (2014).
13. Wright (2014) labels alchemy 'a kind of dual of the concern about leaching'.
14. Wright additionally concedes that leaching occurs, but merely 'at second order'. According to his view, while in fact we possess evidential warrant for everyday propositions thanks to the entitlement to trust cornerstones, what we cannot do is claim evidential warrant for everyday propositions, since this would require second-order evidential warrant for such propositions, which we lack. I will not be concerned here with this aspect of Wright's overall view. See McGlynn (2017) for an argument that this move is ultimately problematic for Wright's position.
15. McGlynn describes an equivalent way of putting the point. According to Wright's conception of cornerstones, if we lack warrant for them, then we lack warrant for all other propositions in that region of thought. If a cornerstone proposition C entails a proposition C′, then by closure of warrant, lack of warrant for C′ will entail lack of warrant for C, which in turn entails lack of warrant for all propositions in the relevant region of thought. So C′ itself is a cornerstone. In other words, the set of cornerstone propositions must be deductively closed. Now, since 'I am not a handless brain-in-a-vat' is a cornerstone, so is the entailed disjunction 'Either I have hands or I am not a handless brain-in-a-vat'. But evidential warrant for the latter can easily be reached via Closure-disjunctionEW from the evidential warrant one is assumed to have for 'I have hands'.

16. Counter-Closure principles will vary in their plausibility depending on the condition which they concern; and their plausibility will not always match the plausibility of their Closure counterpart. For example, Closure for truth is undisputable—if p is true and q is competently deduced, q must be true—but Counter-Closure for truth should clearly be rejected—a true conclusion q can clearly be competently deduced from a false premise p: from the falsehood that Einstein invented dynamite one can competently deduce that someone invented dynamite.
17. It is essential to my diagnostic point that (at least some) type-II everyday propositions of the form 'All Fs are Gs' do not logically entail the cornerstone that nature is uniform. It seems to me that any attempt to deny this, and to thereby conceive the uniformity of nature as a proposition entailed by 'All Fs are Gs', will inevitably make it too specific—too specific, that is, to count as a proposition such that lacking warrant for it means lacking warrant for any proposition in that region of thought. Additionally, my argument does not turn solely on this example. We can think of other cornerstones that are not entailed by type-II propositions of that region. For example, the proposition 'My perceptual faculties are generally reliable guides to my environment' is not entailed by the proposition 'There is a cup on the desk', yet it is arguably a cornerstone. Thanks to Luca Moretti for suggesting the latter case.
18. Thanks to Luca Moretti for suggesting this presentation of the point.
19. See Luzzi (2019).
20. For seminal pieces in the debate on Counter-ClosureT, see Warfield (2005) and Klein (2008). For discussions of this principle, see Adams et al.
(2017); Audi (2011); Ball and Blome-Tillmann (2014); Borges (2017); Buford and Cloos (2018); Coffman (2008); de Almeida (2017); Fitelson (2017); Hawthorne and Rabinowitz (2017); Klein (1996, 2017); Leite (2013); Littlejohn (2013, 2016); Luzzi (2014); Montminy (2014); and Schnee (2015).
21. I am very grateful to Jesper Kallestrup, Luca Moretti, and Crispin Wright for comments and helpful conversations.

References

Adams, F., Barker, J., & Clarke, M. (2017). Knowledge as fact-tracking true belief. Manuscrito, 40(4), 1–30.
Alspector-Kelly, M. (2019). Against knowledge closure. Cambridge University Press. https://doi.org/10.1017/9781108604093
Audi, R. (2011). Epistemology: A contemporary introduction to the theory of knowledge (3rd ed.). Routledge.
Ball, B., & Blome-Tillmann, M. (2014). Counter closure and knowledge despite falsehood. The Philosophical Quarterly, 64(257), 552–568.
Borges, R. (2017). Inferential knowledge and the Gettier conjecture. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 273–291). Oxford University Press. https://doi.org/10.1093/oso/9780198724551.003.0017
Buford, C., & Cloos, C. M. (2018). A dilemma for the knowledge despite falsehood strategy. Episteme, 15(2). https://doi.org/10.1017/epi.2016.53
Coffman, E. J. (2008). Warrant without truth? Synthese, 162(2). https://doi.org/10.1007/s11229-007-9178-5

Entitlement, Leaching, and Counter-Closure 255

Davies, M. (2004). Epistemic entitlement, warrant transmission and easy knowledge. Aristotelian Society Supplementary, 78(1). https://doi.org/10.1111/j.0309-7013.2004.00122.x
de Almeida, C. (2017). Knowledge, benign falsehoods, and the Gettier problem. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 292–311). Oxford University Press. https://doi.org/10.1093/oso/9780198724551.003.0018
Dretske, F. (1969). Seeing and knowing. Routledge and Kegan Paul.
Dretske, F. (1970). Epistemic operators. Journal of Philosophy, 67(24), 1007–1023.
Dretske, F. (1971). Conclusive reasons. Australasian Journal of Philosophy, 49(1). https://doi.org/10.1080/00048407112341001
Fitelson, B. (2017). Closure, counter-closure, and inferential knowledge. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 313–324). Oxford University Press. https://doi.org/10.1093/oso/9780198724551.003.0019
Hawthorne, J., & Rabinowitz, D. (2017). Knowledge and false belief. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 325–343). Oxford University Press. https://doi.org/10.1093/oso/9780198724551.003.0020
Klein, P. (1996). Warrant, proper function, reliabilism and defeasibility. In J. Kvanvig (Ed.), Warrant in contemporary epistemology: Essays in honor of Plantinga's theory of knowledge (pp. 97–130). Rowman & Littlefield.
Klein, P. (2017). The nature of knowledge. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 35–56). Oxford University Press.
Klein, P. D. (2008). Useful false beliefs. In Epistemology: New essays (pp. 25–62). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199264933.003.0003
Leite, A. (2013). But that's not evidence: It's not even true! The Philosophical Quarterly, 63(250). https://doi.org/10.1111/j.1467-9213.2012.00107.x
Littlejohn, C. (2013). No evidence is false. Acta Analytica, 28(2). https://doi.org/10.1007/s12136-012-0167-z
Littlejohn, C. (2016). Learning from learning from our mistakes. In M. Grajner & P. Schmechtig (Eds.), Epistemic reasons, norms and goals (pp. 51–70). De Gruyter. https://doi.org/10.1515/9783110496765-004
Luzzi, F. (2014). What does knowledge-yielding deduction require of its premises? Episteme, 11(3). https://doi.org/10.1017/epi.2014.3
Luzzi, F. (2019). Knowledge from non-knowledge. Cambridge University Press. https://doi.org/10.1017/9781108649278
McGlynn, A. (2014). On epistemic alchemy. In D. Dodd & E. Zardini (Eds.), Scepticism and perceptual justification. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199658343.003.0009
McGlynn, A. (2017). Epistemic entitlement and the leaching problem. Episteme, 14(1). https://doi.org/10.1017/epi.2015.63
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44(3–4), 463–475. https://doi.org/10.1080/00455091.2014.982354
Moretti, L., & Wright, C. (2022). Epistemic entitlement, leaching and epistemic risk. https://philpapers.org/rec/MOREEL-2

Nozick, R. (1981). Philosophical explanations. Harvard University Press.
Pedersen, N. J. (2009). Entitlement, value and rationality. Synthese, 171(3). https://doi.org/10.1007/s11229-008-9330-x
Pedersen, N. J. L. L. (2020). Pluralist consequentialist anti-scepticism. In Epistemic entitlement. Oxford University Press. https://doi.org/10.1093/oso/9780198713524.003.0011
Reichenbach, H. (1938). Experience and prediction. University of Chicago Press.
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12(1), 53–74. https://doi.org/10.1017/epi.2014.26
Warfield, T. A. (2005). Knowledge from falsehood. Philosophical Perspectives, 19(1), 405–416. https://doi.org/10.1111/j.1520-8583.2005.00067.x
Wright, C. (2004). Warrant for nothing (and foundations for free)? Aristotelian Society Supplementary, 78(1). https://doi.org/10.1111/j.0309-7013.2004.00121.x
Wright, C. (2014). On epistemic entitlement (II): Welfare state epistemology. In D. Dodd & E. Zardini (Eds.), Scepticism and perceptual justification (pp. 213–247). Oxford University Press.

Section IV

Knowledge: From Falsehoods and of Falsehoods

15 Why Is Knowledge from Falsehood Possible? An Explanation

John Turri

DOI: 10.4324/9781003118701-22

***

What is the relationship between knowledge and truth? This question consistently receives attention in textbooks and other sources designed to provide readers with a succinct summary of the field's current state of knowledge. Unfortunately, the attention it receives is consistently superficial and unpersuasive. For example, consider the chapter "What is Knowledge?" in the third edition of a textbook written by arguably the most influential anglophone epistemologist of the second half of the twentieth century. The chapter begins, "If you know that it is raining, then it is raining" (Chisholm, 1989, p. 90). From there, we are told, "the point may be generalized" to the claim that if you know something, then it is true. No evidence is offered for the initial claim or the generalization. No alternatives are considered. The matter is simply stipulated. Other textbooks adopt a similar strategy:

The first condition [on knowledge], the truth condition, requires that p be true. What is known must be true, or to put it the other way around, it is impossible to know a proposition that is false. Here are two examples. First, since it is false that there is a fax machine on my desk, I cannot know that there is a fax machine on my desk. Second, many people believe that there are space aliens who periodically visit earth and abduct humans to perform experiments upon them. Some people claim that they themselves have been the victims of such abductions. According to the truth condition, if it is false that these people were abducted by space aliens, then they do not know they were abducted, however strongly they believe it. (Steup, 1996, p. 3)

Here the only evidence, if it can be called that, offered for accepting that truth is necessary for knowledge is two examples: one in which it is
stipulated that a proposition is not true and therefore cannot be known, and another in which it is stipulated that a proposition is false and therefore is not known. One struggles to interpret this as an attempt at rational persuasion.

Later, writing for an influential and perhaps the most widely used encyclopedia of philosophy in recent decades, the same author appeals to authority in support of the point and offers another example. "The truth condition," we are told,

enjoys nearly universal assent, and thus has not generated any significant degree of discussion. It is overwhelmingly clear that what is false cannot be known. For example, it is false that G.E. Moore is the author of Sense and Sensibilia. Since it is false, it is not the sort of thing that anybody can know. (Steup, 2001)

The author provides no evidence that the truth condition "enjoys nearly universal assent." Twenty years later, the updated entry in that same encyclopedia differs only superficially, suggesting that the field has made no progress on the topic in the intervening decades:

Most epistemologists have found it overwhelmingly plausible that what is false cannot be known. For example, Hillary Clinton did not win the 2016 US Presidential election. Consequently, nobody knows that Hillary Clinton won the election. One can only know things that are true. (Ichikawa & Steup, 2021)

When I've challenged epistemologists to defend the alleged "consensus" in conversation, the most common coherent response I receive appeals to successful action (for discussion of an alternative strategy invoking the "factivity" of "knows," see Hazlett, 2010, 2012). The hypothesis that knowledge requires truth helps explain why knowledge is a surer basis for successful action than belief. What is the success-ratio for "action based on knowledge" compared to otherwise similar "action based on belief"?
No definite answer has been forthcoming, but I’m assured that it is Very Plausible that the ratio is higher than one! And perhaps it is. Let’s grant that the ratio is indeed higher than one. Would this support a necessarily true generalization or an exceptionless requirement whereby it is impossible to know something false? No. For instance, consider this alternative weaker hypothesis. Suppose it is a conceptual truth, reflected in the very meaning of the word “know” in our language, that the central tendency of knowledge states is to be true. That is, as a matter of conceptual necessity, a majority of knowledge states
are true. This is consistent with some knowledge states not being true. However, suppose further that it is not a conceptual truth that there is a similar central tendency for belief (or, if there is, that it is weaker). This could explain why the ratio of successful "action based on knowledge" to "action based on belief" is higher than one.

Another alternative explanation, short of hypothesizing the impossibility of knowing untruths, involves approximation (Turri, 2016b, pp. 131–132). Suppose that a strictly false representation can count as knowledge provided that it is still an adequate approximation of the truth. And suppose that adequate approximation involves being close enough to the truth for practical purposes. However, suppose further that there is no such constraint on belief—that is, a belief needn't be close enough to the truth for practical purposes. This could also explain a knowledge/belief success-ratio higher than one. Indeed, this hypothesis could explain the high success-ratio even while granting that most knowledge is actually false.

With alternatives in front of us, we can ask which better describes the ordinary concept of knowledge. Recent research provides some evidence relevant to evaluating two hypotheses:

Factivity hypothesis: it is impossible to know a false proposition.
Adequacy hypothesis: it is possible to know a false but adequate proposition.

In support of the Factivity hypothesis, first, a number of behavioral experiments studying knowledge attributions by adult speakers have compared rates of attribution in conditions where the agent's representation is true to rates in closely matched control conditions where the agent's representation is false. A consistent finding is that knowledge attribution is significantly lower in false-belief control conditions, often with an extremely large effect size (Blouw, Buckwalter, & Turri, 2018; Starmans & Friedman, 2012; Turri, 2013, 2016d; Turri et al., 2015).
Second, some expressions in ordinary language initially suggest that the Factivity hypothesis is false, such as when someone who improbably survives a plane crash says, “I knew I was going to die.” But experiments designed to assess such “nonfactive” attributions have found evidence that they are influenced by factors such as perspective-taking; that is, the expressions can function to convey how things seemed from a certain perspective, rather than how things actually were, as an objective matter (Buckwalter, 2014; Turri, 2011). Third, in studies where participants recorded judgments about truth-value and knowledge, judgments about truth-value were strongly positively correlated with judgments about knowledge and related judgments, such as evaluations of evidence (e.g. Turri, 2015, 2016d, 2021; Turri & Buckwalter, 2017; Turri et al., 2016).

The research just described is suggestive but limited in a crucial way. Although it included false-belief control conditions, the featured false beliefs were not designed to be approximately true and practically adequate. For example, the comparisons included judgments about knowing that a stone is a diamond when it is real but not when it is fake (Turri et al., 2015, experiment 4), or knowing the location of an object when it is present but not when it was stolen and removed (Starmans & Friedman, 2012, experiments 1–2). Another way of putting the point is that the true/false comparisons were too coarse or extreme to generate opposing predictions from the Adequacy and Factivity hypotheses. Therefore, the findings in question do not allow us to distinguish between those hypotheses, despite strongly supporting the conclusion that there is some sort of conceptual connection between knowledge and truth.

To provide a sharper test between the Adequacy and Factivity hypotheses, one recent study adopted a 2 × 2 experimental design that manipulated whether an agent's answer was true or false (truth-value) and whether the agent's answer was practically adequate to achieve a salient goal (adequacy) (Buckwalter & Turri, 2020b). Both the Factivity and Adequacy hypotheses predict that people will deny knowledge when the agent's answer is both false and inadequate. And both hypotheses predict that people will attribute knowledge when the answer is both true and adequate. But they disagree on whether people will attribute knowledge when the answer is false but adequate. In this key condition, the Factivity hypothesis predicts that participants will deny knowledge, whereas the Adequacy hypothesis predicts that participants will attribute knowledge.
Otherwise put, the Adequacy hypothesis predicts a specific interaction effect between truth-value and adequacy on participants' knowledge attributions, whereas the Factivity hypothesis predicts no such interaction.

The results supported the Adequacy hypothesis. In the key condition, participants tended to attribute knowledge (Buckwalter & Turri, 2020b, experiments 1–2). Importantly, participants tended to attribute knowledge while, in the very same context, they also tended to agree that the agent's representation was false and adequate for the task at hand (Buckwalter & Turri, 2020b, experiment 1). The basic pattern supporting the Adequacy hypothesis was robust across narrative contexts, types of approximation, and questioning procedures, including when the answer options enabled participants to easily avoid perspective-taking by contrasting "knows" with "only thinks he knows" (Buckwalter & Turri, 2020b, experiment 2).

Setting aside the question of what is possible according to the ordinary knowledge concept, there is the further question of what knowledge states, as they exist in the world, are actually like, regardless of how accurate or complete our conception of them is. In that vein, researchers have also argued that findings from neuroscience undermine
the Factivity hypothesis. For example, there is evidence of imprecision and noise in the visual and motor systems, which implies that the propositional representations, which are formed on their basis and often constitute knowledge, are strictly false (Bricker, 2018), which would be impossible if the Factivity hypothesis were true. The inevitability of our cognitive equipment's imprecision and limitations also raises skeptical worries about whether many of our ordinary beliefs and even our most well-established scientific claims constitute knowledge. The Factivity hypothesis is key to fueling those skeptical worries, which evaporate without it (Bricker, 2018, 2021; Buckwalter & Turri, 2020a).

Overall, then, the current balance of evidence supports the conclusion that it is conceptually possible for some false representations to qualify as knowledge.

***

Another question debated in contemporary epistemology is whether it is conceptually possible to knowledgeably infer a conclusion from a false premise (e.g. Ball & Blome-Tillmann, 2014; Montminy, 2014; Schnee, 2015; Warfield, 2005). For instance, suppose a fan believes that the actress's dress is blue, but the dress is actually green. Can the fan knowledgeably infer "the dress is not red" from "the dress is blue"? Does the ordinary knowledge concept allow for this?

To help answer that question, researchers recently conducted a behavioral experiment (Turri, 2019). In the critical test condition, participants read this scenario:

Michael bet one of his friends that their favorite actress would not wear a red dress to tonight's award ceremony. A leading fashion site just posted images of the actress arriving at the ceremony. Given the lighting, the dress looks blue to Michael, but it is green.
Michael says, “Her dress is blue, so it is not red.” Participants then rated their agreement or disagreement with several statements, including the crucial knowledge attribution: Michael knows that the dress is not red (“conclusion-knowledge”). If people tend to attribute conclusion-knowledge, then that supports the hypothesis that knowledge from falsehood is conceptually possible. By contrast, if people tend to deny conclusion-knowledge, then that supports the contrary hypothesis that knowledge from falsehood is conceptually impossible. The strong central tendency was to attribute conclusion-knowledge, thereby supporting the hypothesis that knowledge from falsehood is conceptually possible. Moreover, in the same context, participants rated
the claim "her dress is blue" as false, and further testing revealed that participants agreed that Michael concluded that the dress is not red because he thought that the dress was blue. So we have some provisional evidence supporting two hypotheses about our ordinary knowledge concept:

It is possible for false but adequate representations to be knowledge.
It is possible for conclusions based on false premises to be knowledge.

***

Now I'd like to propose a connection between the two hypotheses. I hypothesize that if false but adequate representations can themselves qualify as knowledge, then knowledge can be based on false but adequate premises. Otherwise and metaphorically put, if knowledge tolerates falsehood within itself, then it stands to reason that knowledge would also tolerate falsehood within its basis. The mere fact that a representation is strictly false does not rule out the possibility that the representation qualifies as knowledge. In some cases where the representation is approximately true and practically adequate, the central tendency is to attribute knowledge to the agent. It is a plausible conjecture, then, that practical adequacy could vindicate a false premise used to infer a conclusion.

Consider again the example of the blue dress discussed above, which is widely accepted to be a case of knowledge from falsehood. In a context where the salient question is whether the actress's dress is red, Michael misperceives her green dress to be blue and then reasons, "Her dress is blue, so it is not red." Michael's perceptual premise, although untrue, might nevertheless be practically adequate to the task of deciding whether the dress is red.

Of course, the provisional conclusions reached above—that it is possible to know a false proposition, and that inferential knowledge from a false premise is possible—could be withdrawn or even reversed in light of further evidence.
For example, perhaps the findings discussed fall into that unlucky subset of empirical results that, improbably, fail to replicate despite the observed differences exceeding conventional criteria for provisionally accepting them as real and reliable. Or suppose we surveyed research from various fields of cognitive science, which study actual knowledge states and knowledge-producing processes, and found that whenever a knowledge state is studied or discussed there, it is unfailingly strictly true, or all of the premises producing the knowledge are strictly true. (This strikes me as unlikely in light of the research discussed by Bricker [2018, 2021], but I mention it as a broad theoretical possibility.) This could potentially reveal a flaw in our ordinary knowledge concept, whereby it fails to reflect the true nature of its referent.

The point bears repeating: we should not treat the two provisional conclusions, or the proposed explanatory connection between them, as fixed points or as a fulcrum to leverage a new orthodoxy or "consensus" to which future contributions must assent in order to be taken seriously. The aim is not to substitute one lazy paradigm for another. Instead, the aim is to proceed responsibly by identifying a research question, practicing a methodology suitable to providing evidence to help answer that question, then cautiously interpreting that evidence, all the while remaining open to alternative theoretical approaches to the whole body of existing evidence and other methods and possible lines of convergent evidence that could incrementally contribute to our understanding of the underlying issues.

In recent decades, however, many anglophone "analytic" philosophers have exhibited an unfortunate tendency not to proceed by gathering and evaluating evidence relevant to assessing the hypotheses of interest. Instead, they often rest content with just asserting, for example, that it is impossible to know a false proposition, or just asserting that this is the "consensus" view or that "nearly everyone agrees" with it (e.g. Chisholm, 1989; Steup, 1996). Does an alleged "consensus," or even an actual consensus, carry any weight when those same philosophers consistently fail to provide any serious evidence to substantiate an extremely strong and exclusionary hypothesis? Not in my book. Others must judge for themselves.

Similarly, some philosophers assert that knowledge from falsehood is impossible, or that this principle is "widely accepted and plausible" (Montminy, 2014, p. 473), and then greet potential counterexamples by drawing fine distinctions between, say, "knowledge from falsehood" and "knowledge despite falsehood," and then further asserting that the potential counterexamples always fall into the latter category.
Thus, there are no exceptions and the orthodox view triumphs “unscathed” (Montminy, 2014, p. 473). If critics persist in claiming that a counterexample works because it is “obvious” that it has certain critical features, we can restore the proper order by declaring: “I beg to differ,” the example “does not” have those features (Montminy, 2014, p. 471, emphasis in original). If the meddlesome interlopers still have the gall to protest further, we could take things to the next level by switching to all caps: “I BEG TO DIFFER, NO IT DOES NOT.” The defense of orthodox dogma is sometimes accompanied by elaborate speculation about actual and counterfactual cognitive processing of cases. If only people remained firmly aware that the false premise “is entirely epiphenomenal to the etiology of the knowledge,” then they would agree that it is “knowledge despite, rather than from, falsehood” (Ball & Blome-Tillmann, 2014, p. 555). If only the cases were “described appropriately,” then people would “not be tempted to think of those conclusions as known” (Ball & Blome-Tillmann, 2014, p. 556).

For example, we are asked to consider this case:

Temperature
Bill lives in Pittsburgh, but has to go on a business trip to Toronto in March. He has heard that Canada has long, cold winters, and wonders whether he will need to bring his winter coat. He searches on Google for 'temperature Toronto March' and finds that the average low is −5 degrees. But then he remembers that Canada uses degrees Celsius and wonders how much that is in Fahrenheit. He decides to calculate. Unfortunately, Bill misremembers the conversion formula: although he should add 32 (to get 27), then divide by 9 (to get 3), and finally multiply by 5 (to get 15), he thinks that he is supposed to divide by 5 and multiply by 9. At the same time, however, Bill makes typing errors on his calculator: he keys in 5 (rather than −5) then adds 3 and adds 2, rather than adding 32; he divides the result (10) by 5 (to get 2), then multiplies by 9 (to get 18). As a result he believes that the average low in Toronto in March is 18 degrees Fahrenheit, and he concludes he will need his winter coat. (Ball & Blome-Tillmann, 2014, p. 556)

We are then informed:

Intuitively, in Temperature Bill does not know that he will need his winter coat: if he hadn't made the typing errors, he would have believed that it would be 49°F in Toronto in March, and would have concluded that he did not need his winter coat; and since he might easily not have made the typing errors, he might easily have had a false belief regarding the issue of whether he ought to bring his coat.
Moreover, in our opinion, the reason why Bill's belief that he will need his winter coat does not constitute knowledge is that in Temperature, Bill's explicit numerical beliefs play a crucial role in causing his belief that he won't need his winter coat: accordingly, his false belief that it will be 18°F in Toronto in March is not merely epiphenomenal in the causal derivation of his belief that he won't need his coat; and as a result, the latter belief does not constitute knowledge. (Ball & Blome-Tillmann, 2014, pp. 556–557)

And for any other potential counterexample, if people would only take it "to be genuinely like Temperature in involving a falsehood in the causal etiology of the belief in the conclusion," then people would also "intuitively judge" the agent to lack knowledge (Ball & Blome-Tillmann, 2014, p. 557).

I beg to differ: far from intuiting that Bill doesn't know that he will need his winter coat, I confess to finding it difficult to clearly intuit
anything about the case at all, other than that it is long, confusing, and confounded, such that any alleged intuition about Bill's knowledge is basically uninterpretable.

With respect to being long, the "dress" case discussed above was 58 words, whereas Temperature is over three times longer at 189 words. With respect to being confusing, readers are left to judge for themselves, but I will report my own experience: whereas I can easily recall from memory all the important details of the "dress" case, I cannot do the same for Temperature, despite repeated attempts. With respect to confounds, we're told that Bill "misremembers" the formula and makes other "errors" on the way to forming the belief that the average low in Toronto in March is 18°F. By my count, this suggests that Bill's conclusion, "I will need my winter coat," is based on a chain of inferences involving as many as three false premises about

• the conversion formula,
• the calculator entries, and
• Toronto's average low temperature in March.

More generally, Bill comes across as an improbable dolt, based not only on his exquisite inaccuracy with calculators, but also on the remarkable fact that he is a grown man living in Pittsburgh who is mature and successful enough to fly internationally on business trips but, nevertheless, somehow fails to already know that March in Toronto—over two hundred miles north of Pittsburgh, which itself tends to be cold in March!—is coat weather. And if Bill went through the trouble of googling average historical temperatures in Toronto in March, why wouldn't he have instead just performed the more useful and no-more-difficult search "Toronto weather forecast tomorrow"?

Additionally, for all that's said, Bill's conclusion could be completely false and inadequate for salient practical purposes. He might not need his coat for many reasons. He could rent or borrow one while he is there. He might be meeting someone inside a restaurant, hotel lobby, or conference room in the airport or adjacent thereto, such that he never has to set foot outside in the cold. Or, setting average temperatures aside, Toronto might experience an unseasonably warm stretch of weather during Bill's visit—another reason why the search for historical average temperatures is silly compared to the actual forecast for his trip.

By contrast, the "dress" case discussed above, which people reliably judge to be a case of knowledge, is much shorter and clearer, and lacks any evident confound. (The original study—Turri, 2019—included closely matched control conditions to rule out possible deflationary explanations of the principal finding of interest, which might well have already occurred to alert readers, such as a bias in favor of attributing knowledge of a specific proposition.) For instance, Michael's conclusion, "her
dress is not red," is based on just one false premise, "her dress is blue." If we are interested in whether knowledge from falsehood is possible, then a single false premise will suffice; increasing the number of false premises—to, say, three—prevents us from assessing what the intuitive verdict is in such a case. Additionally, Michael is not depicted as comically incompetent or foolishly ignorant of facts that a typical adult could be expected to know, such as the basic geography and weather patterns of where he lives in eastern North America, or which internet search would be most useful for his purposes. Overall, then, far from clarifying matters, it would be utterly distorting to understand Michael's case "to be genuinely like Temperature."

Similar to the critical speculations just discussed, other philosophers claim that "once we fully understand" the details of alleged examples of knowledge from falsehood, "it is clear" that the agent does not know, that "there is no intuitive pull to the thought that she has knowledge" (Schnee, 2015, p. 58). In support of this thesis, we are presented with the following "example" built up in parts and fully revealed by the entire sequence.

TV Show 1: Ellie's favorite TV show is on from six to seven. Ellie looks at her watch, which reads six thirty, and she infers, from the consideration that it is (exactly) six thirty, that her show is on. But Ellie is in error. It isn't (exactly) six thirty, it is six thirty-two; her watch is slow by two minutes.

TV Show 2: This case is like TV Show 1, but Ellie is extremely confident about the accuracy of her watch and only forms exact beliefs from it; she has no other evidence at all regarding the time.

TV Show 3: This case is like TV Show 2, but Ellie knows that, even though her watch is quite reliable, when her watch is not exactly right it is usually an hour or more off. That is why she does not form approximate beliefs about the time from her watch; she believes exactly what her watch says or she forms no belief at all about the time on its basis.

TV Show 4: This case is like TV Show 3, except that instead of knowledge, Ellie has a justified false belief that her watch is normally reliable but when it malfunctions, it is off by an hour or more. She had a watch-testing machine run a test on her watch to determine exactly how it operates. The normally reliable machine, however, mixed up the reports and gave her the report for a different watch; hence her justified false belief. Her watch is actually like most watches; it is very reliable about the approximate time.

In just four installments and 253 words, we now have sufficient detail to "understand the relevant epistemic features of the case" and thereby acquire "good grounds for thinking that Ellie lacks knowledge"

268  John Turri

(Schnee, 2015, p. 59). More specifically, we now have “grounds” for thinking that Ellie’s “belief has been Gettiered” (Schnee, 2015, p. 59, n. 17):

Ellie’s method of forming beliefs from her watch in this version essentially involves the machine’s report, and so does her problematic luck: she is the victim of bad luck, because the normally reliable machine is wrong about how her watch operates and under what conditions the watch is reliable, but she is the victim of good luck because the report that she did get, despite the machine’s error, happens to still lead her to the truth…. The machine instantiates a classic Gettier pattern: it is normally reliable, but happens to be wrong this time. We might try to eliminate that aspect of the example. One way to do so is to … weaken Ellie’s justification for her belief about the watch. For example, say that instead of using the machine, Ellie just has fairly good inductive grounds for her belief about the watch’s behavior. The problem, however, is that such a fix still does not eliminate the problematic luck. Ellie is the victim of bad luck, because her inductive grounds are misleading (they falsely suggest that her watch behaves in a certain way, which she relies on), but she is the victim of good luck, because, even though her grounds are misleading, she is nonetheless led to the truth. (Schnee, 2015, p. 59, n. 17)

To reiterate, the initial part of this sequence, TV Show 1, is supposed to be an example of the sort of case that others have had in mind when they claim that knowledge from falsehood is intuitively possible. What follows is designed to clarify “the relevant epistemic features of the case.”

In some ways, it is difficult to assess whether this line of reasoning succeeds.
The author neither operationalizes what it is to “understand the relevant features of the case” nor provides evidence of having measured this “understanding” in ways that would support his claims about what would happen if we understood the cases better. Even the most basic parameters remain unaddressed: How many relevant details are there? What is the criterion of (sufficiently correct) understanding? How many details do readers tend to understand after reading the first installment (TV Show 1)? How many details do they tend to understand after reading the entire sequence? What proportion of readers understand more details after reading the entire sequence than after reading just the first installment? How many comprehension errors do readers tend to make after reading just the first installment? How many after reading the entire sequence? Without answers to these questions, we cannot assess whether the full sequence tends to improve understanding.

In other ways, it is easy to discern critical failures in the line of reasoning. Even assuming that the author could adequately answer all of

the questions just listed, it would still leave unaddressed the following crucial fact: elaborating the entire sequence is not a way of clarifying details of TV Show 1, but rather of creating a topically related but distinct case that could be judged differently from the original for countless reasons. Otherwise put, forming an impression of the entire sequence is not a way of forming a more precise impression of the initial installment. By shifting our focus from TV Show 1 to the entire sequence, the author changes the subject. Even if everything the author claims about the entire sequence is correct, that provides effectively no information regarding our judgments about TV Show 1 specifically.

If I had to clearly and concisely summarize the gist of the critique under consideration, the best I can do presently goes something like this: For any example where, intuitively, an agent might gain knowledge from falsehood, we can add details that turn it into a “Gettier case.” And if it’s a “Gettier case,” then the agent doesn’t know. Therefore, no such agent knows and knowledge from falsehood is impossible.

One problem with this argument is that any scenario can be supplemented until it can be classified as a “Gettier case.” So the underlying logic leads to the false conclusion that no one ever knows, which refutes the underlying logic.

A second problem, already suggested above, is that adding such details to a case effectively creates a new case. Judgments about the one don’t automatically carry over to the other; the uncertainty of such inferences tends to increase with the number of additions; and the increase is exponential if we account for potential interactions among additions.

A third problem with the argument is that “Gettier case” is a theoretically useless category (Turri, 2016c).
Controlled behavioral experiments reveal that scenarios which, in the literature, would be counted as “Gettier cases” elicit rates of knowledge attribution anywhere from below 20% to above 80%. The fact that something is a “Gettier case” is consistent with its being judged similarly to paradigmatic cases of ignorance, similarly to paradigmatic cases of knowledge, and anything in between (Blouw et al., 2018; Turri et al., 2015). In other words, something’s being a “Gettier case” provides no information about whether the agent knows.

Other things being equal, I submit, tossing one tangled mess into another tangled mess will probably just make a bigger mess. Or, switching metaphors, however hard it is to build a sandcastle, it is all the harder to build one sandcastle upon another.

***

Of course, if philosophers wish, they can just stipulate that they are interested in a knowledge concept according to which it is impossible to gain inferential knowledge from a false premise (or know a false but

adequate proposition, etc.). For what it’s worth, in my conversations about the matter with philosophers, it often does reach the point where I’m told, “This is just part of what philosophers have meant when discussing this topic.” If that were true, then it would be silly to ask for evidence of the meaning, because there is no debating the meaning of stipulatively defined terminology—it just is what it is. (It would not, however, be silly to ask for evidence that it is true, because it is an open empirical question what philosophers have in fact meant.)

One advantage of the stipulative approach is that it would help avoid merely terminological disputes, which are a tedious waste of everyone’s time. There is a disadvantage to the stipulative approach, though. Why should anyone pay attention to, let alone participate in, such an exercise? What reason is there to think that it is an even remotely sensible use of finite time and resources?

By contrast, an attempt to better understand the content of our shared, ordinary knowledge concept has potential intellectual and practical benefits. On the intellectual side, it satisfies curiosity about ourselves and our community, about how we actually think and relate to one another. On the practical side, it provides insight into a concept central to social cognition (e.g., Turri, 2017), social evaluation (e.g., Turri, Friedman, & Keefner, 2017), and communication (e.g., Turri, 2016a), which can be useful if, for example, one wishes to suggest improvements to ordinary thought and behavior.

In closing, I encourage philosophers writing on this issue to get clearer on what their objective is, to stop making unsupported assertions, especially in resources commonly deemed authoritative and aimed at beginners, to improve their methodology, and to consistently hold each other to higher intellectual standards.
In other words, my recommendations for research in this area of philosophy are basically the same as my recommendations for research in mainstream contemporary anglophone philosophy generally. We easily could, and should, be doing so much better.

Acknowledgments For helpful feedback I thank Sarah, Angelo, and Geno Turri. This research was supported by the Social Sciences and Humanities Research Council of Canada and the Canada Research Chairs program.

References

Ball, B., & Blome-Tillmann, M. (2014). Counter closure and knowledge despite falsehood. The Philosophical Quarterly, 64(257), 552–568. https://doi.org/10.1093/pq/pqu033
Blouw, P., Buckwalter, W., & Turri, J. (2018). Gettier cases: A taxonomy. In R. Borges, C. de Almeida, & P. Klein (Eds.), Explaining knowledge: New essays on the Gettier problem (pp. 242–252). Oxford University Press.

Bricker, A. M. (2018). Visuomotor noise and the non-factive analysis of knowledge. University of Edinburgh.
Bricker, A. M. (2021). Knowing falsely: The non-factive project. Acta Analytica. https://doi.org/10.1007/s12136-021-00471-3
Buckwalter, W. (2014). Factive verbs and protagonist projection. Episteme, 11(4), 391–409. https://doi.org/10.1017/epi.2014.22
Buckwalter, W., & Turri, J. (2020a). Knowledge and truth: A skeptical challenge. Pacific Philosophical Quarterly, 101(1), 93–101. https://doi.org/10.1111/papq.12298
Buckwalter, W., & Turri, J. (2020b). Knowledge, adequacy, and approximate truth. Consciousness and Cognition, 83, 102950. https://doi.org/10.1016/j.concog.2020.102950
Chisholm, R. M. (1989). Theory of knowledge (3rd ed.). Prentice Hall.
Hazlett, A. (2010). The myth of factive verbs. Philosophy and Phenomenological Research, 80(3), 497–522. https://doi.org/10.1111/j.1933-1592.2010.00338.x
Hazlett, A. (2012). Factive presupposition and the truth condition on knowledge. Acta Analytica, 27(4), 461–478. https://doi.org/10.1007/s12136-012-0163-3
Ichikawa, J. J., & Steup, M. (2021, Summer). The analysis of knowledge. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/sum2021/entries/knowledge-analysis/
Montminy, M. (2014). Knowledge despite falsehood. Canadian Journal of Philosophy, 44(3–4), 463–475. https://doi.org/10.1080/00455091.2014.982354
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12(1), 53–74.
Starmans, C., & Friedman, O. (2012). The folk conception of knowledge. Cognition, 124(3), 272–283. https://doi.org/10.1016/j.cognition.2012.05.017
Steup, M. (1996). An introduction to contemporary epistemology. Prentice Hall.
Steup, M. (2001, Spring). The analysis of knowledge. Stanford Encyclopedia of Philosophy. Retrieved July 25, 2021, from https://plato.stanford.edu/archives/spr2001/entries/knowledge-analysis/
Turri, J. (2011). Mythology of the factive. Logos & Episteme, 2(1), 143–152.
Turri, J. (2013). A conspicuous art: Putting Gettier to the test. Philosophers’ Imprint, 13(10), 1–16.
Turri, J. (2015). Evidence of factive norms of belief and decision. Synthese, 192(12), 4009–4030. https://doi.org/10.1007/s11229-015-0727-z
Turri, J. (2016a). Knowledge and the norm of assertion: An essay in philosophical science. Open Book Publishers. https://www.openbookpublishers.com/books/10.11647/obp.0083
Turri, J. (2016b). Knowledge as achievement, more or less. In M. Á. F. Vargas (Ed.), Performance epistemology (pp. 124–134). Oxford University Press.
Turri, J. (2016c). Knowledge judgments in “Gettier” cases. In J. Sytsma & W. Buckwalter (Eds.), A companion to experimental philosophy (pp. 337–348). Wiley-Blackwell.
Turri, J. (2016d). The radicalism of truth-insensitive epistemology: Truth’s profound effect on the evaluation of belief. Philosophy and Phenomenological Research, 93(2), 348–367. https://doi.org/10.1111/phpr.12218

Turri, J. (2017). Knowledge attributions and behavioral predictions. Cognitive Science, 41(8), 2253–2261. https://doi.org/10.1111/cogs.12469
Turri, J. (2019). Knowledge from falsehood: An experimental study. Thought, 8(3), 167–178. https://doi.org/10.1002/tht3.417
Turri, J. (2021). Knowledge attributions and lottery cases: A review and new evidence. In I. Douven (Ed.), Lotteries, knowledge, and rational belief: Essays on the lottery paradox (pp. 28–47). Cambridge University Press.
Turri, J., & Buckwalter, W. (2017). Descartes’s schism, Locke’s reunion: Completing the pragmatic turn in epistemology. American Philosophical Quarterly, 54(1), 25–46.
Turri, J., Buckwalter, W., & Blouw, P. (2015). Knowledge and luck. Psychonomic Bulletin & Review, 22(2), 378–390. https://doi.org/10.3758/s13423-014-0683-5
Turri, J., Buckwalter, W., & Rose, D. (2016). Actionability judgments cause knowledge judgments. Thought, 5(3), 212–222.
Turri, J., Friedman, O., & Keefner, A. (2017). Knowledge central: A central role for knowledge attributions in social evaluations. Quarterly Journal of Experimental Psychology, 70(3), 504–515. https://doi.org/10.1080/17470218.2015.1136339
Warfield, T. A. (2005). Knowledge from falsehood. Philosophical Perspectives, 19(1), 405–416. https://doi.org/10.1111/j.1520-8583.2005.00067.x

16 The Assertion Norm of Knowing

John Biro

I

Can one come to know something based on things one does not know? This question is usually taken to be one about deduction: must the premises of a (sound) argument be known if the conclusion is to be? But that is not the question to be addressed in this chapter.1 I am interested in the broader question of what is involved in coming to acquire knowledge. That we sometimes do this is beyond question. I shall propose an unorthodox way of understanding what happens when we do, based on an unorthodox way of understanding what it is to have what we so acquire.

I shall couch the discussion in different, and perhaps slightly awkward-sounding, language. Instead of talking about having knowledge, I shall talk of knowing, and instead of talking about acquiring knowledge, I shall talk of becoming a knower. I do this because it seems to me that the more usual ways of talking can easily put us on the wrong scent.

In most epistemological theorizing, ‘knowledge’ is taken to refer to a mental state, sometimes called a propositional attitude, parallel to but different from other attitudes such as believing, doubting, hoping, and fearing, which are, arguably, mental states. One of the chief tasks of epistemology is thought to be specifying how the state referred to differs from other such states with which it seems to be intimately connected, in particular, from believing.

Not everyone thinks of knowledge in this way: “Saying ‘I know’ is not saying ‘I have performed a specially striking feat of cognition, superior, in the same scale as believing and …being quite sure…’ for there is nothing in that scale superior to being quite sure” (Austin, 1946, p. 171); “[In] characterizing an episode or a state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says” (Sellars, 1956, §36).
It is, though, by far the dominant picture, and it is assumed by both sides in the debate over whether knowledge or belief comes first.

DOI: 10.4324/9781003118701-23

But there is another way we use ‘knowledge,’ common when we are not doing epistemology, as in ‘a body of knowledge’: a set of propositions concerning some subject matter. That is obviously not a state of anyone. Typically, different subsets are known by different people, with enough overlap, however, to make it informative to speak of the set of such people as experts with respect to that body of knowledge.2 In this use of ‘knowledge,’ the word refers to what is known, rather than to the knowing of it. If we want to explain the latter, to say what it is to be a knower, this sense of ‘knowledge’ is irrelevant.

Thus, calling the explanandum knowledge is doubly misleading. It conflates the known with the knowing, and it tilts the discussion in favor of thinking of the latter as a mental state. It does this by creating the appearance of a parallel with belief, assumed to be a mental state. It is then natural to think that when one knows one is also in a mental state, albeit in an importantly different one. The task is then seen as spelling out what the difference is. Whether the answer given is that it is belief+, as in the traditional analysis and its post-Gettier successors, or that it is a different, perhaps unanalyzable, state, as for knowledge-firsters, the shared assumption is that to know is to be in a mental state.

This assumption sits uneasily with our notion of expertise. We think of an expert as someone who is knowledgeable about some subject matter, who knows a number, typically a large number, of propositions about it. Ask her a question, and she knows the answer. Does this mean that she was, before the question was asked, in some mental state? What was the content of that state? The answer she gives? But she must be supposed to have been in a similar state with respect to every proposition about the subject that she knows, even if she is not asked the questions to which they are the answers. How many such propositions are there?
To understand what we mean when we say that someone knows that something is the case, we need an account of knowing. What does it take to be a knower with respect to some proposition? Once we put the question this way, it is no longer quite as obvious that it is being in a certain mental state – whether partly or wholly analyzable into other mental states, as the tradition has it, or unanalyzable, as Williamson has argued. I propose that knowing should not be thought of as being in a state at all but as having a certain status, that of being entitled to assert something. That is what experts are entitled to do, and we believe them because they are. It is natural to think of experts in this way.3 We should think of knowers in general in the same way, even if it is a particular proposition, rather than a body of knowledge, that is the object of their knowing.

Having such an entitlement does not require its possessor to be in any particular mental state, including believing what one is entitled to assert or believing that one is so entitled. Nor is any particular mental state, including that of believing that one is so entitled, sufficient for having

the status. More surprisingly, it does not require that what one is entitled to assert be true. (How, then, can the account be one of knowing? Is not one of the few things we all agree on that what we are interested in, call it knowledge or knowing, requires that what is known be true? See Section III below.)

The proposal is that we should say that someone knows that p if and only if he is entitled to assert it.4,5 One may have reasons not to assert that p even if one is entitled to do so. Whether one is entitled to do something is independent of whether one is inclined to do it. One may be entitled to assert something even if one does not believe it and does not believe that one is so entitled. One may think that one is entitled to assert something when one is not, a lamentably common occurrence. People often think they are entitled to do something when they are not, and they sometimes think that they are not entitled to do something even though they are.

What we have to spell out, then, is what constitutes being entitled to assert something. The idea here is that once we have done that, we will have said what it is to know. We can do so in a way that does not include a requirement that the proposition one is entitled to assert be something one believes and, more surprisingly, does not include a requirement that the proposition be true. Surprising and counterintuitive as the last may be, it is important to see that it is central to a proper understanding of entitlement. If being entitled to assert that p required p’s truth, we could never be sure that anyone is ever entitled to assert anything. But we often are. Of course, my thinking that you are entitled to assert that p usually goes hand in hand with my thinking that p is true. But not always. I may think that it is not true but still regard your evidence and expertise as entitling you to assert it.
More importantly, I may have no belief concerning the proposition in question and yet be able to decide, looking at your evidence and expertise, that you are entitled to assert it, and as a result come to believe it. If I can decide that someone is entitled to assert a proposition without my believing that it is true, I obviously do not have to know that it is true before so deciding. If I could think that someone is entitled to assert that p only if I knew that p was true, I would accept expert testimony that p only if I already knew that p. My being able to learn something from an expert presupposes that I do not already know it.

When I come to know that p on the basis of expert testimony, what do I learn? Not that p is true – even experts can be mistaken. I learn that someone who is entitled to assert that p thinks that it is true. His being so entitled is what makes him an expert, a knower with respect to the proposition at hand, and if he does assert what he is entitled to, I can come to know that p by inheriting his entitlement to assert that p. (My entitlement does not depend on my having to cite its source if it is questioned.)

This gives us a necessary and sufficient condition of being a knower; call it the assertion norm of knowing:

(ANK) S knows that p iff S is entitled to assert that p.

where being entitled to assert requires neither that S believe that p nor that p be true. ANK needs to be further spelled out, of course, and that is a task for a later section. But first, a defense of the claims that knowing that p does not entail either believing that p or p’s being true.
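The contrast with the traditional analysis can be put schematically. The following formalization is mine, not the author's; K, B, J, and E abbreviate "knows," "believes," "is justified in believing," and "is entitled to assert," and the AK variant is the truth-conditioned amendment discussed in Section III:

```latex
% Traditional (JTB) analysis: knowing requires truth, belief, and justification
K(S,p) \leftrightarrow \bigl(p \land B(S,p) \land J(S,p)\bigr)

% ANK: knowing is entitlement to assert; no truth or belief conjunct
\text{(ANK)}\quad K(S,p) \leftrightarrow E(S,p)

% AK: entitlement plus a truth condition
\text{(AK)}\quad K(S,p) \leftrightarrow \bigl(E(S,p) \land p\bigr)
```

On ANK, unlike on the other two schemas, neither p itself nor B(S,p) appears on the right-hand side, which is why neither truth nor belief is entailed by knowing.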

II

Two widely, if not universally, accepted theses about knowledge and belief are:

1 Knowledge is a kind of belief.
2 Knowing entails believing.

(1) has a venerable history, going back to the Platonic picture of knowledge as doxa with an account. The many attempts to save the account of knowledge as justified true belief from Gettier-style counterexamples by adding a fourth condition presuppose it.6 Equally, if not more, widely endorsed is (2): even Williamson, who is notable for maintaining that knowledge is not analyzable into (does not ‘factor into’) belief plus something else, usually thought to be justification of some kind, agrees that it cannot be present without belief being present as well (2000, p. 27).

But consider: the Internal Revenue Service (IRS) sends you a registered letter informing you of the date of the hearing on their audit of your tax return. It has a signed receipt showing that you received the letter. It is, I suggest, entitled to say – that is to say, it knows – that you know when the hearing is, whether or not you have any belief concerning it. Perhaps you were busy, set the letter aside and forgot about it. Makes no difference: the IRS and the rest of us are entitled to say that you know the date, since you are entitled to say what it is, even if you cannot do so, lacking the belief.7

Some will not be convinced, saying that the right verdict here is that while it is true that you should have known, you do not actually know. So, consider the notorious question at congressional hearings: who knew what and when? If the question is, Did S know that p, and it is shown beyond doubt that S had been told that p, I should think it would be considered a poor defense on S’s part to say that he did not believe what he had been told and thus that even if he should have known, he did not.

Suppose I am one of those with a deep distrust of government agencies, never believing anything they say. I do open the letter from the

IRS, but I do not believe a word of it. If knowing entails believing, the IRS can say only that I should have known the date of the audit, though I did not. It seems to me that that is the wrong thing to say. The IRS would be entirely within its rights to claim that I did know of the date of the hearing: it informed me of it. Why should what I thought of the information make any difference? To think that it should requires thinking that knowing is a mental state – but that is just what is under challenge here.

If I am proven to have been given insider information about the merger and make a killing on the stock, where would my saying that I did not believe what I had been told and bought only on fundamentals get me? If you tell me authoritatively that it was my friend who had planted a bomb but I cannot bring myself to believe you, where would my saying to the police that I did not know get me?

With such cases, it does not matter whether the subject really lacks belief or merely claims to. The point is that we do not think that having it is required for it to be the case that the subject knows. Neither being in denial nor pretending to be is deemed an excuse. Any account of knowing according to which believing is a component will have difficulty accommodating such cases.8

On the other hand, ANK has no difficulty accommodating them. On that account, it is not surprising that knowing does not entail believing. Knowing is not only not a mental state (contra Williamson), but it is also not a state of any kind. Being a knower is being an expert or an authority with respect to some proposition or propositions. Being an expert does not entail being in any particular state, mental or other. It is a status one has, and one has it in virtue of being entitled to assert certain things. In general, having a status is not being in a state.
(One’s marital or citizenship status obviously does not depend on what state, mental or other, one is in.) Thus, one may well have the status of being a knower with respect to p without believing that p.9

There is nothing odd in being entitled to assert that p while not having any belief concerning it. The appearance that there is comes from conflating being entitled to assert that p and actually asserting it. Doing the latter (sincerely) is, perhaps, incompatible with not believing what one asserts. But it is not necessary to assert either what one knows or that one knows it to satisfy ANK. It is not even necessary to be inclined or disposed to do so.

None of this is to say that knowing and believing are not importantly connected. But their connection is of a different kind and goes in a different direction than is usually thought. ANK can accommodate the familiar point that in asserting something I give others the right to rely on my authority. It is not that my knowing that p entails that I believe that p; it is, rather, that my knowing that p entails that my asserting it, if I do, gives you reason to believe it

and, perhaps, if you correctly regard me as an expert, reason to claim to know it.10 The connection is between my knowing and your believing; it is not a matter of my knowing involving my believing. As already noted, nothing in the present account precludes my doing both, but the first does not require the second.

If knowing does not require believing, it obviously does not require being justified in believing. ANK has no need of, and leaves no room for, a justification condition. Its entitlement condition does the work the justification condition does in the usual accounts, but it does so without the need for a belief condition to be satisfied.

ANK captures part of the idea behind Foley’s claim that ‘knowledge is a matter of having adequate information.’ But only part: Foley goes on to equate having adequate information with ‘not lack[ing] important true beliefs.’ He says, ‘Whether a true belief counts as knowledge … hinges on the importance of the information one has and lacks.’ This makes it clear (as does the title of his book) that he endorses (1).11 But one can accept that what information one has or lacks bears on whether one is a knower while rejecting (1), as I have argued we should. One may have the information sufficient to make one a knower of p (as in the examples above) without having any belief about p, so that lacking adequate information does not come to ‘lack[ing] important beliefs,’ as Foley assumes.12

III

If, as I have argued, knowing is not a kind of believing and does not entail believing, what room is there for a truth condition, usually formulated in terms of true belief and almost universally assumed to be necessary?13 If there is no belief, there can be no true belief. Is there something else in the offing such that it could be said that it is necessary that it be true for a case to be one of knowledge? The obvious candidate is the proposition being said to be known.

Part of the reason for thinking that a truth condition is obviously needed is the fact that it seems, and, indeed, is, absurd to say that one can know a false proposition. Surely, one cannot know that something is the case when it is not!14 But not including a truth condition in ANK is not to say that one can. It is only to say that we cannot build into the criteria we use for judging whether someone knows a proposition the requirement that the proposition be true. If this leaves our attributions of knowledge vulnerable to revision, so be it. That is better than not being able to make any. And if ANK licenses attributing knowledge of a proposition at one time, only to disallow such attribution later in the light of new evidence about the proposition, that is as it should be. We say that we thought the subject knew, but we were mistaken: there was some information he lacked. Whether it is information he could have been reasonably expected to

have bears on our assessment of him as an epistemic agent. If it is information no one could have had at the time, we do not withhold the title of expert, even if we do with one who lacks the same information today. But if we allow our saying now that he did not know then to stand in the way of our saying then that he does, we will be giving in to the skeptic. Better to say that he knows if he satisfies the conditions we are in a position to ascertain he satisfies, even while recognizing that we may be wrong.

That is why there is no mention of truth in ANK. Its absence makes it possible to make true attributions of knowledge without both the attributor and the attributee knowing that the proposition in question is true. This is desirable, given that with many, if not most, of the attributions of knowledge we make, we obviously do not know that the proposition we say is known is true. We think we do, of course. As noted above, your asserting something when I think you satisfy the entitlement condition is grounds for me to (at least) believe it. If I had to know that it is true in order to think that you knew it, my attributions of knowledge could never outrun what I know; I could not think that you knew more than I did.

What ANK gives us is a criterion of what you know that is independent of what I know. It is not my knowing that p is true but my thinking that you satisfy the entitlement condition for knowing that p that entitles me to say that you know that p. My thinking that you satisfy the entitlement condition is a result of learning about your epistemic powers and situation, how expert you are about the matter at hand (even if I am not) and what evidence is available to you (even if not to me). This is so even if my being entitled to assert that you satisfy the condition is subject to the same caveats as is your entitlement to assert that p.
It may be said that even if ANK is accepted as a criterion of knowledge attribution, it should not be seen as a definition or analysis of knowledge. It does not, by itself, say anything about what knowledge is. It says only what it takes to know. It can be turned into a definition by adding a truth condition. Perhaps not, as the traditional analysis goes, by requiring that one’s belief be true, since one may well not believe the proposition one is entitled to assert, but simply by requiring that the proposition known be true. The difference is important. It allows the criterion by which we judge whether someone knows something to be independent of the truth of the proposition in question.

The ‘N’ in ANK stands for ‘norm,’ and there is no sense in laying down a norm that cannot be followed. ‘Do what you can to be entitled to assert that p!’ makes sense. So does ‘Believe only what you have reason to think is true!’ But not ‘Believe only what is true!’ That is not something that is wholly up to one. It is, in part; that part is what the first two injunctions speak to. But not wholly: there is nothing one can do to make it the case that what one believes is true. That is up to the world.

On this proposal, the definition of knowledge would be a two-part one, combining a usable criterion of knowledge attribution and a condition requiring that the proposition said to be known be true:

AK One knows that p iff (i) one is entitled to assert that p and (ii) p is true.

For better or for worse, I reject what may seem a friendly amendment. I am willing to defend the more radical claim that ANK is all we need. Were we to accept AK as a definition of what we attribute to someone when we say that he knows, we would not be in a position to make many – if any – warranted attributions. That is how skepticism gets a foothold. ANK allows us to continue to say that people know the many ordinary things they do – what Lewis (1996) thought could be secured only by going contextualist. We are right to do so even if we have to concede that we cannot be certain that we are right. To this extent, we have to agree with the skeptic. But we do not have to agree that one knows only if one is certain in the sense the skeptic has in mind, only if one knows that one satisfies AK. That is, indeed, something one cannot know. But that does not mean that one cannot satisfy ANK and that we cannot often know that someone does. And even though we cannot know that we satisfy AK, sometimes we do. What the skeptic is telling us is that we cannot be certain that we ever do, even if by the lights of ANK we can say we do. But ordinary knowledge attribution does not require that kind of certainty.

ANK highlights the forensic aspect of the concept of knowledge. If, after a properly conducted trial, someone is found guilty of a crime, the fact that evidence exonerating him comes to light later does not mean that the verdict was improper. We say that we were mistaken in thinking that he was guilty, not that we were wrong in declaring him to be so.
In the same way, as long as someone satisfies the entitlement condition, it is proper for him or for us to say that he knows, even if it turns out that he did not because, as we later find out, the proposition in question is false. All this is perfectly compatible with saying, with perfect propriety, that our forbears did not know that the Earth was flat. And our saying this is compatible with their saying, truly, that they did. And we can say, truly, if we satisfy ANK, that we know things even though our descendants may decide that we did not. I know, I know… We are uncomfortable with saying that it is true now that someone knows something and it may be true later that he did not know it. And we are even more uncomfortable with saying that someone knows something that is not the case. Is that not why ‘I know that p but p is false’ and ‘He knows that p but p is false’ are unacceptable? On any account of knowing that includes a truth condition, uttering these amounts to uttering a contradiction. Not so on the present account, on which the first comes out as ‘I am entitled to assert that p but −p’ and the second as ‘He is entitled to assert that p but −p.’ To utter these is not to utter contradictions. To claim (rightly or wrongly) that one is entitled to assert something and yet assert its negation, while strange and perhaps a sign of irrationality, is not to contradict oneself. Neither is claiming (rightly or wrongly) that someone else is entitled to assert something and yet to assert the opposite. Saying that he knows that p and I know that −p would not be, either, since I may well think that we are each entitled to assert the respective propositions, contradictory though they may be. Thus, the admitted surface infelicity of such utterances does not show, as it is often claimed, that only true propositions can be known.15 ANK delivers a sufficient condition even for present-tense attributions. Hard as we may find biting this bullet, it is better than the alternative of biting the skeptic’s. Hobson’s choice, I say.

ANK has a context-relative aspect and retains part of the idea behind contextualism. But it avoids the untoward consequences to which the latter’s currently most popular version, so-called attributor contextualism, leads. Whether one is entitled to assert something may be allowed to depend on how much precision is required in the context in which the assertion is made. This may seem to be the same kind of context-relativity as that supposedly present in the well-known airport and bank examples of Cohen and DeRose, respectively (Cohen 1999; DeRose 1992). But there are three important differences. First, the aspects of the context that govern how much precision is required do not depend on the psychology of an attributor. They are objective, interpersonal facts about what is being discussed. If it is the rough shapes of countries, saying that France is hexagonal whereas Italy is not will pass; if we are in cartography class, it will not.
Which context we are in does not depend on any of the factors invoked by the attributor contextualist, such as the importance of the question to a particular attributor. ANK speaks for us, for what we think is important in the context. It does not individuate contexts by what is important to particular attributors of knowledge. Second, for this reason, it is not about attributions of knowledge at all. It is about the attributability of knowledge. Third, in saying that someone is entitled to assert something in one context but not in another we are not saying different things about him in the respective contexts. What about a conversation in which not all the parties think they are in the same context? Once again, what matters is not in what context they think they are but in what context we have reason to believe they are. As long as we have such reason, we will have reason to regard the dissenting party as confused and in error in what he asserts or claims explicitly to know. In the cases where it is not clear – to us – what the context is, the obvious thing to say is that the parties are talking past each other and that we cannot give an answer where we do not know what the question is.

Disagreements between the skeptic and the non-skeptic often fall under the latter heading. The appearance of conflict, so worrying to Lewis, between our thinking that we have a lot of mundane knowledge and our being troubled by the skeptic’s arguments is, indeed, a case in point. What makes that case special is that we are both parties. But AK can help here. As I suggested earlier, its first clause suffices as a criterion by which to judge whether someone knows something. In not being satisfied with it, the skeptic is in effect claiming that the context is one in which ANK’s entitlement condition cannot be satisfied unless AK’s second condition is satisfied and, moreover, unless we know that it is. But his claiming that that is the context does not make it so. The context, therefore, is the non-skeptical one. The fact that we are willing to accept ordinary knowledge claims shows that we think that that is the context. By ‘we’ we refer to ourselves, thereby showing ourselves to be non-skeptics, even if momentarily troubled by the skeptic’s qualms. Furthermore, in insisting that the second clause must be treated as part of the criterion, the skeptic is asking us to use that for which we have evidence as part of the evidence. The truth of a proposition is not evidence of its truth, and if we think that AK is satisfied, our grounds for thinking that it is cannot include the proposition’s being true. It is the fact that ANK is satisfied that entitles us to think that AK is, even though it may not be.

ANK has some additional virtues. First, denying as it does that knowing requires believing, it need not address the question whether believing is a mental state. This is a virtue, for while it may seem just obvious that it is, it is in fact not clear that things are as they seem. If we accept that there are tacit beliefs – and it is hard to deny that there are – it is far from obvious in what sense these can be thought of as mental states.
Second, it enables us to give a straightforward answer to the vexed question whether knowing requires knowing that one knows: no, it does not. Third, it allows us to give an affirmative answer to the question whether knowledge can come from non-knowledge. It can: we learn, come to know things we did not know. This does not mean that we know these things based on our former ignorance.
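The two principles that organize the chapter can be set out schematically. The shorthand below is mine, not Biro’s: read E(S, p) as ‘S is entitled to assert that p’ and K(S, p) as ‘S knows that p.’

```latex
% Schematic shorthand (mine, not the author's):
%   E(S,p): S is entitled to assert that p
%   K(S,p): S knows that p
\begin{align*}
\text{ANK:}\quad & K(S,p) \leftrightarrow E(S,p)\\
\text{AK:}\quad  & K(S,p) \leftrightarrow \bigl(E(S,p) \land p\bigr)
\end{align*}
```

On this rendering, ‘He knows that p but p is false’ comes out under ANK as E(S, p) ∧ ¬p, which is consistent, whereas under AK it entails p ∧ ¬p, a contradiction – the formal shape of the point made above about the surface infelicity of such utterances.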

Notes
1. For what it is worth, my view is that it must, and the examples sometimes thought to show otherwise involve overlooking known suppressed premises (Warfield 2005). In Warfield’s much discussed lecture-handout case, I do not conclude that my hundred copies will be enough from knowing that there are exactly fifty-three people present – which I do not, having miscounted – but from knowing that there are far fewer than a hundred – which I do. In fact, I do not even have the false belief that there are fifty-three, having stopped counting long before reaching that number. Of course, that is not how Warfield describes his case: he has me count – but miscount – all the people present. I am suggesting that that would not happen in real life. Suppose, though, that the room is crowded, and it is not obvious that there are fewer than a hundred people. Now I have to count them all and do so carefully. I come up with ninety-eight. I think to myself, ‘It looks like I (may) have enough… I hope so!’ I do not think that I know, and I would surely be right not to claim that I did.
2. Consider the body of knowledge known in the London cab trade as ‘the knowledge.’ A would-be taxi driver is deemed to have mastered it if he can answer questions on it. The same notion is in play in the Balliol rhyme about Benjamin Jowett: ‘I am Master of this college, What I don’t know isn’t knowledge.’
3. With their penchant for pleonasm, Americans often speak of someone’s having expert status.
4. Being entitled to assert something should not be confused with being in a position to know, something discussed by a number of recent writers (Yli-Vakkuri & Hawthorne, 2022). To say that someone is in a position to know obviously cannot be an analysis of what it is to know, since ‘know’ appears in the analysans, as it does not in the one offered here.
5. ‘Being entitled’ should be understood as short for ‘being epistemically entitled.’ One can be epistemically entitled to assert something but not be legally, politically, or socially entitled to do so. I may know the result of the election, or the winner of the lottery, but it may not be my office, or right, to announce it.
6. Foley (2012) and Klein (2012) are just two examples of writers explicitly endorsing (1). Austin is a rare dissenter (1946, p. 171). Another is Vendler (1972), who adduces persuasive linguistic reasons for denying both and for the stronger claim that knowing and believing are mutually exclusive. Ring (1977) and, earlier, Cook Wilson (1926) also maintain, for different reasons, that believing is incompatible with knowing. The former argues that once one knows something one ceases to believe it. He assumes that knowing and believing are both (mental) states, but each has properties the other could not have, and they are therefore not only distinct but also mutually exclusive. The latter also thinks of knowing as a mental state, but as a sui generis, unanalyzable, one. On the present account, one may well believe something one knows, since (at least) one of believing and knowing is not being in a particular mental state. Those of us entitled to assert that Rome was not built in a day may, and typically will, believe that it was not, even though we do not have to. I may think, for whatever quirky reason, that it was.
7. You are, of course, entitled to the belief. But here, too, one can be entitled to believe something without believing it and without realizing that one is so entitled.
8. It may be suggested that the right thing to say about such cases is that the subject does not know but is merely in a position to know. However, it is not clear what this means. It may seem natural to say that in the IRS case it means that he is able to find out the date of the hearing by opening the letter. But, first, that does not fit the case where he did open it, read it, but forgot its contents. Second, to say that one is able to find out what is the case presupposes that one does not already know it, which begs the question against the present proposal, according to which the subject does know, since he is entitled to assert what is the case, being in possession of information concerning it.
9. Arguments for this, different from those on offer here, can be found in Holguin (in press). He proposes to replace the belief condition with a think condition, where thinking that something is a certain way is one’s best guess as to how that something is. It is not clear how that is different from believing that something is probably so. In any case, in the examples considered here, there is no thinking that p or guessing that p: p is not entertained at all.
10. You teach your first-grade class that the Earth is round, even though you are a closet flat-Earther. The students, who rightly see you as an expert, learn that the Earth is round (cp. Lackey (2007) on ‘selfless assertion’).
11. Foley has sensible things to say about what ‘important’ comes to in this context. But the present point is independent of that question.
12. The subjunctive conditional ‘if one were entitled to assert that p, one would be justified in believing that p if one did’ is true whether one believes that p or not. Thus, the intuition that justification (‘having an account’) has something importantly to do with knowing is accommodated by ANK, even in the absence of a belief condition.
13. Almost, but not quite: for dissent, see Hazlett (2010, 2012); Buckwalter and Turri (2020).
14. Hazlett (2010) argues that this is not as obvious as it is often taken to be, and I agree. But his reasons will play no role in what follows. Others (Buckwalter & Turri, 2020) have suggested that the truth condition may be relaxed: a proposition has to be merely nearly true to be knowable. The arguments to come cut against such a view as much as against one requiring truth.
15. Hazlett (2012) argues for the same conclusion, seeing the source of infelicity as presupposition failure.

References
Austin, J. L. (1946). Symposium: Other minds II. Proceedings of the Aristotelian Society, Supplementary Volumes, 20, 148–187.
Buckwalter, W., & Turri, J. (2020). Knowledge and truth: A skeptical challenge. Pacific Philosophical Quarterly, 101(1), 93–101.
Cohen, S. (1999). Contextualism, skepticism, and the structure of reasons. Philosophical Perspectives, 13, 57–89.
Cook Wilson, J. (1926). Statement and inference with other philosophical papers (2 vols.). Clarendon Press.
DeRose, K. (1992). Contextualism and knowledge attributions. Philosophy and Phenomenological Research, 52(4), 913–929.
Foley, R. (2012). When is true belief knowledge? Princeton University Press.
Hazlett, A. (2010). The myth of factive verbs. Philosophy and Phenomenological Research, 80(3), 497–522.
Hazlett, A. (2012). Factive presupposition and the truth condition on knowledge. Acta Analytica, 27(4), 461–478.
Klein, P. D. (2012). What makes knowledge the most highly prized type of belief? In T. Black & K. Becker (Eds.), The sensitivity principle in epistemology (pp. 152–169). Cambridge University Press.
Lackey, J. (2007). Norms of assertion. Noûs, 41(4), 597–626.
Lewis, D. (1996). Elusive knowledge. Australasian Journal of Philosophy, 74(4), 549–567.
Ring, M. (1977). The cessation of belief. American Philosophical Quarterly, 14(1), 51–59.
Sellars, W. (1956). Empiricism and the philosophy of mind. In H. Feigl & M. Scriven (Eds.), Minnesota studies in the philosophy of science (Vol. I, pp. 253–329). University of Minnesota Press.
Vendler, Z. (1972). Res cogitans: An essay in rational psychology. Cornell University Press.
Warfield, T. (2005). Knowledge from falsehood. Philosophical Perspectives, 19, 405–416.
Williamson, T. (2000). Knowledge and its limits. Oxford University Press.
Yli-Vakkuri, J., & Hawthorne, J. (2022). Being in a position to know. Philosophical Studies, 179, 1323–1339.

17 Knowledge without Factivity
Kate Nolfi

17.1 Introduction

This chapter investigates the theoretical consequences of marrying an intuitively attractive strategy for developing a philosophical account of knowledge with a promising but somewhat unorthodox way of thinking about epistemic evaluation.1 The strategy for developing a philosophical account of knowledge that captures my attention here is just the sort of virtue-theoretic strategy with which contemporary epistemologists are already exceedingly familiar. But the sort of approach to thinking about epistemic evaluation which interests me here is fundamentally action-oriented. In slogan form, the foundational commitment of this action-oriented approach is that believing is, most fundamentally, for acting. So, in contrast with, e.g., the sort of alethic approach that takes truth to be a kind of fundamental epistemic value, aim, or goal, an action-oriented approach takes the fact that our doxastic states have a distinctive sort of job, role, purpose, or function within our mental economies in the service of action-production to be the fundamental focal point for epistemic evaluation.

My first aim is to sketch the contours of the kind of theoretical account of knowledge that results from marrying the virtue-theoretic strategy with this sort of action-oriented approach to thinking about epistemic evaluation. I’ll begin by presenting and briefly trying to motivate the virtue-theoretic strategy (Section 17.2). Then I’ll work to integrate an action-oriented approach to thinking about epistemic evaluation into a virtue-theoretic framework for theorizing about the nature of knowledge (Section 17.3). My second aim is to explore certain theoretical consequences of the resulting action-oriented virtue-theoretic account of knowledge (Sections 17.4 and 17.5). In particular, I’ll argue that embracing an action-oriented virtue-theoretic account very likely requires abandoning any sort of commitment to the factivity of knowledge.2 That is, embracing an action-oriented virtue-theoretic account of knowledge should lead one to expect that, at least sometimes, it is possible for S to know that p even while p is, strictly speaking, false.

DOI: 10.4324/9781003118701-24

I’ll suggest that, although this consequence is unorthodox and perhaps initially off-putting, we ought not regard it as a serious theoretical liability.

17.2 Getting a Fix on What Knowledge Is, for the Purposes of Theorizing

Let me begin by considering two powerful intuitions about knowledge. The first of these, very roughly put, is that knowledge that p is as good as it gets, epistemically speaking. The second, also put very roughly, is that knowing that p is the kind of thing that an epistemic subject achieves or accomplishes (rather than the kind of thing that merely happens to an epistemic subject). Of course, both of these slogan-esque formulations require some unpacking. Intuitively, having knowledge that p is better than having a doxastic state toward p that falls short of knowledge but is, nevertheless, rational, justified, reliably formed, true/accurate, etc. Knowledge, then, is particularly and distinctly epistemically praiseworthy, valuable, or worth having. And when S knows that p, then S’s doxastic state toward p thereby enjoys an especially elevated epistemic status. Put differently, S’s doxastic state toward p attains an especially praiseworthy species of epistemic success (whatever epistemic success comes to). More specifically, a subject who knows that p is a subject whose doxastic state toward p enjoys the highest epistemic status that a doxastic state, considered in isolation, can possibly attain. And, at least when it is considered in isolation, a doxastic state toward p is most praiseworthy, epistemically speaking, by virtue of constituting knowledge that p.3 Of course, if we widen the scope of our evaluative focus beyond an isolated doxastic state toward p, we may find that there are further epistemic goods that are more praiseworthy, valuable, or worth having than (mere) knowledge. Understanding is a plausible candidate. Nevertheless, when we consider a single doxastic state toward a proposition in isolation, knowledge is as good as it gets.
The intuition that knowledge that p is something an epistemic subject achieves or accomplishes (rather than the kind of thing that merely happens to an epistemic subject) is meant to capture a crucial difference between knowing that p and coming to believe that p, e.g., as a direct result of having been bumped on the head. When S knows that p, S’s epistemic success with respect to p is appropriately credited to S, themselves. Or perhaps more precisely, we might say that S’s epistemic success with respect to p is appropriately credited to what we might label S’s cognitive character. Thus, an epistemic subject who knows that p is a subject whose epistemic success with respect to p is no mere fluke or happy accident. And when an epistemic subject knows that p, it makes sense to praise the subject, themselves, in virtue of the fact that their doxastic state toward p constitutes knowledge.

Taken together, these two intuitions give us a preliminary fix on a target for epistemological theorizing:

S knows that p iff S can be credited with the highest or most praiseworthy epistemic status that it is possible for a subject to attain simply by virtue of holding a doxastic state toward p, considered in isolation.

In the remainder of this section, I want to introduce a now-familiar strategy for supplying a theoretical account of knowledge that this intuition-driven way of picking out the phenomenon makes especially attractive. I won’t attempt to offer anything like a thoroughgoing defense of this virtue-theoretic strategy here.4 Instead, my much more modest aim is to briefly rehearse how the virtue-theoretic strategy vindicates both the intuitions with which I opened the discussion. My hope is that this will provide sufficient motivation for my adoption of the virtue-theoretic strategy in the remainder of the chapter.

The virtue-theoretic strategy proposes the following basic schema for a theoretical account of knowledge:

S knows that p if and only if S’s epistemic success in believing that p is attributable (in the right way) to S’s epistemic virtue, skill, or competence.

This schema, of course, will require significant filling in if it is to supply a complete theoretical account of knowledge. But the schema itself is already enough to explain the intuitive appeal of both the idea that knowledge is as good as it gets, epistemically speaking, and the idea that knowing that p is the kind of thing for which an epistemic subject, themselves, deserves credit. If knowing either results from or involves a subject exercising their epistemic virtue, skill, or competence, then it makes sense that knowledge is the kind of thing that the subject achieves or accomplishes.
After all, we generally regard that which results from or involves a subject’s exercise of virtue, skill, or competence as an achievement or accomplishment for which the subject is rightly credited. The sense of movement in the night sky that the paint-on-canvas of The Starry Night5 manages to evoke is a product of van Gogh’s adroit brushwork and artistic vision. It is no lucky accident that the paint-on-canvas of The Starry Night successfully creates the relevant effect – rather, the effect is a testament to van Gogh’s impressive skill as a painter. And for this reason, it constitutes a kind of achievement or accomplishment for which van Gogh, himself, deserves praise. Moreover, we generally regard success that is attributable to virtue, skill, or competence as more valuable, praiseworthy, or worth having than equal success that is not similarly attributable to virtue, skill, or competence. Imagine Serena Williams executes a masterful serve that her opponent cannot return, precisely controlling the trajectory and force of the tennis ball to exploit a subtle weakness in her opponent’s form. Serena’s having won the point is especially impressive, especially admirable, especially praiseworthy precisely because it is attributable to her impressive skill as a tennis player. It is certainly conceivable that a somewhat less competent player might win a point facing the same opponent by serving in a way that results in the ball flying over the net with identical trajectory and speed. Still, that this less experienced player wins the point is, in large measure, a matter of luck rather than skill. The player has not recognized and does not intend to exploit the opponent’s weakness, and the player’s serve does not manifest the same level of control over the trajectory and force of the ball. And for this reason, although the opponent is equally unable to return both serves, the less experienced player’s serve is less impressive, less admirable, and less praiseworthy than Serena’s is. Because Serena’s serve wins the point owing to Serena’s skill, Serena’s serve is as impressive or admirable or praiseworthy as a tennis serve can be. More generally, when we restrict our attention to a particular isolated performance, we typically judge that success owing to virtue, skill, or competence is as good as it gets. And this explains the intuitive appeal of the idea that knowledge is the highest or most praiseworthy epistemic status that it is possible to attain when we restrict our attention to a subject’s doxastic state toward p, considered in isolation.

By providing the basic schema for a theoretical account of knowledge, the virtue-theoretic strategy supplies a kind of framework for theoretical inquiry aimed at developing a full and complete account of the phenomenon. Recall that this basic schema proposes that S knows that p if and only if S’s epistemic success in believing that p is attributable (in the right way) to S’s epistemic virtue, skill, or competence. Accordingly, the basic schema centers subsequent theoretical inquiry on the following cluster of questions. First, precisely what does it take for S’s believing that p to be epistemically successful? Second, what exactly does epistemically virtuous, skillful, or competent believing involve?
And third, when is the epistemic success of a subject’s belief (as opposed to, e.g., the fact that the belief’s content is p rather than q, or the fact that the subject believes that p) attributable to the subject’s epistemic virtue, skill, or competence? There are, of course, a variety of different ways in which the theorist might answer each of these questions. And each distinct package of answers will yield a distinct virtue-theoretic account of knowledge, each enjoying its own advantages and facing its own set of challenges. By way of illustration, I will conclude the present section by sketching the rough contours of one prominent way of deploying the virtue-theoretic strategy to develop a full and complete account of knowledge. This familiar way of deploying the virtue-theoretic strategy yields what I’ll label an alethic virtue-reliabilist account of knowledge. First, on an alethic virtue-reliabilist account, S’s believing that p is epistemically successful, in the sense relevant for knowledge, just in case p is true. Epistemic success, then, is understood in terms of truth or accuracy: for S’s doxastic state toward p to be epistemically successful is for S’s doxastic state to represent the facts as they are. As a consequence, an alethic virtue-reliabilist account entails the factivity of knowledge. Knowledge, on this sort of account, requires true belief (and, of course, more besides). True belief turns out to be a metaphysically and/or conceptually necessary condition for knowledge. Second, an alethic virtue-reliabilist account maintains that virtuous, skillful, or competent belief regulation is just the sort of belief regulation that reliably (in normal environments) engenders epistemic success. S exercises epistemic virtue, skill, or competence just in case S regulates their doxastic state(s) in way(s) that, at least in normal circumstances and for typical epistemic subjects, tend to produce and sustain epistemically successful doxastic states.6 And, of course, when paired with the idea that epistemic success is to be cashed out in terms of truth or accuracy, what this comes to is that belief regulation is epistemically virtuous, skillful, or competent if and only if it is the sort of belief regulation that reliably results in true/accurate beliefs. Finally, an alethic virtue-reliabilist account holds that the epistemic success of S’s belief that p is attributable to the subject’s epistemic virtue, skill, or competence when it is the subject’s epistemic virtue, skill, or competence that explains (in the right way, e.g., by avoiding deviant causal chains) why the subject achieves epistemic success in believing that p.
By way of modeling the relevant sort of explanation: perhaps the epistemic success of S’s belief that p manifests S’s epistemic virtue, skill, or competence in regulating their doxastic state(s) toward p in much the same way that the shattering of a porcelain vase upon hitting a tiled floor manifests the vase’s fragility.7 Perhaps the epistemic success of S’s belief that p is caused by an exercise of S’s epistemic virtue, skill, or competence in regulating their doxastic state(s) toward p in much the same way that Serena Williams’s opponent’s failure to return her serve is caused by Serena’s having exercised her impressive skill as a tennis player in executing the serve.8 However the relevant sort of explanation is modeled, the resulting account of knowledge maintains that S knows that p when the reliability of the way that S regulates their present doxastic state toward p in producing and sustaining true beliefs generally explains (in the right way) the fact that S’s present belief that p is true. Of course, a great deal more could be (and has been) said to fill in the details of this sort of alethic virtue-reliabilist account of knowledge.9 But the preceding paragraphs provide an illustration of what operating within the framework for inquiry that the virtue-theoretic strategy provides would involve that is sufficiently detailed for my purpose here. In what follows, my goal is to pursue the virtue-theoretic strategy, but in a different direction, following a different trajectory, toward an alternative to the sort of now-familiar alethic virtue-reliabilist account of knowledge I’ve briefly sketched above.
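The virtue-theoretic schema and its alethic filling-in can be glossed in a schematic shorthand; the notation is mine, not Nolfi’s, and is only a rough sketch of the view.

```latex
% Rough schematic gloss (my shorthand, not the author's):
%   K(S,p):    S knows that p
%   Succ(S,p): S's belief that p is epistemically successful
%   Comp(S):   S's epistemic virtue, skill, or competence
\begin{align*}
\text{Schema:}\quad & K(S,p) \leftrightarrow \mathrm{Attrib}\bigl(\mathrm{Succ}(S,p),\, \mathrm{Comp}(S)\bigr)\\
\text{Alethic reading:}\quad & \mathrm{Succ}(S,p) \leftrightarrow p
\end{align*}
```

On the alethic reading, K(S, p) entails p, so factivity falls out of the schema automatically; substituting a different reading of Succ, as the action-oriented conception developed in the next section does, removes that entailment.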


17.3 Developing an Action-Oriented Conception of Epistemic Success The idea that epistemic success with respect to p (at least the sort of epistemic success that is relevant in the context of knowledge that p) is properly understood in terms of truth or accuracy certainly has deep historical roots and enjoys a great deal of contemporary sympathy. Developing the familiar sort of virtue-theoretic account of knowledge I briefly sketched in the preceding section involves, in essence, treating this alethic conception of epistemic success as a kind of starting point for deploying the virtue-theoretic strategy. In the remainder of this chapter, however, I want to explore what it might look like to adopt an alternative starting point, one that has both intuitive appeal and historical precedent but has not yet received sustained attention that I think it merits in contemporary discussions. That is, I want to explore what might be involved in deploying the virtue-theoretic strategy to develop an account of knowledge if one does not begin with an alethic conception of epistemic success, but instead with what I will call an action-oriented conception of epistemic success. The core line of thought animating this action-oriented conception is that a doxastic state is a mental tool of sorts, one that has a distinctive job to perform in the service of action. Accordingly, whether S achieves epistemic success in believing that p is determined by whether S’s belief is well-suited to subserve action in the distinctive way in which beliefs paradigmatically do. I won’t provide anything approaching a systematic defense of an action-oriented conception of epistemic success here, but I will take a moment to very briefly motivate this conception as an alternative to its more popular alethic competitor. 
First, expressions of sympathy for the core line of thought animating an action-oriented conception of epistemic success are relatively commonplace both in the history of philosophy and in contemporary philosophical discussions.10 Additionally, research in experimental philosophy suggests that knowledge judgments closely track judgments of suitability for guiding action.11 Thus, both reflective intuition and ordinary practice seem to supply some preliminary support for an action-oriented conception of epistemic success. At the very least, then, it is worth investigating where the adoption of an action-oriented conception of epistemic success might lead a virtue-theoretic inquiry aimed at developing an account of knowledge.

An action-oriented conception of epistemic success maintains that epistemic success is properly understood in terms of and by reference to the demands of the distinctive job, role, or function that doxastic states stand ready to perform within the mental economies of epistemic subjects in the service of situationally appropriate action-production. Epistemically successful belief is, in essence, belief that is well-suited to meet these demands.

292  Kate Nolfi

A bit of analogical thinking may be helpful in unpacking the central idea here. Consider a familiar tool: a standard carpenter’s hammer. This hammer is a tool that paradigmatically operates in a particular way (one that is notably different from, e.g., a nail gun) to achieve a certain result (wherein a straight nail is embedded into a piece of wood at a chosen angle) by a particular sort of user (a typical adult human being) in a particular range of normal use-cases (e.g., not inside a black hole, not when the ambient temperature is pushing 1000 degrees). Of course, some hammers make for better nail-driving tools than others. And this is because the task of nail-driving makes particular demands on the size, weight, shape, etc. of a hammer. A hammer is a better tool to the extent that the hammer’s features make it well-suited to drive nails in the characteristic way that hammers are meant to by a typical adult human being in normal circumstances. Thus, if one’s goal is to spell out what it takes for a hammer to be a truly excellent tool, one must understand a great deal about the distinctive way in which a hammer is meant to be used for nail-driving by its paradigmatic user in a normal use-case. Analogously, then, on the action-oriented conception, to say that S achieves epistemic success in believing that p is, in effect, to say that S’s belief that p manifests excellence as a certain kind of tool for situationally appropriate action-production within a certain sort of mental economy. Accordingly, epistemic success marks a belief’s well-suitedness to facilitate action in the characteristic way that beliefs paradigmatically do within the mental economy of a typical epistemic subject in “normal” circumstances.
So, for the proponent of the virtue-theoretic strategy who hopes to employ an action-oriented conception of epistemic success, spelling out what it takes for a belief to achieve the sort of epistemic success required for knowledge will involve spelling out what it takes to perform this particular job, role, or function well. And to accomplish this task, the proponent of an action-oriented virtue-theoretic account will first need to identify and describe the distinctive action-oriented job, role, or function that doxastic states paradigmatically perform within the mental economies of epistemic subjects like us.

As a somewhat metaphorical and somewhat simplistic starting point, we might characterize the job, role, or function that our doxastic states paradigmatically perform as being map-like. When all goes well, a subject’s beliefs make their actions sensitive to the facts of their particular circumstances in much the same way that a good map makes a traveler’s decision about which way to turn sensitive to the relative positioning of their present location and their destination, as well as to facts about the surrounding geography. A good map makes it possible for the traveler to both identify and determine the likely consequences of traveling in the various directions in which they might choose to travel. Similarly, the proposal here is that, in the good case, a subject’s beliefs make it possible for the subject to both identify and determine the likely consequences of the various courses of action (or inaction) open to them, given their present circumstances. Put a bit more carefully, a subject’s doxastic states are predictive tools. Their distinctive, action-oriented job, role, or function is to predict the possible consequences of various courses of action that the subject might elect to pursue in any given set of circumstances they may face so as to reveal those courses of action that are most appropriate in those particular circumstances. When all goes well, a subject’s beliefs make it possible for the subject to anticipate the likely outcomes of the range of different courses of action open to the subject in those circumstances. And these sorts of predictions, in turn, ground a risk-sensitive assessment of what course(s) of action are most appropriate that is genuinely responsive to the facts of the circumstances at hand. This is the characteristic way in which a subject’s doxastic states function to facilitate situationally appropriate action-production. And, thus, on an action-oriented conception, epistemically successful beliefs are beliefs that are well-suited to function in this characteristic way within the sort of mental economy that epistemic subjects like us have.

What does it take for an action to be situationally appropriate in the sense relevant here? For the present, let me adopt the following as a kind of incomplete working hypothesis: an action is situationally appropriate just in case the action constitutes a proper response to various ends, goals, or values at stake in the particular circumstances at hand. Notice that the proponent of an action-oriented conception of epistemic success who embraces this hypothesis faces further theoretical choice-points.
They might conceive of the various ends, goals, or values that might be at stake in any particular circumstances as whatever ends, goals, or values the relevant epistemic subject-agent has endorsed or adopted. Alternatively, they might endorse a more intersubjective or objective conception of the various ends, goals, or values at stake in any particular circumstances. And, of course, there are a number of different ways in which one might conceive of what makes for a proper response to the relevant ends, goals, or values. By way of illustration, a proper response might be one that successfully secures, promotes, respects, or preserves the ends, goals, or values in question. Or perhaps which of these different relationships a proper response will bear to the particular end, goal, or value in question will be different in different contexts. So, fleshing out the working hypothesis I offer here would require a rather significant detour into the domains of ethical theorizing and the philosophy of action. But it should not be surprising that embracing an action-oriented conception of epistemic success would hitch epistemological inquiry to ethical inquiry in this way. After all, understanding what it takes for a hammer to be an excellent tool for driving nails requires understanding what constitutes excellence in the result that the hammer’s use is meant to facilitate. So, if epistemic success effectively marks a doxastic state’s excellence as a specific sort of tool for action-production, we should expect that a full account of what constitutes epistemic success will have to appeal to an account of what excellent action involves. Thankfully, this detour can wait for another occasion. My present ambition is to explore the contours of an action-oriented, virtue-theoretic account of knowledge, with an eye toward contrasting this sort of account with its most familiar competitors. And, as we will see, understanding situationally appropriate action as action which constitutes a proper response (whatever this comes to) to various ends, goals, or values at stake in the particular circumstances at hand (however it turns out we ought to conceive of these) will more or less suffice for this purpose.

17.4 From an Action-Oriented Conception of Epistemic Success to an Action-Oriented Virtue-Theoretic Account of Knowledge

Recall that to embrace the virtue-theoretic strategy is to embrace the following schema for a theoretical account of knowledge: S knows that p if and only if S’s epistemic success in believing that p is attributable (in the right way) to S’s epistemic virtue, skill, or competence. An action-oriented conception of epistemic success maintains that S achieves epistemic success in believing that p when S’s belief that p is well-suited to function within a typical epistemic subject’s mental economy as a predictive tool in the service of situationally appropriate action-production. So, appealing to an action-oriented conception of epistemic success in fleshing out the virtue-theoretic schema for an account of knowledge yields the following rather clunky formulation: S knows that p if and only if S’s belief that p’s being well-suited to function within a typical epistemic subject’s mental economy as a predictive tool in the service of situationally appropriate action-production is attributable (in the right way) to S’s epistemic virtue, skill, or competence. Somewhat abbreviated, S knows that p just in case the well-suitedness of S’s belief that p to serve as a predictive tool for action-production is attributable (in the right way) to S’s epistemic virtue, skill, or competence.

At best, this formulation yields an incomplete account of knowledge, one that requires further elaboration. A fully developed action-oriented virtue-theoretic account of knowledge must have something to say about what, exactly, epistemically virtuous, skillful, or competent believing involves. The account must also have something to say about when it is that the epistemic success of a subject’s belief is attributable (in the right way) to the subject’s epistemic virtue, skill, or competence.
But on both of these fronts, I think, an action-oriented virtue-theoretic account of knowledge can and should simply crib from what proponents of the sort of alethic virtue-reliabilism that has dominated contemporary discussions have already offered.

Recall that, for the alethic virtue-reliabilist, a subject exercises epistemic virtue, skill, or competence in regulating their doxastic states when the ways in which the subject regulates their doxastic states reliably engender epistemic success. So, epistemically virtuous, skillful, or competent belief regulation is belief regulation that is well-suited, under normal circumstances, to equip typical epistemic subjects with a corpus of epistemically successful beliefs. Of course, the alethic virtue-reliabilist marries this sort of reliabilist conception of epistemic virtue, skill, or competence with an alethic conception of epistemic success. But it is easy to see that a reliabilist conception of epistemic virtue, skill, or competence could be married with an action-oriented conception of epistemic success instead. And on the resulting action-oriented virtue-theoretic account, exercising epistemic virtue, skill, or competence in believing that p is just a matter of regulating one’s belief that p in ways that would reliably (under normal circumstances) engender belief that is well-suited to serve as a predictive tool for action-production.

Similarly, an action-oriented virtue-theoretic account of knowledge can crib from the work their alethic counterparts have already done in spelling out what it takes for the epistemic success of S’s belief to be attributable to S’s epistemic virtue, skill, or competence in the sense required for knowledge. Accordingly, the action-oriented virtue-reliabilist can say that for S’s epistemic success in believing that p to be attributable to S’s epistemic virtue, skill, or competence, S’s having exercised epistemic virtue, skill, or competence in believing that p must explain (in the right way, e.g., by avoiding deviant causal chains) why S achieves epistemic success in believing that p.
Recalling the discussion above, perhaps this is a matter of S’s epistemic success in believing that p manifesting or being appropriately credited to S’s epistemic virtue, skill, or competence. Regardless, the result is that S knows that p when the fact that the way(s) in which S actually regulates their present doxastic state toward p are ways of regulating belief that would reliably equip a typical epistemic subject in normal circumstances with a corpus of beliefs that are well-suited to serve as a predictive tool for action-production explains (in the right way, e.g., by avoiding deviant causal chains) the fact that S’s present belief that p is well-suited to serve as a predictive tool for action-production.

In effect, I’ve proposed that coupling an action-oriented conception of epistemic success with a more familiar reliabilist conception of epistemic virtue, skill, or competence and of when success is attributable to virtue, skill, or competence offers a promising alternative path to developing a virtue-theoretic account of knowledge. And the basic contours of the resulting account have now come into view:

An action-oriented virtue-theoretic account of knowledge: S knows that p iff

(a) S’s belief that p is well-suited to serve as a predictive tool for engendering situationally appropriate action,
(b) the patterns of belief regulation that produce and sustain S’s belief that p are generally effective, in normal circumstances, in equipping believers like S with a corpus of beliefs that are well-suited to serve as predictive tools for engendering situationally appropriate action,
(c) that (b) obtains explains (in the right way, e.g., by avoiding deviant causal chains) why (a) obtains.

In telescopic form, an action-oriented virtue-theoretic account maintains that when S knows that p, S’s belief that p is an excellent tool for action because the belief-regulating processes, mechanisms, or habits that produce and sustain S’s belief that p are the sorts of belief-regulating processes, mechanisms, or habits that reliably produce and sustain beliefs which are excellent tools for action.

17.5 An Action-Oriented Virtue-Theoretic Account of Knowledge: A Closer Look

The action-oriented virtue-theoretic account of knowledge cashes out what is involved in knowing that p by appeal to what is involved in being a specific sort of high-quality tool. I want to spend the remainder of this piece exploring an action-oriented virtue-theoretic account of knowledge in more detail. And so, it is perhaps worth beginning by highlighting, again, a few points about the way that we grade or rank the quality of tools, in general.

First, the features of an excellent tool are optimized relative to the features of a particular sort of paradigmatic user. The handle of an excellent hammer, for example, is shaped to be comfortably gripped by a typical adult human hand. And the weight of an excellent hammer is optimized so that the hammer can be comfortably swung to apply force to the head of a nail by an adult human with typical upper body strength. It is no mark against a hammer’s quality that it cannot be comfortably gripped and swung by a toddler, by someone who has lost their thumbs, or by someone whose arm muscles are significantly atrophied.

Second, the features of an excellent tool are optimized relative to the particular goal or end or purpose that the tool is for. An excellent carpenter’s hammer has a flat head positioned in a certain way relative to its handle because a carpenter’s hammer is for driving nails. Having a flat head is best here because nails have a flat top. By way of contrast, a carpenter’s screwdriver is for driving screws. The size and shape of the head, the relative positioning of the head and the handle, etc. on an excellent screwdriver differ from the size and shape of the head, the relative positioning of the head and the handle, etc. on an excellent hammer because what it takes to embed a screw in a piece of wood differs from what it takes to embed a nail in a piece of wood. It is no mark against a hammer’s quality that it does a poor job of driving screws, and it is no mark against the quality of a screwdriver that it does a poor job of driving nails. Moreover, what a particular sort of tool is for can be more or less specialized. Compare a screwdriver and a multitool that includes one component for driving screws. The multitool is not a lower quality or less excellent tool by virtue of being somewhat less efficient or less comfortable to use in driving screws than the screwdriver is. And this is because, although in some sense both tools are for driving screws, the screwdriver is only for driving screws, whereas the multitool is for much more besides. As a result, the standards of excellence with respect to which we evaluate the quality of the multitool are different from the standards of excellence with respect to which we evaluate the quality of the screwdriver.

Third, the features of an excellent tool are optimized relative to a particular sort of paradigmatic use-case (or a paradigmatic range of use-cases). It is no mark against the quality of a hammer that it cannot be used to drive a nail into a board where some obstruction makes it impossible to swing the hammer so as to connect with the head of the nail in the usual way. More outlandish use-cases are similarly irrelevant: a truly excellent hammer need not and is not likely to be an effective tool for driving nails in 160 mph winds, or at a temperature hot enough to melt steel, or in a possible world where a powerful demon’s sole ambition is to foil any attempt to embed a nail in a piece of wood.
The takeaway here is that the standards with respect to which we evaluate the quality of a tool are inevitably indexed (i) to a paradigmatic user (or range of users), (ii) to the tool’s particular (more or less specialized) end, goal, or purpose, and (iii) to normal use-cases. More than this, it would be nonsensical to attempt to evaluate the quality of a tool without indexing one’s standards of evaluation in these three ways.

An action-oriented virtue-theoretic account of knowledge treats epistemic success as a marker of a doxastic state’s excellence or high quality as a certain sort of tool. So, as holds generally, we should expect the standard for epistemic success that an action-oriented virtue-theoretic account of knowledge invokes to be indexed (i) to the cognitive economy of a paradigmatic sort of epistemic subject cum agent, (ii) to the particular predictive role that beliefs paradigmatically play in the production of situationally appropriate action within this sort of cognitive economy, and (iii) to normal epistemic environments. By extension, we should expect that what knowledge, itself, requires will be similarly indexed. Indeed, I have already implied as much in the preceding sections. But in the remainder of this piece, I want to focus on drawing out certain theoretical consequences of the way in which this sort of indexing is baked into the action-oriented virtue-theoretic account of knowledge. In particular, I’ll suggest that the way in which an action-oriented virtue-theoretic account indexes what knowledge requires (to (i) the cognitive economy of a paradigmatic sort of epistemic subject cum agent, (ii) the particular sort of predictive role that beliefs play in the production of situationally appropriate action within this sort of cognitive economy, and (iii) normal environments) strongly favors the thesis that knowledge is not factive. True belief is not, it turns out, a metaphysically and/or conceptually necessary prerequisite for knowledge. And, accordingly, the proponent of an action-oriented virtue-theoretic account ought to remain open to the possibility that there are cases where S knows that p, even though p is, strictly speaking, false. Moreover, I’ll argue that, although unorthodox and perhaps initially off-putting, we ought not to regard this result as a serious theoretical liability.

On an action-oriented virtue-theoretic account, to ask whether truth is a prerequisite or necessary condition for knowledge is, in essence, to ask whether truth is an inevitable feature of a high-quality predictive tool for situationally appropriate action-production. More carefully, it is to ask whether truth is an inevitable feature of belief that is optimized relative to (i) the mental economy of a paradigmatic sort of epistemic subject cum agent, (ii) the particular predictive role that beliefs paradigmatically play in the production of situationally appropriate action within this sort of mental economy, and (iii) normal environments. Characterizing the features of high-quality hammers requires a careful study of the demands imposed by optimization for use by a typical adult human for the purpose of driving nails in normal conditions. And it is by appeal to the demands of optimization that one explains why, e.g., high-quality hammers have a metal head, a handle that is more than three inches long, etc.
Similarly, then, if truth were an inevitable feature of epistemically successful belief, it would have to be that belief’s optimization for use as a predictive tool for situationally appropriate action-production in the mental economy of a paradigmatic epistemic subject cum agent in normal environments requires truth.

At first pass, then, it might seem that an action-oriented virtue-theoretic account of knowledge straightforwardly preserves the factivity of knowledge. After all, if S’s beliefs grossly misrepresent the facts, then S’s beliefs will be rather poorly suited to serve as predictive tools in the service of situationally appropriate action-production in normal environments. If S believes that a particular mushroom is safe to eat when, in fact, the mushroom is poisonous, the deployment of S’s belief as a predictive tool in the service of action-production in normal environments will almost certainly result in disaster. Whether S’s immediate concern happens to be S’s own nourishment, S’s security to be achieved by the elimination of a potential rival or threat, etc., S’s belief sets up a (potentially catastrophic) failure of situationally appropriate action precisely because it will give rise to misleading predictions about the likely consequences of potential actions S might perform. If S’s belief about whether or not the mushroom is poisonous is to serve S well as a predictive tool for situationally appropriate action-production in normal environments, then it certainly seems obvious that S’s belief will have to be true. So, generalizing from cases like this one, perhaps truth is, after all, an inevitable feature of a high-quality predictive tool for situationally appropriate action-production in normal environments. As Quine famously says, “creatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind.”12 And accordingly, it can seem that truth turns out to be a necessary prerequisite for knowledge, even on an action-oriented virtue-theoretic account.

17.6 Against the Factivity of Knowledge

Alas, a careful study of the demands imposed by the sort of indexed optimization built into an action-oriented standard for epistemic success suggests that the picture is rather more complicated than this first pass would indicate. After all, if an action-oriented virtue-theoretic account of knowledge is to tell us anything about what it takes for us (ordinary adult humans, living our lives in an ordinary—e.g., philosophical-demon-free, non-brains-in-vats—world) to know, then the action-oriented standard for epistemic success that this account invokes must be indexed to the kind of mental economy that we actually have, and to the way in which belief serves as a predictive tool within this kind of mental economy operating in a normal epistemic world. Moreover, there is good reason to think that strictly true (i.e., perfectly accurate) beliefs are not always maximally well-suited to serve as predictive tools for situationally appropriate action-production within this sort of mental economy. Yes, beliefs that wildly or radically distort the truth will not serve us well as predictive tools for situationally appropriate action-production. But given the characteristic idiosyncrasies and limitations of our (normal adult human) psychologies, sometimes beliefs that mildly distort the truth make, for us, better predictive tools in the service of situationally appropriate action-production than true, fully accurate beliefs would.13

At least from a certain perspective, this result ought not to be surprising. After all, we are biological organisms, and so evolution has shaped the kind of mental economy that we (normal adult humans) have into what it is. Evolutionary pressures rarely, if ever, produce the most elegant solutions to evolutionary problems. Instead, evolution often produces cheap, messy solutions that more or less get the job done.
The evolved mental economies that we actually have equip us with a way of responding to the facts of circumstances in action that allows us to cope with, and, indeed, capitalize on, the dynamism and complexity of our environment. But, of course, we ought to expect that our actual mental economies manage this feat rather more messily than would the sort of elegant, idealized mental economies that a priori reflection might lead us to imagine we have.

Mapping the characteristic idiosyncrasies and limitations of our mental economies and understanding the paradigmatic way in which belief operates therein is the kind of project that requires significant empirical work. But rather than tackling this project head-on, my strategy here will be to make some progress in characterizing the features of beliefs that are optimized to serve as predictive tools in our mental economies by way of a plausible and particularly illustrative case study. I suggest that the empirical hypotheses about normal adult human mental economies that this case study makes plausible deliver a kind of proof-of-empirical-possibility which makes it reasonable for the proponent of an action-oriented virtue-theoretic account of knowledge to give up on factivity.

Let me begin by considering an epistemic subject cum agent who exhibits significant risk aversion or fear of failure. The mechanisms or processes responsible for action-production within this subject’s mental economy systematically inhibit or avoid actions that are represented as carrying even some minimal-to-moderate risk/possibility of failure. These mechanisms or processes are, in effect, overly sensitive to predicted risk; they over-react to the perceived possibility of failure. Within the mental economy of an epistemic subject cum agent whose beliefs accurately represent the facts (and so accurately predict the risk/possibility of failure attending the various actions available in any particular circumstances), these mechanisms or processes will lead to exceedingly conservative behavior. At an extreme, they may even lead to a kind of fear-of-failure-induced action-paralysis on the part of this epistemic subject cum agent.
A predictively accurate, risk-averse mental economy rather poorly equips an epistemic subject cum agent to respond to whatever circumstances they happen to face with the most situationally appropriate action. But imagine that the same risk-averse mechanisms or processes for action-selection are embedded within a mental economy equipped with belief-regulating mechanisms or processes that systematically distort the facts in certain specifically targeted ways. Of course, belief regulation that systematically distorts the facts in certain specifically targeted ways will equip an epistemic subject with beliefs that systematically distort the facts in the relevant specifically targeted ways. And when this subject’s beliefs operate as predictive tools in the service of action-production, they will generate correspondingly distorted predictions about the risk/possibility of failure attending the various actions available in any particular circumstances. But, if appropriately calibrated, the distorted way in which this epistemic subject cum agent’s mental economy represents the risk/possibility of failure in evaluating potential actions might directly counteract the effects of risk aversion/fear of failure built into the operative mechanisms or processes of action-selection. And as a result, the epistemic subject cum agent’s behavior might be more or less indistinguishable from the behavior of a non-risk-averse epistemic subject cum agent whose belief corpus accurately represents the facts. A certain sort of predictively inaccurate, risk-averse mental economy might equip an epistemic subject cum agent to respond to whatever circumstances they happen to face with situationally appropriate action just as well as a predictively accurate, more appropriately risk-sensitive mental economy might.

Interestingly, there is some reason to think that our (normal adult human) mental economies might, at least in certain domains, actually resemble the sort of hypothetical predictively inaccurate, risk-averse mental economy I’ve asked us to imagine here. I’ve suggested elsewhere that empirical work, e.g., on the psychology of so-called positive illusions, provides some preliminary evidence for this conclusion.14 But for the present, I’ll rely on an illustrative example to make this case.

Imagine two students who are learning to construct proofs in an introductory logic course. Both students have achieved the same level of mastery of the proof system they’re learning. They can list the moves that the various rules of the system allow, and they know how to set up different sorts of proofs within the system. But both students have struggled equally in developing the kind of logical intuition that makes it possible to see where and how to deploy these moves in order to work from a specific set of premises to a specific conclusion. Both students begin a particular exercise. The first student lacks confidence in their ability to construct a proof to complete the exercise.
They believe that, although they’ll probably make some progress, it is extremely likely that they will get stuck at some point and be unable to finish the proof. And what’s more, this level of pessimism is warranted by a straightforward probabilistic assessment of their evidence. This sort of assessment suggests that it is quite likely that this student will get stuck at some point and be unable to finish the proof. From a certain perspective, then, the first student’s pessimism here is just clear-eyed realism. Their belief accurately represents the facts as they are.

The second student, however, is somewhat more optimistic about their chances of completing the exercise. This second student believes that, although they’ll likely get stuck at some point, there is a good chance that, if they keep at it, they’ll eventually be able to get unstuck and figure out how to finish the proof. But, just as in the case of their more pessimistic peer, a straightforward probabilistic assessment of the evidence suggests that it is extremely likely this second student will get stuck at some point and be unable to finish the proof. So, the second student’s optimism/confidence exceeds or outstrips, if only marginally, the degree of optimism/confidence that a straightforward probabilistic assessment of the student’s evidence indicates would be appropriate. Their belief is overly optimistic, relative to this sort of straightforward probabilistic assessment. It distorts the facts, if only mildly. And so, it is, strictly speaking, false.

Both students start working on the exercise in earnest. And both initially recognize that the exercise calls for a proof by cases but do not immediately see how to fill in the two cases to complete the proof. At this point, the first student finds themselves thinking “Oh, no! I’m stuck already! I guess there’s really almost no chance I’m going to be able to figure this one out … it’d basically take a miracle.” Updating their initial belief about their own chances of being able to complete the exercise in light of their present circumstances (having made a bit of progress, now being stuck), this first student now predicts that their chances of failure here are quite high. As a result, they become discouraged and give up on the exercise. They certainly don’t bother sketching out the proof by cases. And perhaps they even begin to question their original judgment that a proof by cases is the right approach here.

The second student, however, reacts differently. On recognizing that the exercise calls for a proof by cases, the student thinks “Great, I’ve figured out the first step!” And owing to their antecedent beliefs about their own chances of being able to complete the exercise, this second student doesn’t yet view the fact that they don’t see how to fill in either of the two cases to complete the proof as confirmation that they’ll be unable to complete the proof. As a result, they are not discouraged. Instead, they think “Well, I should just map out the proof by cases and leave some blank space where I don’t yet see how to fill in the necessary steps.
Then I can go back and focus on figuring out how to fill in each of these holes in my proof. It might take me some time, but if I stick with it, there is a good chance that I can figure out what the missing steps in my proof should be.” And this is how the student proceeds. Ultimately, this second student is able to figure out how to complete the first case in the proof by cases. But, even though this student rules out a number of different strategies for completing the second case, a string of steps that will fill in this second hole in their proof continues to elude them. So, although this second student makes more progress than the first student, they too are unable to complete the exercise. And, of course, this outcome is expected—it is precisely the outcome that a straightforward probabilistic assessment of the student’s chances of success here would have identified as most likely by a significant margin. Nevertheless, this failure does not dampen the second student’s general confidence or optimism about their ability to construct proofs. A day or two later, this student starts on a new exercise. As they look the exercise over for the first time, they believe that they’ll probably get stuck at some point. But they also

Knowledge without Factivity 303

believe, just as before, that there is a good chance that, if they keep at it, they'll eventually be able to get unstuck and figure out how to finish this proof. Stepping back, it seems clear that the second student is significantly more likely than the first to eventually master the ability to construct the sorts of proofs that one encounters in an introductory logic course. Moreover, part of what seems to explain this fact is the way in which, at this stage, at least, the second student's optimistic beliefs about their own chances of completing the exercise at hand mildly distort the facts. It is because the second student is a bit over-confident, relative to a straightforward probabilistic assessment of the evidence, in their own ability to complete the particular exercise at hand that the second student's present failure comes to be one step on a path toward mastery. In contrast, the first student's failed attempt at completing the exercise at hand does nothing to further develop their ability to construct proofs. It does not constitute the kind of practice that leads to mastery. And should the first student's present failure further entrench or erode their initial pessimistic assessment of their own proof-writing ability, it is not hard to imagine pessimistic realism leading to the kind of crisis of confidence that constitutes a significant barrier to further skill development. Indeed, my own experiences teaching introductory logic courses suggest that, for many students, there is a certain stage of learning at which cultivating a mildly distorted view of one's chances of being able to get unstuck and complete the proof at hand is part of the most effective strategy for achieving eventual mastery. So, the first student's belief represents reality as it is. The second student's belief distorts reality, if only mildly. Strictly speaking, the second student's belief is false and inaccurate; it is a misrepresentation. 
Nevertheless, the second student’s belief seems to be significantly better suited to serve as a predictive tool for situationally appropriate action-production than the first student’s belief seems to be. Of course, neither belief puts the student who holds it in a position to complete the proof at hand. Both students fail to realize this particular narrowly specified and highly circumscribed goal. But on any plausible construal, what it takes for either student’s actions here to be situationally appropriate must be characterized more broadly. After all, whether or not either student is able to complete the particular proof at hand certainly matters less than, e.g., whether either student eventually masters the ability to construct the relevant sort of proofs, whether either student learns to persevere, undaunted, when faced with a problem that they don’t immediately see how to solve, whether either student acquires the ability to break a complex problem into smaller steps as a way of making progress toward a solution, etc. This much is evidenced by the fact that we don’t explain the point or value of taking a course in introductory logic by listing the particular proofs that one will have constructed if
one successfully completes the coursework. And so it seems that, on any plausible construal of the aims, goals, or values at stake in the sort of situation our two students presently face, the second student's belief enables situationally appropriate action more effectively than the first student's belief does. Moreover, the second student's belief is better suited to facilitate situationally appropriate action here precisely because of the way in which it supplies distorted predictions about the outcomes of various actions that the second student might take as they work on the proof at hand. For example, the second student bothers to map out the proof by cases, leaving blank space where they don't yet see how to fill in the necessary steps, because they predict that this action is reasonably likely to eventually lead them to figure out what those presently elusive missing steps should be. A plausible explanation of what is going on here proposes that action-selection mechanisms or processes operative in both students' mental economies are constituted so as to select against actions that will likely result in a perceived failure. Metaphorically, the motto for action-selection within both students' mental economies is something like: better to not even try than to try and fail. But when the second student's belief operates as a predictive tool, it yields predictions about the likely outcomes of various potential actions that mildly, but systematically, underestimate the likelihood of outcomes that would be perceived as constituting failure. And when these predictions interact with or feed into the failure-averse action-selection mechanisms or processes operative in the second student's mental economy, they counterbalance the tendency to select against actions that will likely result in a perceived failure. 
So, given the character of the particular sort of mental economy that both students happen to have, and the particular predictive role that their beliefs play in the production of situationally appropriate action within this sort of mental economy, the second student’s belief is better suited to serve as a predictive tool in the service of situationally appropriate action-production precisely because of the particular way in which it distorts the facts.15 And if this extended example supplies a recognizably plausible illustration of a typical adult human mental economy in action, then it provides good reason to accept that truth is not always a feature of belief that is optimized to serve as a predictive tool in the sort of mental economies that we have. At least for epistemic subjects cum agents like us, there may be certain beliefs (perhaps, e.g., beliefs about our own abilities) that achieve epistemic success by representing a mildly distorted version of the facts. Truth, it seems, is not always a prerequisite for epistemic success. Moreover, an action-oriented virtue-theoretic account of knowledge maintains that epistemically virtuous, skillful, or competent belief regulation is just belief regulation that reliably (in normal environments)
equips an epistemic subject with a corpus of epistemically successful beliefs. But then, if the example of our two students suggests that sometimes beliefs are epistemically successful because of the specific ways in which they misrepresent/distort the facts, it also suggests that sometimes epistemically virtuous, skillful, or competent belief regulation actively introduces or injects mild, but systematic, distortions into the way in which a subject's belief corpus represents the facts. What strikes us as admirable, worthy of emulation, and even praiseworthy about the second student is not only their optimistic belief in the present circumstances, but also what we presume this optimistic belief reveals about their cognitive character. Certainly, we think the second student would be less admirable, less worthy of emulation, and less praiseworthy if their optimistic belief in the case at hand were merely a one-off lucky accident—if, that is, the second student were generally realistic in the manner of our first student, and their optimism in the present case were an aberration of sorts. We presume that the second student's optimistic belief in the case at hand is explained by some sort of cognitive habit, tendency, or disposition. The relevant sort of cognitive habit, tendency, or disposition systematically leads our second student, when facing challenging circumstances that present opportunities for growth, to regulate beliefs about their own abilities optimistically, in ways that mildly distort the facts, in the domain of formal logic and beyond. 
The logic instructor might even view cultivation of the relevant sort of cognitive habit, tendency, or disposition as a particularly valuable contribution that their logic course could make to a student's overall education.16 Of course, the particular cognitive habit, tendency, or disposition in play here may be restricted in scope (perhaps only applying to certain varieties of self-belief).17 And, regardless, precisely characterizing this particular cognitive habit, tendency, or disposition would, of course, require significantly more work. But already, the illustration that our two students supply strongly indicates that some cognitive habit, tendency, or disposition in this vicinity is especially effective in helping to equip normal adult human epistemic subjects with a corpus of beliefs that are well-suited to serve as predictive tools for situationally appropriate action-production within the particular sort of mental economy we have. And, accordingly, it seems that the proponent of an action-oriented virtue-theoretic account ought to view the relevant sort of cognitive habit, tendency, or disposition as an epistemic virtue, skill, or competence. If this is right, then epistemically virtuous, skillful, or competent belief regulation will not always reliably equip epistemic subjects with true or accurate beliefs. Rather, epistemically virtuous, skillful, or competent belief regulation can, in certain contexts, introduce or inject certain kinds of mild but systematic distortion into a subject's belief corpus. And, as a result, sometimes the epistemic success a subject achieves in believing that p can manifest or be credited to the subject's
epistemic virtue, skill, or competence, even when p is, strictly speaking, false. On an action-oriented virtue-theoretic account, the features of epistemically successful beliefs are optimized relative to (i) the mental economies of the paradigmatic sorts of epistemic subjects cum agents that normal adult humans are, (ii) the particular sort of predictive role beliefs play in the production of situationally appropriate action within these mental economies, and (iii) normal environments. And careful reflection on what this sort of indexing comes to gives us good reason to think that epistemically successful belief is not always strictly true, and that epistemically virtuous, skillful, or competent belief regulation is not always belief regulation that reliably equips us with true beliefs. But if all this is right, then it is possible for the epistemic success of S's belief that p to manifest S's epistemically competent belief regulation, although S's belief that p is, strictly speaking, false. So, the theorist who embraces an action-oriented virtue-theoretic account of knowledge ought to embrace, as a corollary of sorts, the result that knowledge is not (always) factive. The second student might know that there is a good chance they'll be able to complete a particular proof even when, given their current level of mastery, it is, in fact, rather unlikely that they will be able to complete the proof. Strictly speaking, the student knows ⟨there is a good chance I'll be able to complete this proof⟩, although ⟨there is a good chance I'll be able to complete this proof⟩ is, strictly speaking, false. The first student, however, fails to know ⟨it is extremely likely that I'll get stuck and be unable to finish this proof⟩. And there are at least two significant respects in which the first student's belief falls short of knowledge. 
First, although this first student’s belief accurately represents their chances of completing the particular proof at hand on a straightforward probabilistic assessment of the evidence, the belief is not particularly well-suited to serve as a predictive tool for situationally appropriate action-production within the kind of mental economy that this student has. Second, it seems that the sort of belief regulation that gives rise to and sustains the first student’s belief will not reliably equip epistemic subjects like us with beliefs that well-suited to serve as predictive tools for situationally appropriate action-production within the sorts of mental economies we actually have. So, the sort of belief regulation that gives rise to and sustains the first student’s belief falls short of epistemic virtue, skill, or competence. The proponent of an action-oriented virtue-theoretic account of knowledge ought to reject the thesis that knowledge is factive. At first pass, perhaps this result will strike many contemporary theorists as grounds for rejecting an action-oriented virtue-theoretic account of knowledge out of hand. But this kind of thinking is overly hasty at best, and pure dogmatism at worst. After all, an action-oriented virtue-theoretic account is well-positioned to rule out cases in which S knows that
p and p wildly misrepresents the facts. Part of what makes it plausible that the second student's optimistic self-belief is well-suited to serve as a predictive tool for situationally appropriate action-production is that the degree to which this belief distorts the facts is mild. If we imagine the relevant sort of optimistic self-belief landing significantly further off the mark, so to speak—distorting the facts rather more aggressively, to a significantly greater degree—it seems clear that the belief's service as a predictive tool would actively impede situationally appropriate action across a wide range of circumstances. In general, the more dramatically a belief misrepresents the facts, the more the kind of reasoning that I suggested might initially make it seem that truth is a necessary prerequisite for epistemic success (on an action-oriented conception thereof) seems to gain purchase. The more radical a belief's departure from the truth, the more obvious it seems that the belief will, like the belief that represents a particular mushroom as safe to eat when the mushroom is poisonous, be rather poorly suited to serve as an effective predictive tool for situationally appropriate action-production. Accordingly, an action-oriented virtue-theoretic account of knowledge can vindicate the intuition that there is some kind of necessary connection between knowledge and truth.18 Although the proponent of an action-oriented virtue-theoretic account rejects truth as a necessary prerequisite for knowledge, they can and should accept that the degree to which S's belief that p represents the facts as they are is always and inevitably relevant to the question of whether S knows that p. That S knows that p does not entail that p is true. Still, if one aims to determine whether S knows that p, whether p is true always and inevitably matters. 
It turns out, then, that giving up on the factivity of knowledge in the way that an action-oriented virtue-theoretic account does need not involve giving up as much as one might think. An action-oriented virtue-theoretic account does not sever the relationship between knowledge and truth; it merely complicates that relationship. And so, the fact that an action-oriented virtue-theoretic account rejects factivity does not constitute the kind of serious theoretical liability that contemporary theorists might initially suspect.19

Notes

1. My focus throughout is propositional knowledge, exclusively and exhaustively.
2. For the purposes of this chapter, I'll understand factivity not merely as a linguistic/conceptual thesis about the meaning of "knows," but as a thesis about the nature of knowledge itself: if the belief that p constitutes knowledge, then, necessarily, p is true.
3. But might certainty be a sort of epistemic status more praiseworthy or valuable than (mere) knowledge, which an isolated doxastic state toward p can attain? I suggest not. An epistemic subject who is certain that p might be more epistemically praiseworthy than a subject who (merely) knows that p. But if this is the case, it is because being certain that p involves more than
merely having a doxastic state toward p. Perhaps, by way of example, certainty additionally involves knowing that you know that p. If this is the case, then the epistemic subject who is certain that p not only has an epistemically praiseworthy doxastic state toward p (one that constitutes knowledge that p), but they also have an epistemically praiseworthy doxastic state toward ⟨they know that p⟩ (one that constitutes knowledge of ⟨they know that p⟩). Thus, this epistemic subject has two distinct doxastic states, each of which constitutes knowledge (of a distinct proposition) and is doubly praiseworthy as a result. Nevertheless, when we consider the doxastic state toward p that this subject holds in isolation, we find that it is nothing more and nothing less than a doxastic state toward p that constitutes knowledge. And, thus, on its own, it certainly merits no more epistemic praise than would the doxastic state toward p held by a subject who merely knows that p (without being certain).
4. Many others have already taken up this task (and with what in my view amounts to significant success). See, e.g., Zagzebski (1996), Sosa (2007), Greco (2010, 2012), Kelp (2011, 2016), Carter (2016), and Greco and Reibsamen (2018).
5. Vincent van Gogh, Saint-Rémy, June 1889.
6. By explicitly relativizing to normal circumstances or normal environments, the alethic virtue-reliabilist secures the result that epistemically virtuous belief regulation need not reliably lead to true belief in, e.g., demon worlds, and brain-in-a-vat worlds.
7. Sosa (2010).
8. Greco (2010).
9. For extended discussions, see especially Sosa (2007) and Greco (2010).
10. Expressions of sympathy for this core line of thought can be found in, e.g., Locke (1690/1975), James (1879), Ramsey (1927), Papineau (1987), Lycan (1988), Millikan (1993), Kornblith (1993, 2002), Hawthorne and Stanley (2008), Burge (2010), Hyman (2015), Nolfi (2015, 2018a, 2018b, 2019), and Hetherington (2017). 
That said, not all of these theorists go so far as to fully embrace the sort of thoroughgoing action-oriented conception of epistemic success that I develop below. And even among those who do, some explicitly embrace and attempt to argue for the thesis that I'll aim to undermine below—i.e. the thesis that knowledge, on an action-oriented account thereof, is factive.
11. See, especially, Turri et al. (2016).
12. Quine (1969, p. 126).
13. Certain other theorists, often approaching matters from a background in the philosophy of science, have already developed arguments for the conclusion that certain only approximately true beliefs (i.e. approximations, idealizations, or models that are, strictly speaking, false) sometimes serve us as well as, or better than, strictly true counterparts would. See, e.g., Cartwright (1980), Elgin (2017), and Buckwalter and Turri (2020a, 2020b). I, myself, have tried to develop other arguments for this sort of conclusion in previous work—Nolfi (2018a, 2020). I see the argument I develop in this chapter as offering a distinct route to the conclusion that epistemic success does not (always) require truth, one that might complement the aforementioned routes already on offer.
14. For an overview of the relevant empirical results, see, e.g., Taylor and Brown (1988, 1994), Johnson and Fowler (2011), Sharot (2011), and Bortolotti and Antrobus (2015). See Hazlett (2013) or McKay and Dennett (2009) for a philosophical discussion of some of the relevant psychological research.

15. Lurking beneath the surface here is one way in which a kind of subject-sensitivity may be baked into an action-oriented virtue-theoretic account of knowledge. While it is plausible that the second student's beliefs about their own chances of completing a particular proof achieve epistemic success by mildly distorting the facts, it is also plausible that a social science researcher studying the effects of self-beliefs on perseverance achieves epistemic success when their belief about whether the second logic student is likely to be able to complete this proof represents the facts without distortion, as they are.
16. It is plausible, after all, that having the relevant sort of cognitive habit, tendency, or disposition might contribute to or even help to constitute the kind of growth mindset that has captured the interest of contemporary psychologists following Carol Dweck's influential research (see, e.g., Dweck, 2006) and/or the kind of grit that Sarah Paul and Jennifer Morton (2018) have tried to describe.
17. If it turns out that the human cognitive system is structured such that the belief-regulating mechanisms or processes that operate in the domain of self-belief exhibit a fair amount of specialization, encapsulation, or modularity, this would give us good reason to expect that the relevant sort of epistemically virtuous, skillful, or competent cognitive habit, tendency, or disposition will be accordingly restricted in the scope of its operation.
18. Precisely characterizing the nature of this connection will, alas, have to wait for another occasion.
19. I am grateful to audiences at McGill University and the Helsinki Moral and Political Philosophy Seminar, as well as my colleagues at the University of Vermont, for invaluable discussion and feedback on drafts of this chapter.

References

Bortolotti, L., & Antrobus, M. (2015). Costs and benefits of realism and optimism. Current Opinion in Psychiatry, 28(2), 194–198.
Buckwalter, W., & Turri, J. (2020a). Knowledge, adequacy, and approximate truth. Consciousness and Cognition, 83, 102950.
Buckwalter, W., & Turri, J. (2020b). Knowledge and truth: A skeptical challenge. Pacific Philosophical Quarterly, 101(1), 93–101.
Burge, T. (2010). Origins of objectivity. Oxford University Press.
Carter, J. A. (2016). Robust virtue epistemology as anti-luck epistemology: A new solution. Pacific Philosophical Quarterly, 97(1), 140–155.
Cartwright, N. (1980). The truth doesn't explain much. American Philosophical Quarterly, 17(2), 159–163.
Dweck, C. (2006). Mindset: The new psychology of success (1st ed.). Random House.
Elgin, C. (2017). True enough. MIT Press.
Greco, J. (2010). Achieving knowledge: A virtue-theoretic account of epistemic normativity. Cambridge University Press.
Greco, J. (2012). A (different) virtue epistemology. Philosophy and Phenomenological Research, 85(1), 1–26.
Greco, J., & Reibsamen, J. (2018). Reliabilist virtue epistemology. In N. Snow (Ed.), The Oxford handbook of virtue (pp. 725–746). Oxford University Press.
Hawthorne, J., & Stanley, J. (2008). Knowledge and action. Journal of Philosophy, 105(10), 571–590.
Hazlett, A. (2013). A luxury of the understanding: On the value of true belief. Oxford University Press.
Hetherington, S. (2017). Knowledge as potential for action. European Journal of Pragmatism and American Philosophy, 9(2). https://doi.org/10.4000/ejpap.1070
Hyman, J. (2015). Action, knowledge, and will. Oxford University Press.
James, W. (1879). The sentiment of rationality. Mind, 4(15), 317–346.
Johnson, D. D., & Fowler, J. H. (2011). The evolution of overconfidence. Nature, 477(7364), 317–320.
Kelp, C. (2011). In defence of virtue epistemology. Synthese, 179(3), 409–433.
Kelp, C. (2016). How to be a reliabilist. Philosophy and Phenomenological Research, 2(2), 346–374.
Kornblith, H. (1993). Epistemic normativity. Synthese, 94(3), 357–376.
Kornblith, H. (2002). Knowledge and its place in nature. Oxford University Press.
Locke, J. (1690/1975). An essay concerning human understanding (P. H. Nidditch, Ed.). Clarendon Press.
Lycan, W. G. (1988). Judgement and justification. Cambridge University Press.
McKay, R., & Dennett, D. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32, 493–561.
Millikan, R. G. (1993). Naturalist reflections on knowledge. In White Queen psychology and other essays for Alice (pp. 241–264). MIT Press.
Nolfi, K. (2015). How to be a normativist about the nature of belief. Pacific Philosophical Quarterly, 96(2), 181–204.
Nolfi, K. (2018a). Another kind of pragmatic encroachment. In B. Kim & M. McGrath (Eds.), Pragmatic encroachment (pp. 35–54). Routledge.
Nolfi, K. (2018b). Why only evidential considerations can justify belief. In C. McHugh, J. Way, & D. Whiting (Eds.), Normativity: Epistemic and practical (pp. 179–199). Oxford University Press.
Nolfi, K. (2019). Epistemic norms, all things considered. Synthese, 198(7), 6717–6737.
Nolfi, K. (2020). Epistemically flawless false beliefs. Synthese, 198(12), 11291–11309.
Papineau, D. (1987). Reality and representation. Blackwell.
Paul, S., & Morton, J. (2018). Grit. Ethics, 129(2), 175–203.
Quine, W. V. O. (1969). Natural kinds. In Ontological relativity and other essays. Columbia University Press.
Ramsey, F. P. (1927). Facts and propositions. Proceedings of the Aristotelian Society (Supplementary), 7, 153–170.
Sharot, T. (2011). The optimism bias: A tour of the irrationally positive brain. Vintage.
Sosa, E. (2007). A virtue epistemology: Volume I: Apt belief and reflective knowledge. Oxford University Press.
Sosa, E. (2010). How competence matters in epistemology. Philosophical Perspectives, 24(1), 465–475.
Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103(2), 193–210.
Taylor, S. E., & Brown, J. D. (1994). Positive illusions and well-being revisited: Separating fact from fiction. Psychological Bulletin, 116(1), 21–27.
Turri, J., Buckwalter, W., & Rose, D. (2016). Actionability judgments cause knowledge judgments. Thought: A Journal of Philosophy, 5(3), 212–222.
Zagzebski, L. (1996). Virtues of the mind: An inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge University Press.

18 Knowing the Facts, Alternative and Otherwise

Clayton Littlejohn

18.1 Introduction

Consider two intuitively plausible claims about knowledge and truth. The first is that it's not possible to acquire knowledge unless we reason from true assumptions. Let's call this the basis restriction. It says, in effect, that it's not possible for a belief to constitute knowledge unless it's provided with the right kind of basis. The right kind of basis, at least in the inferential case, must consist of truths or true propositions that the subject believes.1 The second is that our knowledge consists of truths, facts, or true propositions. Let's call this the object restriction. It says, in effect, that the things that are known or the objects of propositional knowledge must always be true, veridical, accurate, or constituted by a fact.2 Putting these ideas together, we might say that propositional knowledge is of truth and from truth. These ideas, in turn, might help explain the value of knowledge. We often speak as if we value the truth. We seem to have a strong aversion to a life detached from reality even if that life is filled with pleasure (Lynch, 2004). What we really seem to want, however, is not that some propositions (e.g., the ones we happen to believe) are true, but that we have the truth. Being in touch with the truth or having the truth might require more than having a belief that happens to be true. Arguably, it's only through knowledge that we gain contact with these truths or facts (Littlejohn, 2017). We undermine our best explanation of the value of knowledge if we abandon the object restriction. While these two claims about knowledge and truth might seem initially quite plausible, they have been challenged quite recently.3 We'll look at the role that cases of approximations play in this discussion.4 Before we get to the details, I wanted to note one thing. Some of the cases used to cast doubt on the object restriction are quite similar to cases that have been used to cast doubt on the basis restriction. 
DOI: 10.4324/9781003118701-25

We can imagine someone trying to defend the basis restriction by appealing to the general principle that in the case of inferential knowledge, knowledge can only be begotten by more knowledge.5 This counter-closure principle, if
correct, might seem to support the basis restriction, but strictly speaking, it does so only if we also assume that everything we can know is true. Without this assumption, the claim that only knowledge can give us inferential knowledge wouldn't support the claim that the inferential basis that provides us with knowledge must consist of truths. If we don't assume the object restriction, the counter-closure principle that says that only knowledge of supporting premises can provide us with inferential knowledge wouldn't support the basis restriction. Owing to the importance of the object restriction in these debates about the connections between knowledge and truth and to debates about the value of knowledge, it makes sense to focus on that restriction here.

18.2  False Knowledge and a Skeptical Challenge

To start our discussion, let's consider Buckwalter and Turri's (2020) challenge to the object restriction. They first offer us this skeptical argument:

The skeptical argument
P1. A representation is known only if it is true.
P2. Approximations are not true.
C1. Therefore, approximations are not known.
P3. Many of our representations are mere approximations.
C2. Thus, many of our representations are not known.6

The default assumption in contemporary epistemology seems to be that valid skeptical arguments contain some mistaken premise. They propose that the best non-skeptical response to this argument is to deny the object restriction. Buckwalter and Turri think that the approximations at issue are 'ubiquitous'.7 Their solution to this skeptical problem is to embrace the possibility that we have knowledge in cases where our beliefs are only approximately true. (They suggest that pragmatic factors might help us decide whether a belief is sufficiently close to the truth to constitute knowledge.) This move might enable them to say that our intuitive sense of how successful our knowledge attributions are isn't wildly off the mark (thus undercutting the skeptical argument by rejecting [P1]), but the cost, if it is one, is rejecting the object restriction by allowing that some falsehoods might be known. Let's consider the kinds of approximations that they're concerned with. Buckwalter and Turri observe:

We approximate what temperature the coffee is safe to drink at, or the time needed to arrive at work, for example, to avoid becoming scalded or fired. Scientists and engineers rely on approximation
when computing significant figures such as the decimals of pi to calculate accurate distances. (2020, p. 1)

Here are some examples:

1 France is a hexagon; Fact: there is no hexagon that contains all and only the parts of France within it.
2 Maria arrived at 3:00; Fact: she arrived at 3:03.
3 There were 300 philosophers at the conference; Fact: there were 299 philosophers at the conference.

A speaker might assert (1), (2), or (3) in situations where the italicised facts obtain. We might think that there's nothing wrong with the speaker's asserting such things. We might note, for example, that (1) is a perfectly fine thing to say even when enriching (1) with the further claim 'but it is not a hexagon' turns this into something infelicitous. Can we say that we know things like (1)–(3) in the situations envisaged? Buckwalter and Turri suggest that we can even though none of these claims seems to express a true proposition in the situations envisaged. Their defense of this unorthodox idea seems to be (roughly) that the best non-skeptical response to the above would be to deny (P1) and so allow for 'false knowledge'. Alternative responses, they claim, face difficult questions that their proposal does not. In place of the seemingly traditional view that knowledge is restricted to truths, they offer the alternative suggestion that we think of knowledge as a kind of 'adequate' representation of truth where something can be adequate in this regard even if not true. What would it mean for something to serve as an adequate representation of some truth if it's not itself true? They write:

Although there are potentially many ways that approximations could be adequate, one way is for them to serve our purposes well enough to facilitate action and help us to achieve our goals in a particular circumstance. 
For example, 3.14 might be adequate, and hence known, as the value of pi in the grade school classroom but inadequate, and hence not known, as the value of pi in the lab engineering a global positioning system. (2020, p. 97) Even if the details of some accounts of adequacy are sketchy or unclear, the case for abandoning the object restriction might be compelling, so I don’t propose to put too much pressure on the positive proposal that

Knowing the Facts, Alternative and Otherwise 315

they offer. Instead, I would like to consider in detail some responses to the skeptical argument that, in my view, deserve further consideration.

18.3  What's So

Let's consider an initial line of response that might seem to undercut their argument for the possibility of false knowledge. We only have an argument for recognizing the possibility of false knowledge if we assume that (1)–(3) are known but not true. (If they were either not known or true, they wouldn't support the argument against the object restriction.) Why shouldn't we think that (1)–(3) might be true in the situations envisaged? Just to put the fans of false knowledge on the defensive, these claims don't sound very good:

1'. France is a hexagon, as James knows, but it's not.
1''. James knows France is a hexagon. Not only that, France is a hexagon.

Don't the fans of false knowledge accept something in the neighborhood of (1')? Isn't their proposal that James knows that France is a hexagon even though it's not? The fans of false knowledge need to explain the badness of (1'). One explanation of the badness of (1') is that the content that follows the conjunction is just flatly inconsistent with the content that precedes it. If this is the best explanation of the infelicity of (1'), we have our argument that knowledge is factive. Similarly, (1'') seems infelicitous. The fans of false knowledge need to explain that, too. It's common ground that (1'') might be true. If the best explanation of the infelicity of (1'') is that the information contained in the second part (i.e., 'Not only that …') is redundant given what's said prior to it, that's an indication that the second part is entailed by the first part. We'll see in a moment if the fans of false knowledge have anything helpful to say about such cases. The first thing they'll say, however, is that we need to tease apart two different issues. One issue is about knowledge ascriptions and the felicity of ascribing knowledge while asserting things about the truth or falsity of the target proposition. The second issue is whether the propositions in (1)–(3) might be false but known.
Let's focus on their falsity first. A natural thought is that propositions like the ones expressed by (1)–(3) might be true in the circumstances imagined. How might this work? We might think that there's a kind of slack or looseness here that lets us speak the truth. The details of the positive proposal might vary, but we could imagine a kind of contextualist view on which a sentence's meaning along with features of context (e.g., which of potentially many

different standards for determining whether something belongs to a category is operative) determines which proposition the sentence expresses so that (1)–(3) might, in the circumstances envisaged, express a true proposition. To take an example from a different context:

Handing you a packet from the butcher's I say, 'Here's the meat I bought for dinner'. You open it and find the kidneys. 'I don't call that meat', you say. 'Meat, for me, is muscle'. 'Well, I do', I say helpfully. Again one of us may be demonstrably wrong. Lamb's kidneys are no more meat than wool is, to one who knows what meat is. But perhaps not. In fact, there are various understandings one might have of being meat, consistent with what being meat is as such. In that sense, being meat admits of understandings. We sometimes distinguish (such as in good markets) between meat and offal. Then if the kidneys wound up in the meat section they are in the wrong place. On the other hand, one would not (usually) serve kidneys to a vegetarian with the remark, 'I made sure there would be no meat at dinner'. Similarly, brains or spinal column, however delicious fried, gristle, however tasty stewed, would count as meat on some occasions for so counting, but not on others. There are various ways being meat admits of being thought of.
(Travis, 2013)

The variable eligible understandings of what might be meat (or described as 'meat') would need to be cut down to understand how, say, someone might be wrong or right in saying that the kidneys were or were not meat. And it seems obvious to me that in some contexts it would be right to say that the dish contains meat (e.g., when vegetarians are seated) and some in which it's wrong to say that it does (e.g., when picky omnivores ask about an item on a menu).
If one sentence (e.g., 'We will serve meat tonight') can express a truth or express a falsehood when we keep the dish fixed but toggle the facts about the conversational interests of participants, it seems we get some evidence that there's a kind of contextually variable standard that determines whether it might be true or false that this kidney-containing dish contains meat.8 Could similar contextually variable standards determine whether something counts as hexagonal, as being at 3:00, etc.? If we wanted to develop this contextualist view, we could say something like this. A sentence of the form 'a is F' might, without changing meaning, express different propositions with distinct truth-conditions depending upon the standards for something to be counted as falling under '… is F' operative in a given context. In any context in which it's felicitous to assertively utter 'a is F' or 'So and so knows that a is F', the contextual standards governing the application of '… is F' (which can vary without changing the meaning of 'a is F') are ones according

to which the individual designated by 'a' belongs to the class of things that fall under '… is F'. This might be a way of developing the idea that (1), for example, expresses nothing stronger than the proposition that France is nearly or approximately hexagonal. We might then generalize this account to handle cases like (2) and (3) (e.g., that times near enough to a precise temporal point count as '3:00' and that numbers of philosophers close enough to 300 count as 'being 300 in number'). The important point is that if such things are true, the things we felicitously say in the relevant cases of loose talk are, in fact, true. If such things are true, the argument for false knowledge is undercut.

One virtue of this view is that it might make similar predictions to Buckwalter and Turri's proposal when it comes to the felicity of knowledge ascriptions. On their view, a kind of practical adequacy is sufficient for knowledge and warranted assertion without ensuring the truth of what's said. On the contextualist view, practical adequacy is part of what determines what's said and so what's true. Unfortunately, fans of the object restriction should recognize that this isn't a very popular strategy amongst philosophers of language working on loose talk. Let's focus on (2). It might be thought that (2) differs from (2') in terms of the proposition expressed:

2'. Maria arrived at 3:00 on the dot.

One problem we face if we hold that (2) differs from (2') in that (2) might be true if Maria arrived near enough to 3:00 is that it seems quite obviously wrong to append things like, 'When she arrived a few minutes after 3:00, we were able to begin the meeting'. It's obvious why it would be wrong to append that to (2').
If (2) expressed the proposition that Maria arrived during some interval of time (e.g., between 2:45 and 3:15), we wouldn’t speak falsely if we added that she arrived a few minutes after 3:00.9 As for the challenge of explaining our observations about (1’) and (1’’), fans of false knowledge wouldn’t find it too terribly difficult to make sense of our intuitions about these examples. The fans of false knowledge might want to say that (1’) might be true but nevertheless infelicitous. In any context in which it’s good enough for our conversational purposes to say that France is hexagonal, there wouldn’t be a good reason to deny that it is. Similarly, in any conversational setting in which it’s good enough for our conversational purposes to say that someone knows, it’s good enough for those purposes to assert what’s known. The additional utterance adds nothing useful for the conversational purpose and that might explain the feeling of redundancy. For these reasons, I don’t want my response to build on some controversial claims about the meaning and content of sentences like (1)–(3). We can acknowledge that each of (1)–(3) would be true only if strictly true and try to find a way to

block the argument for false knowledge without having to get entangled in debates in the philosophy of language.

18.4  What's Known

We shall assume that (1)–(3) are true only if strictly true and assume that they aren't strictly true. The question we want to ask is whether they might be known. It's here that I think the defenders of the object restriction are on stronger ground. I don't think I've seen this noted elsewhere in the literature, so let me begin by observing something that I find somewhat surprising. Consider the following:

1t. It is true that France is a hexagon. Fact: there is no hexagon that contains all and only the parts of France within it.
2t. Maria arrived at 3:00, so it is not true that she did not arrive at 3:00. Fact: she arrived at 3:03.
3t. There were 300 philosophers at the conference, so it is true that there were 300 philosophers at the conference. Fact: there were 299 philosophers at the conference.

The line that the fans of false knowledge take when it comes to (1)–(3) is that they might be felicitous but false, where they're felicitous because, in part, the things that make them false aren't relevant to the conversational and/or practical purposes. The little differences between France's shape and the shape of any hexagon might not matter when choosing between the available shaped blocks to represent France. The differences between Maria's time of arrival and 3:00 might not matter for the purposes of determining whether our colleagues are conscientious about attending meetings. So far, so good. Let that be common ground.

How should we think of (1t)–(3t)? My gut instinct is to say that (1) is true iff (1t) is, that (2) is true iff (2t) is, and that (3) is true iff (3t) is. My gut also tells me that (1) can be felicitously asserted iff (1t) can, (2) can be felicitously asserted iff (2t) can, and that (3) can be felicitously asserted iff (3t) can. If that's right, I think this is important for our debate about knowledge ascriptions.
My approach to (1t)–(3t) seems to fit with the standard line on loose talk—that something that is strictly false might be felicitously asserted if it suits the conversational purposes sufficiently. My hunch is that there will not be interesting differences in most conversational settings between asserting (1) and asserting (1t). If it serves our conversational interests sufficiently well to be told that France is a hexagon, it will not serve our conversational interests insufficiently to be told that it is true that France

is a hexagon. What serves those interests in both cases is strictly false, but that's no reason to think that what's asserted isn't asserted felicitously.

I think it's also worth considering one further data point. Consider:

1w. She knows that France is a hexagon and that hexagons have straight sides, but I wonder if France has straight sides.
2w. Maria knew that she arrived at 3:00 but wondered whether she might have arrived at 2:58.
3w. We knew there were 300 philosophers at the conference, but we wondered whether there would be an even number.

Each of these claims strikes me as rather strange. If it's possible to ascribe knowledge to a subject of a false proposition when (e.g.) the falsehood is sufficiently close to the truth given the relevant purposes, it should make sense to wonder, say, whether the strict truth might deviate from the thing we've said is known. Compare (1w)–(3w) with:

1n. France is at least roughly hexagonal, but I wonder if it has straight sides.
2n. Maria arrived in the neighborhood of 3:00, but I wonder whether she arrived at 2:58.
3n. The number of philosophers was in the neighborhood of 300, but I wonder whether it was an even number.

Each of these claims seems perfectly fine and that makes me think that it might be better to think of the relevant knowledge ascriptions as strictly false. If they are strictly false, however, the cases don't support the view that knowledge can be false, because the case for false knowledge is supposed to be built on cases where we have true knowledge ascriptions where the object of knowledge is a falsehood.

Bearing this in mind, let's imagine a debate between two fictional philosophers. They offer different theories of why it's felicitous to assert (1)–(3):

Theory 1: Because we can felicitously assert (1)–(3) and (1t)–(3t) when it's clear that the former are false, we should conclude that 'It is true that …' is not factive.
Theory 2: Although we can felicitously assert (1)–(3) and (1t)–(3t) when it’s clear that the former are false, we should conclude that the latter are false but felicitously assertable. I’m inclined to say that the proponents of Theory 2 win this debate. We should draw precisely zero lessons about the truth-conditions of sentences that involve a truth predicate or truth operator. Both sides agree that the false can be felicitously asserted and agree that there

are entailments that hold between (1) and (1t) and so on. There are no grounds for taking the felicity facts to be grounds for challenging the entailment facts in this instance.

This debate about 'It is true that …' and '… is true' isn't our primary concern, but it's instructive. We have to choose again between two approaches to the case of knowledge ascriptions like this:

1k. We know that France is a hexagon;
2k. We know that Maria arrived at 3:00;
3k. We know that there were 300 philosophers at the conference;

Here's what we agree on. We agree that (1)–(3) are false. We agree that they are nevertheless things we might assert felicitously. We also agree that (1k)–(3k) are things we can assert felicitously. We now have to choose between two options:

Theory 1: In addition to saying that (1)–(3) are false and that (1k)–(3k) are felicitous, we want to say that (1k)–(3k) are true, so that we deny that 'S knows that p' is true only if p is true.
Theory 2: In addition to saying that (1)–(3) are false and that (1k)–(3k) are felicitous, we want to say that (1k)–(3k) are false, so that we don't have to deny that 'S knows that p' is true only if p is true.

In line with the first theory, fans of false knowledge might say, in keeping with Buckwalter and Turri's suggestion, that knowledge needs only to be adequate:

Call this the approximation account of knowledge. On this view, representations need not be true in order to count as knowledge. Instead, they only need to adequately represent the truth. Although there are potentially many ways that approximations could be adequate, one way is for them to serve our purposes well enough to facilitate action and help us to achieve our goals in a particular circumstance. For example, 3.14 might be adequate, and hence known, as the value of pi in the grade school classroom but inadequate, and hence not known, as the value of pi in the lab engineering a global positioning system.
(2020, p. 97)

One virtue of this proposal (in addition to making sense of the intuitions we might have about the felicity of the relevant claims) is that it seems to give us an alternative explanation as to why we value knowledge. If an adequate representation is genuinely adequate, it's not clear that we'd value this any less than we'd value a strictly true representation when

the difference between the merely adequate and the strictly true is, by hypothesis, not one that matters for, say, our practical purposes.10 If such a difference did matter, the account predicts that we couldn't know the false target proposition. If it doesn't, we either need to see the satisfaction of curiosity as irrelevant to practical adequacy or we have further pressure toward accepting the object restriction that comes from the idea that beliefs that are adequate representations should be classified as knowledge.

While one could embrace Theory 1 and seek some non-traditional account of knowledge along the lines of the approximation account, I have concerns about non-traditional accounts that have this shape. (I'll discuss these in the next section.) Moreover, I don't think that our intuitions significantly challenge Theory 2. But Theory 2 is compatible with the orthodox view that upholds the object restriction. Dialectically, I think that the proponents of Theory 1 will have a hard time overturning Theory 2 for reasons we'll touch upon in the next section. I won't wade here into the debates about whether the linguistic evidence supports the widely held view that 'knows' is factive. My aim has been to try to show that there's a way of accommodating the observations about loose talk without abandoning the view that we can only know what's true or what's strictly true and thus maintain the object restriction.

But doesn't this force us to accept the conclusion of the skeptical argument? As someone who isn't sympathetic to skepticism and thinks that knowledge is cheap and plentiful, I would prefer not to be forced into accepting any sort of skeptical view. In the next section, I shall sketch a response to the skeptical argument that I hope is satisfying.

18.5  What's Skepticism?

Recall the skeptical argument:

P1. A representation is known only if it is true.
P2. Approximations are not true.
C1. Therefore, approximations are not known.
P3. Many of our representations are mere approximations.
C2. Thus, many of our representations are not known.

I would be happy to grant (P3). I've suggested that there's good evidence for thinking that, in a sense, (P2) is true. When we review claims like (1)–(3), they don't say explicitly that France is nearly hexagonal or that Maria arrived on or near 3:00 and I don't think it's a winning strategy to insist that these hedged or weaker claims are invariably the ones that capture the content of what we assert or believe. I've also suggested that while what's not strictly true isn't true, we might retain the idea that knowledge is factive and say that the knowledge ascriptions that

we use are also not strictly true. Thus, we can retain (P1) in the face of the cases that have convinced some philosophers to embrace false knowledge. This, in turn, seems to leave us with a problem. If we don't reject one of the argument's premises, aren't we committing ourselves to skepticism? Isn't that bad? In a few words, 'Possibly' and 'Probably not'.

In arguing that there is the potential for a skeptical problem having to do with approximations, Buckwalter and Turri seem to assume that there would be a skeptical problem here if many of our routine knowledge ascriptions are false. Fair enough, I suppose. If I were to try to represent myself as having an anti-skeptical outlook, I probably wouldn't stress or emphasize that I thought that many or most of our ordinary knowledge ascriptions were false. Still, I feel that if someone were to say that many or most of our knowledge ascriptions were false, I'd be open to the idea that their grounds for thinking that they were false might show that their position didn't support the skeptical view as we ordinarily think of it.

As I normally think of them, skeptical challenges normally have one or more of these features. They are supposed to point toward possibilities that are, from the point of view of unenlightened 'common sense', surprising and possibly disturbing. They are supposed to call for us to rethink our connection to reality. Think here about the possibility that it's all a dream, that we're brains in vats, etc. Knowledge is, after all, valued in part because we supposedly value being in touch with reality and it's hard to see that we could be tethered to reality if we lacked knowledge. Skeptical challenges are supposed to challenge the rational authority for our beliefs.
Even if there is some subjective sense in which it might be appropriate to believe what is not known, it might seem that whether our beliefs are truly fitting responses to the world depends upon whether they might constitute knowledge. Finally, skeptical challenges are supposed to show us things that we don’t already know and, possibly, would struggle to reconcile with our current practice or outlook. They should be difficult to cope with insofar as we aspire to continue to enquire and seek to know things in some domain. The challenge before us doesn’t have these features, as I hope to show. The most familiar skeptical arguments purport to tell us something that is, for most, disturbing. If we think of knowledge in terms of the relation we bear to parts of reality when we’re in touch with it or attuned to it, the thought that we might lack knowledge would seem to be disturbing because it suggests that we might be cut off from things we care about. Think, for example, about the intuitions that convince people that they wouldn’t or shouldn’t plug into Nozick’s (1974) experience machine. We think there’s something undesirable about life in the machine that’s distinct from the pleasurable experiences it promises and a kind of world-connectedness that we seem to have iff we have knowledge of things external to us seems to be an important part of

our explanation as to why there's something missing from a life where appearance and reality diverge so radically.

In contrast to, say, arguments that ask us to consider the possibility of demonic deception or the possibility of brains in vats, the skeptical argument being offered here seems not at all threatening. Perhaps this is for two reasons. The first is that we're already aware of the kinds of limits to our discriminatory powers or our ability to measure that this argument appeals to. The second is that we know all too well how to cope with these limits. Indeed, this seems to be something that's assumed by the approximation account of knowledge. According to the approximation account of knowledge, the reason why it's felicitous to assert (1k)–(3k) in the situations envisaged is that it's evident to the relevant parties that the representations are both strictly false and false in ways that wouldn't matter for our interests. Thus, in some sense, the view assumes that the difference between a life in which the things known are true (or strictly true) and the life in which the things believed are strictly false but adequate is really nothing to be concerned by. Perhaps it's not essential to a skeptical argument that it supports a disturbing conclusion, but it does highlight one key difference between the most familiar skeptical arguments and the skeptical problem that the approximation account is introduced to address. The thing that I'd note, though, is that the defenders of the approximation account seem to be committed to the idea that the implications of their account and the more traditional accounts of knowledge that uphold the object restriction are practically irrelevant.
Presumably this is because we can reliably control for the falsity of our representations because the falsity of these representations in the cases that matter (i.e., those that the approximation view says are cases of false knowledge) doesn’t matter given our practical interests. Part of the explanation here as to how we control for this is that we are all fully aware that the relevant representations are false and fully aware that we’re not far off from the truth. This, to my mind, seems to assume that the skeptical threat that we’d be saddled with differs significantly from more familiar skeptical challenges. If we concede that it’s just acknowledged that it’s true that the real values of various variables are not far off from the values our representations indicate these variables take, the approximation theorist has to concede that we would (by traditional standards) know quite a lot about the domains we’re thinking about even if many of our representations were false. That, to my mind, is not a worrying form of skepticism. If, say, the external world skeptic conceded that many things we believed were mistaken but then added that many of our beliefs about approximations were strictly true and known (by traditional standards) to be true, they would have just conceded the falsity of external world skepticism.

Are there grounds for choosing between the approximation view and the more traditional view that upholds the object restriction that don't appeal to highly contentious claims that philosophers of language will need to sort out? Perhaps. The approximation account seems to share something in common with epistemic contextualist views that I think is worth highlighting. Some of us believe that knowledge has a kind of normative significance for belief and speech, so that there are norms of roughly the following sort:

KA: If you aren't in a position to know that p, you shouldn't assert that p.
KB: If you aren't in a position to know that p, you shouldn't believe that p.
(DeRose, 2002; Littlejohn, 2013b; Sutton, 2005; Williamson, 2000)

Contextualists about knowledge ascriptions might say that the set of things known by some indicated subject depends upon the conversational interests of the speaker attributing knowledge. It's an important part of this view that the target of the attribution and the speaker might be located in different contexts so that, say, relative to the standards in the speaker's context, it's correct to say, 'So and so does not know that p', even if there is some other context in which it's correct to say, 'So and so does know that p'.

Williamson (2005) noted that there's a potential problem here for contextualists about knowledge ascription. Suppose some individual believes, say, that there's some meat in this dish and isn't sure whether it could be served to the guests. According to KB, if this individual doesn't know this proposition, they shouldn't believe it. According to the contextualist, there can be one context in which a speaker says correctly this individual does know (we'll say that this individual does know1) and another in which a speaker says correctly this individual does not know (we'll say that this individual does not know2).
Suppose this individual wants to know whether it’s okay to continue to believe this or whether in fact they shouldn’t believe this. It doesn’t seem right to say this: well, I have good news and bad news. Since you know1, it’s fine for you to believe it. That’s the good news. Since you don’t know2, you shouldn’t believe it. This individual, quite reasonably, would want to know whether they should or shouldn’t believe, but it seems that the contextualist, if they accept KB, must answer that they should (or, perhaps, may) and shouldn’t. It seems a system of norms isn’t very good if it issues these kinds of conflicting directives. This points to a tension between the contextualist’s attitude toward knowledge ascriptions (i.e., that the propositions expressed have

truth-conditions set by the interests of attributors as opposed to targets) and an independently attractive view about the normative significance of knowledge.11 If knowledge is normatively significant, it's presumably because there's some knowledge relation (e.g., knowing1 and knowing2) that determines whether some individual may believe or should instead not believe.

One feels that the same sort of problem would arise for the approximation account. Recall the remark, '3.14 might be adequate, and hence known, as the value of pi in the grade school classroom but inadequate, and hence not known, as the value of pi in the lab engineering a global positioning system' (2020, p. 97). If this is what the approximation theorist thinks, we can easily imagine a scenario in which Tim believes pi is 3.14 where there's one context in which this is correctly described as a case of knowledge and a second context in which this is correctly described as a case of non-knowledge. To avoid any sort of contradiction, we might want to embrace a kind of contextualism according to which Tim bears the knows1 relation to a proposition without bearing the knows2 relation to it, but then we're either stuck denying KB or saying that there are some propositions that Tim shouldn't believe even though it's fine for him to believe them. This sort of problem won't arise if we impose the object restriction and take practical adequacy to be, at best, a necessary condition for knowing without committing to any view on which an adequate representation might be known if not true.

18.6  Conclusion

I have argued that the argument for rejecting the object restriction isn't decisive. The challenge to the factivity of knowledge seems to generalize in problematic ways. It seems that the considerations that support this challenge support a challenge to the idea that 'It is true that …' or 'It is a fact that …' is factive. So long as we allow that the knowledge ascriptions themselves are false but felicitous, we can retain the object restriction. This leaves us with the problem that many of our ordinary knowledge ascriptions are false, strictly speaking. It's not clear, however, that this forces us to confront a very serious skeptical problem. Unlike more familiar skeptical problems, we already know how to cope with it, and its existence doesn't undermine our confidence that we're properly connected to reality.

Notes

1. One problem with this argument is that it's not clear why the inferential case is special. Must all knowledge be based on accurate representations and only accurate representations? I hope not. There are good reasons to be skeptical of the idea that every piece of non-inferential knowledge rests on such a basis. In turn, we need a story about how these beliefs can constitute knowledge

without being supported by accurate representational states. Perhaps a basis can be safe without being accurate (e.g., an experience without content might nevertheless dispose us to believe things in such a way that we couldn't easily be mistaken). Once we countenance this possibility, it's less obvious that our inferential beliefs couldn't similarly constitute knowledge by virtue of the fact that they, too, have a safe basis that might not be accurate. For arguments that our visual beliefs won't be based on mental states or events that have representational contents of the sort that our beliefs have, see Brewer (2011), McGinn (2012), Millar (2000), and Travis (2013). For arguments that knowledge of our own minds and actions won't be based on accuracy-evaluable non-doxastic states, see Anscombe (1962). For further discussion of the epistemological significance of these views, see Littlejohn (2017).
2. I'm bracketing difficult questions about the relationship between facts and true propositions.
3. For a discussion of the basis restriction, see Borges (2020), Fitelson (2010), Lee (2021), Murphy (2017), Schnee (2015), Turri (2019), and Warfield (2005). For discussions of the object restriction, see Bricker (2022), Buckwalter and Turri (2020), and Shaffer (2015).
4. For discussions of these cases for debates about evidence, see Shaffer (2015, 2019) for arguments that claim that something only needs to be approximately true to constitute evidence. For arguments that evidence must be true, see Littlejohn (2013a, 2013b), Littlejohn and Dutant (in press), and Williamson (2000).
5. For a helpful discussion of counter-closure, see Luzzi (2010, 2019).
6. I modified the argument slightly by dropping the word 'strictly'. As we'll see below, this change won't matter if we agree that something is true iff it is strictly true. There are good reasons to think that this is so.
7.
Though, see Bricker (2022) for a discussion of whether the beliefs in question are merely approximately true or concern approximations and are thus strictly true. We shall briefly touch upon this issue in a moment. 8. For further discussion of this kind of contextualist view, see also Hansen (2011) and Huang (2017). 9. For further discussion, see Carter (2017, 2021) and Lasersohn (1999). In fairness to the contextualist view, it might be argued that appending such things changes the context, but my aim here isn’t to defend the view that what we felicitously say in such cases is true. I don’t want to tie my defence of the object restriction to any particularly contentious positions in the philosophy of language because I think that if we abandon the contextualist view (rightly or wrongly), we’ll still have ways of defending the object restriction and responding to the skeptical worry introduced above. 10. It might be worth thinking about whether such false knowledge that’s (arguably) suitable for practical purposes is suitable for other purposes such as the satisfaction of curiosity. Convinced by Whitcomb (2010) that our curiosity is satisfied only when we know, you might wonder whether the relevant falsehoods (e.g., that France is a hexagon) satisfy our curiosity. 11. For a response, however, see Blome-Tillmann (2013). See Russell (2022) for an explanation as to why the contextualist might take this response seriously.

References

Anscombe, G. E. M. (1962). On sensations of position. Analysis, 22(3), 55–58.
Blome-Tillmann, M. (2013). Contextualism and the knowledge norms. Pacific Philosophical Quarterly, 94(1), 89–100.

Borges, R. (2020). Knowledge from knowledge. American Philosophical Quarterly, 57(3), 283–297.
Brewer, B. (2011). Perception and its objects. Oxford University Press.
Bricker, A. M. (2022). Knowing falsely: The non-factive project. Acta Analytica, 37(2), 263–282.
Buckwalter, W., & Turri, J. (2020). Knowledge, adequacy, and approximate truth. Consciousness and Cognition, 83, 102950.
Carter, S. (2017). Loose talk, negation and commutativity: A hybrid static-dynamic theory. Sinn und Bedeutung, 21, 1–14.
Carter, S. (2021). The dynamics of loose talk. Noûs, 55(1), 171–198.
DeRose, K. (2002). Knowledge, assertion, and context. Philosophical Review, 111(2), 167–203.
Fitelson, B. (2010). Strengthening the case for knowledge from falsehood. Analysis, 70(4), 666–669.
Hansen, N. (2011). Color adjectives and radical contextualism. Linguistics and Philosophy, 34(3), 201–221.
Huang, M. (2017). A plea for radical contextualism. Synthese, 194(3), 963–988.
Lasersohn, P. (1999). Pragmatic halos. Language, 75(3), 522–551.
Lee, K. Y. (2021). Reconsidering the alleged cases of knowledge from falsehood. Philosophical Investigations, 44(2), 151–162.
Littlejohn, C. (2013a). No evidence is false. Acta Analytica, 28(2), 145–159.
Littlejohn, C. (2013b). The Russellian retreat. Proceedings of the Aristotelian Society, 113, 293–320.
Littlejohn, C. (2017). How and why knowledge is first. In A. Carter, E. Gordon, & B. Jarvis (Eds.), Knowledge first (pp. 19–45). Oxford University Press.
Littlejohn, C., & Dutant, J. (2021). Even if it might not be true, evidence cannot be false. Philosophical Studies, 179(3), 801–827.
Luzzi, F. (2010). Counter-closure. Australasian Journal of Philosophy, 88(4), 673–683.
Luzzi, F. (2019). Knowledge from non-knowledge: Inference, testimony and memory. Cambridge University Press.
Lynch, M. P. (2004). True to life: Why truth matters. MIT Press.
McGinn, M. (2012). Non-inferential knowledge. Proceedings of the Aristotelian Society, 112(1), 1–28.
Millar, A. (2000). The scope of perceptual knowledge. Philosophy, 75(291), 73–88.
Murphy, P. (2017). Justified belief from unjustified belief. Pacific Philosophical Quarterly, 98(4), 602–617.
Nozick, R. (1974). Anarchy, state, and utopia. Basic Books.
Russell, G. K. (2022). Fancy loose talk about knowledge. Inquiry, 65(7), 789–820.
Schnee, I. (2015). There is no knowledge from falsehood. Episteme, 12(1), 53–74.
Shaffer, M. J. (2015). Approximate truth, quasi-factivity, and evidence. Acta Analytica, 30(3), 249–266.
Shaffer, M. J. (2019). Rescuing the assertability of measurement reports. Acta Analytica, 34(1), 39–51.
Sutton, J. (2005). Stick to what you know. Noûs, 39(3), 359–396.

Travis, C. (2013). Perception: Essays after Frege. Oxford University Press.
Turri, J. (2019). Knowledge from falsehood: An experimental study. Thought: A Journal of Philosophy, 8(3), 167–178.
Warfield, T. A. (2005). Knowledge from falsehood. Philosophical Perspectives, 19(1), 405–416.
Whitcomb, D. (2010). Curiosity was framed. Philosophy and Phenomenological Research, 81(3), 664–687.
Williamson, T. (2000). Knowledge and its limits. Oxford University Press.
Williamson, T. (2005). Knowledge, context, and the agent's point of view. In G. Preyer & G. Peter (Eds.), Contextualism in philosophy: Knowledge, meaning, and truth (pp. 91–114). Oxford University Press.

Index

Arnold, A. 7, 20, 74, 139, 153
Audi, R. 18, 26–27, 40, 71, 74, 152, 154, 254
Ball, B. and Blome-Tillmann, M. 3, 6, 7, 50–53, 57–58, 59, 65, 74, 89–90, 102, 121, 136, 151, 153, 183–195, 254, 262, 264–265, 270
Borges, R. 1–8, 27, 32, 40–42, 57–58, 65, 73–75, 115–117, 120–121, 133, 136–137, 152, 154–155, 228–230, 254–255, 270, 326–327
Buford, C. and Cloos, C. M. 8, 52–53, 57–58, 74, 134, 136, 152, 154, 254
cases: Fancy Watch 18–23, 51–52, 64–74, 79–80, 134, 187–188, 199; Handout 46–52, 57, 94–100, 122, 131–132, 139–153, 169, 183–186, 282; Ten Coins 2, 77, 105–109
closure, knowledge 6–7, 27, 42, 74, 134, 136–138, 155, 161, 163–165, 167, 169, 228, 231, 236, 238–255; see also counter-closure
Coffman, E. J. 4, 7, 11–27, 62, 73–74, 80, 83, 89–90, 121, 136, 139, 152–154, 254
coherentism 26, 41
contextualism 280–281, 284, 315–316, 324–328
counter-closure, knowledge 2, 6–8, 59, 75, 77–78, 87–90, 118, 136, 138–140, 146, 152, 155–156, 161–180, 215, 230–255, 312–313, 326–327

defeaters 4–5, 40–41, 63, 90–91, 127–132, 135–138, 140–157
Dretske, F. 44, 57–58, 163, 181, 253, 255
epistemic luck see luck
Feit, N. and Cullison, A. 8, 73, 121, 141–146, 151–153
Fitelson, B. 7–8, 20–21, 26–27, 57–58, 73–75, 79–80, 89–90, 102, 134, 137, 139, 154–155, 181, 187, 254–255, 326–327
foundationalism 34–36, 41, 43, 201, 232, 238, 244–245, 247, 251–252, 256, 286
Gettier, E. 2, 4–5, 7–8, 23, 27–29, 33, 41–42, 47–49, 60, 63, 67, 69–70, 72–77, 89–90, 92–97, 100, 102–103, 105–111, 116–118, 120, 124, 126–127, 130, 132, 134–142, 145, 147–148, 151–157, 168–169, 181, 185, 206, 217–218, 222–223, 226, 228–230, 254–255, 268–271, 274, 276
Goldman, A. 41–42, 147, 152, 155
Hawthorne, J. 54, 58, 62, 73–75, 87, 90, 134, 137, 139, 152, 155, 189, 254–255, 283, 285, 308–309
Hilpinen, R. 7, 33, 42, 74–75, 120, 127, 134, 137, 139, 141, 155
Klein, P. 1–4, 6–8, 18, 27–43, 58, 60, 62–63, 73–75, 89–90, 102–103, 116–122, 124, 127–130, 132–137, 139, 141–142, 147–155, 165, 180–181, 219–220, 228, 230, 254–255, 270, 283–284
knowledge from essential grounds or falsehood 4–5, 11, 26, 34, 52, 60–62, 67–70, 72–73, 75–76, 79–81, 84, 89, 93, 113, 120, 132, 139–141, 143–145, 148–149, 152, 155, 186, 206–207, 217, 231, 233, 268
knowledge closure see closure
knowledge counter-closure see counter-closure
Lackey, J. 56–58, 153, 155, 284
lotteries 82, 90, 124–126, 155, 189, 272, 283
luck 4, 63, 68–69, 73, 80, 88, 93–98, 101–103, 109, 118, 138, 140, 151, 153, 155, 168–170, 206, 268, 272, 288–289, 305, 309
Luzzi, F. 2, 6–8, 11, 18–23, 26–27, 59–60, 62, 74–75, 78, 89–90, 102–103, 116, 118, 134, 137, 139–141, 145, 152, 154, 156, 170, 179, 181, 183–189, 192, 194–195, 222, 229–256, 326–327
Montminy, M. 5, 8, 17–22, 27, 52, 57–58, 60–75, 89, 91, 102–103, 121, 137, 152, 156, 185–187, 192, 195, 228, 230, 254–255, 262, 264, 271
Murphy, P. 2, 6–7, 74–75, 139, 156, 179–180, 215–230, 326–327
No False Lemmas 76–77, 84, 87, 89, 140–141, 152, 156, 169
Nozick, R. 123–124, 134–135, 138, 152, 156, 253, 256, 322, 327
propositional justification 23, 29–33, 39–40, 59, 142, 145, 148–151, 156, 192, 194, 207, 219, 307, 312
proxy premise 65–66, 68, 74, 140, 152, 169–171, 185–186, 189, 192
Russell, B. 116, 118, 124, 127, 134, 136, 152, 156, 173, 181, 213–214, 326–327
safety 5, 23, 85, 92, 95–102, 109, 123–125, 133–135, 138, 155, 198, 218
Schnee, I. 1–8, 65, 73–75, 89, 91, 102–103, 121, 133, 138, 152, 156, 228, 230, 254, 256, 262, 267–268, 271, 326–327
sensitivity 50, 98, 103, 113, 123, 125–126, 133–134, 184–185, 188, 217, 271, 284, 292–293, 300–301, 309
transmission and transmission failure 15, 27, 136, 172, 195, 253, 255
Warfield, T. 1–3, 7, 18, 27, 41, 43, 45–51, 53–54, 56–58, 60–61, 64, 74–75, 79–80, 89, 91, 102–103, 116, 119, 122, 131, 133–134, 138–141, 152, 157, 169–170, 182–187, 194–195, 219–220, 228, 230, 254, 256, 262, 272, 282, 285, 326, 328
Williamson, T. 102–103, 117, 119–120, 133, 135, 138, 152, 157, 189, 213, 218, 228, 230, 274, 276–277, 285, 324, 326, 328
Wright, C. 6, 26–27, 213, 231–256