Oxford Studies in Epistemology Volume 6 9780198833314, 9780198833321, 0198833318

Oxford Studies in Epistemology is a biennial publication which offers a regular snapshot of state-of-the-art work in this field.


English, 304 pages, 2019


Table of contents:
Cover
OXFORD STUDIES IN EPISTEMOLOGY: Volume 6
Copyright
CONTENTS
EDITORS’ PREFACE
CONTRIBUTORS
1. Perceptual Justification and the Cartesian Theater (David James Barnett)
1. Background
2. Anti-Cartesianism and Perceptual Partiality
3. Motivations for Anti-Cartesianism
3.1. First Motivation: Anti-Skeptical Advantages
3.2. Second Motivation: We can’t be Cartesians “All the Way Down”
4. A Problem for Anti-Cartesianism
4.1. First Option: Biting the Bullet
4.2. Second Option: Licensing Chauvinism
4.3. Third Option: Rejecting Evidentialism about Defeaters
4.4. Conclusion
References
2. Subjective Probability and the Content/Attitude Distinction
1. Content
1.1. The Measure Theory of Mental Content
1.2. A Toy Measure Theory
2. Relativism about the Content/Attitude Distinction
3. Ramifications for Belief Modeling
3.1. Binary Belief vs Credence
3.2. Precise vs Imprecise Credence
4. Ramifications for Epistemology
References
3. Modal Empiricism: What is the Problem?
1. Modal Empiricism Rejected
2. Modal Empiricism Defended
3. First Sceptical Consequence: Nomological Necessities
4. Second Sceptical Consequence: Modal Rationalism
4.1. Modal Rationalism and (R1)
4.2. Modal Rationalism and (R2)
5. Conclusion
References
4. Accuracy and Educated Guesses
1. Educated Guesses
2. Immodesty
3. Probabilism
3.1. Non-Triviality
3.2. Boundedness
3.3. Finite Additivity
3.4. Other Applications
4. Alternative Approaches
4.1. Epistemic Utility Theory
4.2. The Practical Approach
5. Some Final Comments on Guessing and Accuracy
6. Conclusion
References
5. Who Wants to Know?
1. Our Knowledge-Centered Epistemology
1.1. The Value of Knowledge
1.2. Epistemic States Higher than Knowing
2. Obligations in Ethics and in Epistemology
2.1. Characterizing Epistemic Normativity
2.2. Epistemic Supererogation and Special Epistemic Obligations
2.3. Explaining the Phenomenon
3. Special Epistemic Obligations and the Aims of Professional Inquiry
3.1. Objection: The Obligations are not Epistemic
3.2. Objection: The Obligations are Universal
3.3. Objection: The Obligations Reflect an Attempt to Maximize Knowledge
3.4. Objection: The Obligations Reflect Shifting Standards for Knowing
4. Conclusion
References
6. On the Accuracy of Group Credences
1. Group Opinions and Group Knowledge
2. Linear Pooling
3. Epistemic Value and Accuracy
4. The Accuracy Argument for Linear Pooling
5. Objections to Linear Pooling: Preserving Independences
6. Objections to Linear Pooling: Updating on Evidence
7. Conclusion
Appendix: Sketch of Proof of Theorem 1
References
7. Expressivism, Normative Uncertainty, and Arguments for Probabilism
Introduction
1. The Naïve Expressivist’s Problem with Uncertainty
2. Two Styles of Sophisticated Expressivism
2.1. A-type Expressivism, Schroeder-style
2.2. B-type Expressivism, Silk-style
3. Incorporating Normative Uncertainty into Sophisticated Expressivist Views
3.1. Uncertainty in Schroeder’s Framework
3.2. Uncertainty in Silk’s Framework
4. Dutch Book and Accuracy Arguments
4.1. Dutch Book Arguments for Probabilism
4.2. Accuracy Arguments for Probabilism
5. Expressivist Arguments for Probabilistic Coherence
5.1. Expressivist Accuracy-Dominance
5.2. Expressivist Dutch Books
6. Prospects for Expressivist Rationality Constraints
References
8. Space, Structuralism, and Skepticism
1. Skepticism and the Structuralist Response
2. Modality and Fundamentality
3. Causation
4. The Unity of Space
5. Contingency and Intrinsic Properties
6. The Larger Picture
References
9. What to Believe About Your Belief that You’re in the Good Case
1. Evidence and Inter-Level Coherence
2. Naïve Evidentialism
2(a). Evidential Parity
2(b). Epistemic Priority
3. A Priorism
3(a). A Priori Justification as Evidence
3(b). A Priori Justification as Probabilistically Irrelevant
3(c). A Priori Justification as Pre-Evidential Probability
4. Externalism
5. A Hybrid View
References
10. Knowledge, Practical Adequacy, and Stakes
1. Conceptual Preliminaries
2. Stakes and Practical Adequacy
2.1. HIGH STAKES and EVEN STAKES
2.2. Failures of HIGH STAKES*
2.3. Failures of EVEN STAKES*
3. Refining ‘STAKES’
4. Knowledge, Practical Adequacy, and Stakes
4.1. Stakes-Refined Purism and Modest Stability
4.2. Some Troublesome Cases
4.3. Destabilizing Trios
References
11. Clarifying Pragmatic Encroachment: A Reply to Charity Anderson and John Hawthorne on Knowledge, Practical Adequacy, and Stakes
1. Two Pragmatist Theses
2. Are the Two Pragmatist Theses Equivalent?
3. Problem Cases for Pragmatic Encroachment
References
12. Stakes, Practical Adequacy, and the Epistemic Significance of Double-Checking
1. Anderson and Hawthorne (A&H) on Pragmatic Encroachment
2. A&H against Pragmatic Encroachment via Practical Adequacy, I: A Concern
3. A&H against Pragmatic Encroachment via Practical Adequacy, II: The Strengthened Case
4. The Epistemology of Double-Checking
5. An Alternative Account of the Epistemic Significance of Practical Considerations: The Doctrine of Value-Reflecting Reasons
References
13. How Much is at Stake for the Pragmatic Encroacher
References
INDEX


OXFORD STUDIES IN EPISTEMOLOGY


OXFORD STUDIES IN EPISTEMOLOGY

Editorial Advisory Board:
Stewart Cohen, University of Arizona
Keith DeRose, Yale University
Richard Fumerton, University of Iowa
Alvin Goldman, Rutgers University
Alan Hájek, Australian National University
Gil Harman, Princeton University
Frank Jackson, Australian National University and Princeton University
Jim Joyce, University of Michigan
Jennifer Lackey, Northwestern University
Jennifer Nagel, University of Toronto
Jonathan Vogel, Amherst College
Tim Williamson, University of Oxford

Associate Editor:
Julianne Chung, University of Louisville


OXFORD STUDIES IN EPISTEMOLOGY Volume 6

Edited by Tamar Szabó Gendler and John Hawthorne


Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

© the several contributors 2019

The moral rights of the authors have been asserted

First Edition published in 2019
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Cataloging in Publication Data
Data available

ISBN 978–0–19–883331–4 (hbk.)
ISBN 978–0–19–883332–1 (pbk.)

Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A.

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.


CONTENTS

Editors' Preface vii
Contributors xi

1. Perceptual Justification and the Cartesian Theater (David James Barnett) 1
2. Subjective Probability and the Content/Attitude Distinction (Jennifer Rose Carr) 35
3. Modal Empiricism: What is the Problem? (Albert Casullo) 58
4. Accuracy and Educated Guesses (Sophie Horowitz) 85
5. Who Wants to Know? (Jennifer Nado) 114
6. On the Accuracy of Group Credences (Richard Pettigrew) 137
7. Expressivism, Normative Uncertainty, and Arguments for Probabilism (Julia Staffel) 161
8. Space, Structuralism, and Skepticism (Jonathan Vogel) 190
9. What to Believe About Your Belief that You're in the Good Case (Alex Worsnip) 206
10. Knowledge, Practical Adequacy, and Stakes (Charity Anderson and John Hawthorne) 234
11. Clarifying Pragmatic Encroachment: A Reply to Charity Anderson and John Hawthorne on Knowledge, Practical Adequacy, and Stakes (Jeremy Fantl and Matthew McGrath) 258
12. Stakes, Practical Adequacy, and the Epistemic Significance of Double-Checking (Sanford C. Goldberg) 267
13. How Much is at Stake for the Pragmatic Encroacher (Jeffrey Sanford Russell) 279

Index 287


EDITORS’ PREFACE

Published under the guidance of a distinguished editorial board, each volume of Oxford Studies in Epistemology seeks to publish cutting-edge work that brings new perspectives to traditional epistemological questions and opens previously unexplored avenues of investigation. We are delighted to present the sixth volume in this series. Like its predecessors, OSE 6 brings together a range of exciting new work in the field of epistemology from throughout the English-speaking world. Among the topics discussed in this issue are the nature of perceptual justification, intentionality, modal knowledge, credences, epistemic and rational norms, expressivism, skepticism, and pragmatic encroachment. Like its predecessors, OSE 6 is methodologically diverse, with some papers drawing from formal epistemology and decision theory while others employ traditional philosophical analysis and argumentation. As always, we hope that readers will be stimulated by the insights and ideas that these articles will surely generate.

This volume also includes the winner of the second biennial Sanders Prize in Epistemology (supported by the Marc Sanders Foundation and open to scholars within fifteen years of their receiving a PhD), Sophie Horowitz's 'Accuracy and Educated Guesses', as well as a runner-up, David James Barnett's 'Perceptual Justification and the Cartesian Theater'. Horowitz offers a new answer to the question of what makes credences more or less accurate: credences are more accurate insofar as they license true educated guesses, and less accurate insofar as they license false educated guesses—an account that is compatible with immodesty, can be used to justify certain coherence constraints on rational credence (such as probabilism, stated in the display following this paragraph), and has advantages over rival accounts of accuracy. Barnett argues that although a traditional Cartesian epistemology of perception (on which perception does not provide one with direct knowledge of the external world) is faced with well-known skeptical challenges, any anti-Cartesian view strong enough to avoid these challenges must license a way of updating one's beliefs in response to anticipated experiences that seems diachronically irrational (and that to avoid this result, the anti-Cartesian must either license an unacceptable epistemic chauvinism or else claim that merely reflecting on one's experiences defeats perceptual justification).
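Probabilism, for readers new to the term, is the thesis that rational credences obey the probability axioms. The following is a standard formulation, supplied here for reference; it tracks the constraints Horowitz's chapter discusses under the headings 'Non-Triviality', 'Boundedness', and 'Finite Additivity', though the notation is ours rather than hers. A credence function $c$ is probabilistically coherent just in case

$$0 \le c(X) \le 1 \ \text{for every proposition } X, \qquad c(\top) = 1, \qquad c(X \vee Y) = c(X) + c(Y) \ \text{whenever } X \text{ and } Y \text{ are mutually exclusive}.$$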

Other papers in this issue also discuss credences, rational and epistemic norms, and skepticism. In 'Subjective Probability and the Content/Attitude Distinction', Jennifer Rose Carr contends that if mental contents are a form of measurement system for representing behavioral and psychological dispositions, then neither imprecise credences nor precise credences can be rationally impermissible. In 'Modal Empiricism: What is the Problem?' Albert Casullo makes a case for the claim that the cost of refuting modal empiricism is modal skepticism. In 'Who Wants to Know?' Jennifer Nado proposes that professional inquirers (including philosophers) are subject to special epistemic obligations which require them to meet higher standards than those required for knowing—a variation that cannot be accounted for adequately via the usual standard-shifting accounts of knowledge (such as contextualism or subject-sensitive invariantism) but rather calls for a more pluralistic approach. In 'On the Accuracy of Group Credences' Richard Pettigrew argues for the credal judgment aggregation principle 'Linear Pooling' (according to which the credence function of a group should be a weighted average or 'linear pool' of the credence functions of the individuals in the group; see the display following this paragraph) based on considerations of accuracy. In 'Expressivism, Normative Uncertainty, and Arguments for Probabilism', Julia Staffel explores whether we can devise expressivist versions of the standard arguments used to support rationality constraints on degrees of uncertainty (Dutch book arguments and accuracy-dominance arguments), and concludes that while we can, the resulting arguments do not support the same rationality constraints as the original versions. In 'Space, Structuralism, and Skepticism', Jonathan Vogel opposes structuralism about certain spatial properties (the view on which properties are individuated by the way they causally interact with other properties), and the structuralist response to skepticism more broadly. Finally, in 'What to Believe About Your Belief that You're in the Good Case', Alex Worsnip presents a positive view that blends externalism about evidence with a mild, qualified kind of pragmatism, which aims to do justice to the sense that anti-skeptical assumptions are evidentially groundless while also maintaining that one cannot rationally believe something that one judges oneself to lack sufficient evidence for.
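Stated formally (a standard formulation of the principle, supplied for reference; the notation is ours rather than Pettigrew's): where a group consists of individuals $1, \dots, n$ with credence functions $c_1, \dots, c_n$, Linear Pooling says that the group's credence in each proposition $X$ should be a weighted average of the individual credences,

$$c_G(X) = \sum_{i=1}^{n} w_i \, c_i(X), \qquad \text{with weights } w_i \ge 0 \text{ and } \sum_{i=1}^{n} w_i = 1.$$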

This issue also includes a symposium on Charity Anderson and John Hawthorne's 'Knowledge, Practical Adequacy, and Stakes'. Anderson and Hawthorne allege that defenses of pragmatic encroachment commonly rely on two thoughts that are often erroneously assumed to complement each other: first, that the gap between one's strength of epistemic position on p and perfect strength sometimes makes a difference to what one is justified in doing, and second, that the higher the stakes, the harder it is to know. Along the way they indicate a variety of strategies for regimenting the notion of stakes and raise some purportedly troubling cases for pragmatic encroachment. Discussants include Jeremy Fantl and Matthew McGrath ('Clarifying Pragmatic Encroachment: A Reply to Charity Anderson and John Hawthorne on Knowledge, Practical Adequacy, and Stakes'), Sanford C. Goldberg ('Stakes, Practical Adequacy, and the Epistemic Significance of Double-Checking'), and Jeffrey Sanford Russell ('How Much is at Stake for the Pragmatic Encroacher').

We would like to extend thanks to our referees James Beebe, Alexander Bird, Billy Dunaway, Kenny Easwaran, Daniel Greco, Wesley Holliday, Yoaav Isaacs, Thomas Kelly, Christoph Kelp, Joshua Schechter, Mark Schroeder, Scott Sturgeon, and three anonymous reviewers, and to our editorial board members Keith DeRose (Yale University), Richard Fumerton (University of Iowa), Alvin Goldman (Rutgers University), Alan Hájek (Australian National University), Gil Harman (Princeton University), Frank Jackson (Australian National University and Princeton University), Jim Joyce (University of Michigan), Jennifer Lackey (Northwestern University), Jennifer Nagel (University of Toronto), Jonathan Vogel (Amherst College), and Tim Williamson (University of Oxford). We are also most grateful, as always, to Peter Momtchiloff for his much-appreciated ongoing support of this series.

Tamar Szabó Gendler, Yale University
John Hawthorne, University of Southern California
Editors

Julianne Chung, University of Louisville
Associate Editor


CONTRIBUTORS

Charity Anderson, Baylor University
David James Barnett, University of Toronto
Jennifer Rose Carr, University of California, San Diego
Albert Casullo, University of Nebraska-Lincoln
Jeremy Fantl, University of Calgary
Sanford C. Goldberg, Northwestern University
John Hawthorne, University of Southern California
Sophie Horowitz, University of Massachusetts Amherst
Matthew McGrath, Rutgers University, New Brunswick
Jennifer Nado, University of Hong Kong
Richard Pettigrew, University of Bristol
Jeffrey Sanford Russell, University of Southern California
Julia Staffel, University of Colorado at Boulder
Jonathan Vogel, Amherst College
Alex Worsnip, University of North Carolina, Chapel Hill


1. Perceptual Justification and the Cartesian Theater

David James Barnett

1. Background

According to a traditional Cartesian epistemology of perception, perception does not provide one with direct knowledge of the external world. Instead, when you look out to see a red wall, what you learn first is not a fact about the color of the wall—i.e., that it is red—but instead a fact about your own visual experience—i.e., that the wall looks red to you. If you are to know or to justifiably believe that the wall is red, then you must be in a position to justifiably infer this conclusion about the external world from known premises about your own visual experience.1

The Cartesian account is sometimes accused of treating ordinary perception on the model of a "Cartesian theater," with perceptual experiences playing the role of mere images displayed before an internal spectator. As I will explain, I think there is some truth to the accusation. What is less clear to me is how this seemingly unappealing model of perceptual justification can be avoided. For even though there are powerful motivations for rejecting the Cartesian account, any alternative view strong enough to appeal to these motivations faces serious problems of its own.

My aim in what follows is to explain an underappreciated commitment of the Cartesian account, some motivations for resisting the account, and the problem that confronts any view that takes advantage of these motivations. The underappreciated commitment of the Cartesian account concerns the following question: Why does what you are justified in believing depend on your perceptual experiences, while what I am justified in believing depends on my perceptual experiences? The Cartesian, I claim, is committed to the following answer: It is because you know what your own experiences are, but you typically do not know what mine are (and vice versa for me). And this answer commits the Cartesian to claiming that, to whatever extent you do happen to know what my perceptual experiences are, my experiences will justify your beliefs in the same way and to the same degree that your own experiences do. After explaining all of this in Sections 2 and 3, I will go on in Section 4 to discuss the problem facing accounts of perceptual justification that follow me in rejecting these Cartesian claims.

1 The Cartesian view is so-called because it is inspired by Descartes’ treatment of perceptual skepticism, though it arguably goes beyond anything he said. For classic examples, see, e.g., (Chisholm, 1966) and (Russell, 1912).


But first, I want to consider briefly the more familiar landscape of issues surrounding the Cartesian epistemology of perception. We can start by considering the epistemic situation of an agent who really is in a Cartesian theater:

(CARTESIAN THEATER) You find yourself in a windowless room, which is empty aside from a closed-circuit TV. The TV is hooked up to a camera that is located elsewhere, where it faces a wall of an unknown color. When the TV is switched on, it displays an image of a red wall. You have strong reason to believe that the images on the TV are a reliable guide to the color of the wall, and as a matter of fact the images are both accurate and reliably generated.

It should be uncontroversial that you are justified in believing that the wall is red in CARTESIAN THEATER. And in broad outline, it should also be uncontroversial why you are justified. Your only access to the color of the wall comes from the images that you see on your TV screen. So you are justified in believing that the wall is red only because you are in a position to justifiably infer that the wall is red from what you know about these images.2 This of course does not mean that you must consciously go through such an inference. But it plausibly does mean that you must be in a position justifiably to do so.

Since you must be in a position to justifiably infer the color of the wall from known premises about the images on your TV screen, this lends credibility to some further familiar claims. First, it is plausible that you still would have been justified in believing that the wall is red even if the images on your TV had been inaccurate, and even if the process that generated them had been objectively unreliable. For you would have the same evidence either way. Second, it is plausible that you would not have been justified in believing that the wall is red if you had lacked background evidence supporting the reliability of the images on the TV screen. For it is plausible that one cannot justifiably infer from premises about the images to conclusions about the wall unless one has evidence supporting that the former are a reliable guide to the latter.

To be sure, both of these further claims are controversial. Some reliabilists about inferential justification, for example, would deny them both. I think we should accept both of these claims, as I have argued elsewhere.3 But instead of discussing these claims in more detail here, I want instead to examine a prominent way of resisting a traditional Cartesian epistemology of perception that is willing to grant these claims about Cartesian theaters, and that instead denies that ordinary perception puts us in a comparable situation.

2 For simplicity, I ignore the view that by looking at the images, one can see the wall itself. See (Briscoe, 2016, Sec. 5) for critical discussion of a view like this. Those sympathetic to such a view could substitute an example where you receive non-imagistic information about the color of a wall.
3 See especially (Barnett, 2014) and (Barnett, 2015).


Consider a corresponding case of ordinary visual perception:

(VISUAL PERCEPTION) You look at a wall, and see that it is red. You have strong reason to believe that your visual experiences are reliable.

Although some radical skeptics might deny that it is possible to have strong reason to believe that one's visual experiences are reliable, it should be uncontroversial that if VISUAL PERCEPTION is possible, then in it you are justified in believing that the wall is red. What is controversial is how to explain why you are justified.

I take the Cartesian account to be this. When you look at a red wall, what you learn first is a fact about your own mind: the fact that you are having an experience as of a red wall (henceforth: a reddish experience).4 This reddish experience itself does not give you justification to believe that the wall is red, any more than a red image on a TV screen does. Instead, it is your knowledge that you have this experience which puts you in a position to justifiably infer that the wall is red, just as knowledge of the red image does in CARTESIAN THEATER.

If we accept this Cartesian account of why you are justified (and accept the plausible but controversial further claims about Cartesian theaters set out above), then this will leave us with two familiar corollaries. The first is:

(PERCEPTUAL INTERNALISM) Genuinely perceiving never gives one stronger (or weaker) justification for one's perceptual beliefs than merely seeming to perceive does.

The idea behind PERCEPTUAL INTERNALISM is that someone who is hallucinating, or who otherwise has the same experiences as a genuine perceiver, is as justified in her perceptual beliefs as the perceiver is. This is plausibly a consequence of the Cartesian account because both the perceiver and the hallucinator will be in the position of having to infer conclusions about the external world from the same body of evidence. The second corollary is:

(PERCEPTUAL INCREDULISM) One is never justified in believing what one perceives unless one has independent evidence that one's perceptual experiences are reliable.

This is plausibly a consequence of the Cartesian account because it is natural to think that one would need a corresponding kind of evidence in order to justifiably infer that the wall is red from one's knowledge that there is a red image on a TV screen. If the Cartesian is right that ordinary perception requires one to make inferences from knowledge of one's experiences, then this plausibly will require corresponding independent evidence supporting the reliability of perception. (See Section 3.1 for more on this controversial matter.)

4 A disjunctivist can read my talk of having a reddish experience as shorthand for a disjunction, such that an agent has a reddish experience if she either sees a red wall or (merely) seems to see a red wall.


Recent anti-Cartesian theorists, however, have pushed back against this traditional account. These anti-Cartesians do not merely deny the psychological claim that we go around introspecting our experiences and drawing inferences from the introspected premises. Instead, they think that the epistemology of perception is anti-Cartesian in a deeper sense. In their view, your knowledge of your experiences is not what justifies perceptual beliefs. Instead, certain perceptual states contribute to your perceptual justification in a way that is not exhausted by whatever inferences you can draw from your knowledge of your experiences. Very often, these opponents of Cartesianism also seek to reject one or both of Cartesianism's familiar corollaries.

Take, for example, dogmatists and phenomenal conservatives.5 These anti-Cartesians are motivated in part by a desire to reject PERCEPTUAL INCREDULISM, which they see as leading to an untenable skepticism about perception. Roughly speaking, their idea is that because one's perceptual experiences themselves can provide a distinctive form of immediate justification not acknowledged by the Cartesian account, they can give you stronger justification for your perceptual beliefs than what the Cartesian account predicts. We will consider how this might give anti-Cartesians an anti-skeptical advantage over Cartesianism in Section 3.1.

Other anti-Cartesians, such as epistemological disjunctivists, go even further in their rejection of the traditional Cartesian account, and deny that the perceptual states most directly involved in perceptual justification are experiences.6 They claim instead that the relevant perceptual states are factive states like seeing. Because these factive states are absent in cases of hallucination, these anti-Cartesians reject not only PERCEPTUAL INCREDULISM but PERCEPTUAL INTERNALISM as well.

Although externalist versions of anti-Cartesianism like disjunctivism are important, our discussion will focus more directly on internalist versions. This is because the contrast with Cartesianism of greatest concern to us arises even without a rejection of the Cartesian view that the perceptual states most directly involved in perceptual justification are (non-factive) experiences. Even so, much of what I will say should extend to externalist anti-Cartesian views as well. And I will highlight a few places where the differences between internalist and externalist forms of anti-Cartesianism are important.

It is important to note that even though both of these familiar forms of anti-Cartesianism reject PERCEPTUAL INCREDULISM, that is not something that I build into the view.

5 See, e.g., (Brogaard, 2013), (Brown, 2013), (Cohen, 2010), (Cullison, 2010), (Huemer, 2006 and 2007), (Jehle and Weatherson, 2012), (Kung, 2010), (Lycan, 2013), (Moretti, 2015), (Pollock and Cruz, 1999), (Pryor, 2000 and 2013), (Silins, 2008), (Tucker, 2010 and 2013), (Weatherson, 2007), and (Wedgwood, 2013), among many others. Note that while anti-Cartesianism is often coupled with a Moorean reply to the skeptic, this is not something I build into the view.
6 See, e.g., (McDowell, 1982 and 1995) and (Pritchard, 2012), and see (Soteriou, 2014) for a helpful review.


Instead, anti-Cartesianism as I define it is a view about the explanation of perceptual justification. It says that perceptual states themselves can justify beliefs, and that their involvement in the justification of one's beliefs need not always consist in their serving as evidence from which the beliefs are inferred. As we go on, I will have more to say about the additional commitments I think the anti-Cartesian ought to take on, including PERCEPTUAL INCREDULISM. But for now, I define anti-Cartesianism in a minimal way, so that I must work to explain why it should take the more specific form I favor.

2. Anti-Cartesianism and Perceptual Partiality

It is time to consider an underappreciated point of contrast between Cartesianism and anti-Cartesianism. The contrast concerns how each view explains the distinctive epistemic significance of one's own experiences. Both views can accept the datum that the justification of one's beliefs depends more directly on one's own experiences than it does on another person's. But they explain this datum in different ways. The Cartesian's explanation, I will claim, commits the Cartesian to what I call PERCEPTUAL IMPARTIALITY. And as I go on to explain in Section 3.1, the Cartesian's commitment to the more familiar PERCEPTUAL INCREDULISM is a byproduct of this prior commitment.

We can start by again comparing a pair of examples, one involving a Cartesian theater, and the other involving ordinary perception. Here is the first:

(TWO SPECTATORS) You are in a windowless room, which is empty aside from a closed-circuit TV. The TV is hooked up to a camera that is located elsewhere, where it faces a wall of an unknown color. Another camera faces the same wall. This camera is connected to a different closed-circuit TV, which sits in a different windowless room, which is occupied by a different person, whom we will call 'Other'. Your evidence concerning the reliability of your own TV and that concerning the reliability of Other's TV are on a par, in the sense that for any evidence you have concerning your own, you have corresponding evidence concerning Other's. At the moment, both TVs are turned off. But pretty soon, exactly one of the TVs will be turned on. You know all of this.

Consider some claims about this example that should be uncontroversial. The first is that there is an asymmetry between the epistemic significance for you of images on your TV screen and of images on Other's. In particular, if your TV screen is the one that is turned on, and if it displays an image of a red wall, then this can result in your having some justification to believe that the wall is red. How much justification it gives you will depend on your background evidence concerning factors like the reliability of the images on your TV. But regardless of how these details of the example are filled in, the important point is that the images on your TV screen have the potential to affect your justification to believe that the wall is red. If instead Other's TV is turned on, and Other's TV displays an image of a red wall, then this will not result in your having any reason to believe that the wall is red.


In short, the images on your TV screen can affect your justification for believing the wall is red in a way that the images on Other's TV do not. And this is true even though your evidence concerning the reliability of Other's TV is on a par with your evidence concerning your own.

A second uncontroversial claim concerns the explanation of this asymmetry. The explanation is simply that you are in a position to know what images are displayed on your TV screen—you can see them!—but you are in no position to know what is displayed on Other's TV screen. Because you could know what images are on your TV, you could be in a position to infer conclusions about the color of the wall from them. But you are in no position to infer conclusions from the images on Other's TV, for the simple reason that you would not know what they are. Thus the asymmetry in the epistemic significance of the images is explained by an asymmetry in access; the images on your TV screen affect what you are justified in believing simply because you are in a position to know what they are.

This explanation has some important consequences for a situation in which you are able to learn about the images on Other's TV screen in some way other than by seeing—for example, where Other is able to tell you what he sees on his TV screen. The consequence is that if this other way of knowing about Other's images confers knowledge with the same degree of epistemic security that one gets by seeing, then it would give you just as much justification to believe that the wall is red as you get from seeing the same images on your own TV screen. And moreover, even if your alternative way of accessing Other's TV screen confers less secure knowledge than seeing does, it will give you less justification to believe the wall is red only to the extent that your knowledge of the images on his TV screen is less secure. (We will discuss this further in Section 4.1.)

These consequences follow because by stipulation you have no evidence supporting the reliability of your TV over Other's. If you nevertheless received stronger justification from seeing your own TV's images, this would mean that you could be justified in placing greater confidence in them than in Other's (e.g., by believing what you see on your TV while holding back from believing what you know is displayed on Other's). But that would amount to a kind of irrational chauvinism. For you would be counting the images on your TV as stronger evidence of the truth, simply because that TV is yours.

Plausibly, everyone should accept these claims about Cartesian theaters. But turn now to the second example involving ordinary perception, over which Cartesian and anti-Cartesian views conflict:

(TWO PERCEIVERS) You and Other are facing a wall of an unknown color. Your evidence concerning the reliability of your own visual faculties is on a par with that concerning the reliability of Other's, in the sense that for any evidence you have concerning your own, you have corresponding evidence concerning Other's. You are both wearing blindfolds, but pretty soon, exactly one of you will have his or her blindfold removed. You know all of this.

It should be uncontroversial that if you go on to have your blindfold removed, and if you have a reddish experience, then this can give you some reason to believe that the wall is red.


Whether it gives you sufficient reason to justify the belief that the wall is red will depend on your background evidence about factors like the reliability of your vision. But the important point for now is simply that your experiences can have a positive epistemic impact on your justification to believe that the wall is red. If instead it is Other's blindfold that is removed, and if Other is the one who has the reddish experience, then this will not give you any reason to believe that the wall is red. And this remains true even though your evidence concerning Other's visual reliability is entirely on a par with evidence about your own. Thus it should be uncontroversial that your own experience can asymmetrically affect what you are justified in believing in this way. What Cartesians and anti-Cartesians disagree about is the explanation of this asymmetry.

The Cartesian about perception must explain this asymmetry in the same way that we explain the corresponding asymmetry in TWO SPECTATORS. That is, the Cartesian must say that your perceptual experiences asymmetrically affect your justification simply because you have a special kind of access to facts about your experiences. In TWO SPECTATORS, you know about the images on your TV screen because you can see them, while in TWO PERCEIVERS, you know about your own experiences not by seeing but instead in a special introspective way. But despite this difference, the Cartesian thinks that these cases have in common that the asymmetry in epistemic significance is explained by an asymmetry in access. This is because the Cartesian thinks that the contribution made to one's perceptual justification by perceptual experience is exhausted by one's knowledge of those experiences.

Notice some apparent consequences of the Cartesian explanation. Since the Cartesian explains the epistemic asymmetry by an asymmetry in access, the Cartesian is committed to granting that if you somehow could have equally secure access to what Other's visual experiences are, this would give you the same degree of justification to believe that the wall is red as you get from your own experiences. And even if it is not possible for you to have equally secure access to Other's experiences as you have to your own, it remains true that you get less justification from knowledge of Other's experiences only to the extent that your access to those experiences is less secure. These consequences follow from the Cartesian explanation because by stipulation you have no evidence supporting the reliability of your experiences over Other's. For this reason, it would seem irrationally chauvinistic to count your own experiences as stronger evidence of the truth, by placing greater confidence in any conclusions you might infer from what you know about them. The Cartesian thus must accept these consequences, on pain of licensing irrational chauvinism as justified.7

7 But see Section 4.2 for one way in which the Cartesian might try to deflect this charge of chauvinism.


In Section 1 we noted two familiar corollaries of Cartesianism, PERCEPTUAL INTERNALISM and PERCEPTUAL INCREDULISM. We are now ready to introduce a less familiar, though in my view no less important, third corollary:

(PERCEPTUAL IMPARTIALITY) Having a perceptual experience can never give you substantially stronger justification for a perceptual belief than you would get from mere knowledge that another person has had such an experience.

Note that PERCEPTUAL IMPARTIALITY says that you cannot receive substantially stronger justification from your own experiences than you would get from knowledge of another person's. For a Cartesian can plausibly claim that your own experiences give you slightly stronger justification, for the simple reason that you plausibly never can be quite as certain of another person's experiences as you are of your own. What the Cartesian seems unable to accept is that even to the extent that you can know another person's experiences, this knowledge still does not confer the same degree of justification as you get from your own experiences. For again, it would seem irrationally chauvinistic to count one's own experiences as stronger evidence of the truth than another person's experiences, unless you have some evidence suggesting that your experiences really do amount to more reliable evidence.

Turn now to the anti-Cartesian, who claims that your merely having a reddish experience can give you justification to believe that the wall is red, in a way that is not fully accounted for by your knowledge of those experiences. Now the anti-Cartesian does not deny that you do have a special way of knowing your own experiences. It is obvious that you do! What the anti-Cartesian denies is that this asymmetry in knowledge fully explains why your own experiences have an epistemic significance for you that Other's do not have. Nor does the anti-Cartesian need to deny that your knowledge of your own reddish experience can give you even more justification to believe that the wall is red, in addition to what you get just by having the experience—although some anti-Cartesians might wish to deny this.8 What the anti-Cartesian says is simply that having a reddish experience gives you some justification all on its own.

The Cartesian and anti-Cartesian thus disagree in the first instance about why your perceptual beliefs are justified, in cases where they are justified. But in disagreeing with the Cartesian's account of why perceptual beliefs are justified, the anti-Cartesian opens the door for sometimes disagreeing with the Cartesian about whether your perceptual beliefs are justified in particular cases. In particular, it allows the anti-Cartesian to hold that there are cases where having a reddish experience can give you substantially stronger justification for a perceptual belief than you would get from merely knowing that Other has had such an experience. In other words, it allows the anti-Cartesian to deny PERCEPTUAL IMPARTIALITY, and accept the contradictory claim that:

8 See, e.g., (Lasonen-Aarnio, 2015) for a corresponding issue concerning belief.


(PERCEPTUAL PARTIALITY) Having a perceptual experience can sometimes give you substantially stronger justification for a perceptual belief than you would get from mere knowledge that another person has had such an experience.

We have already seen that the Cartesian must deny a thesis like this, on pain of licensing an irrational form of chauvinism as justified. But perhaps it might seem that any view accepting PERCEPTUAL PARTIALITY, including an anti-Cartesian one, must be guilty of the same kind of chauvinism. If I believe what I seem to see, but am less willing to believe what I know that you seem to see, aren't I in an important sense counting my own experience as stronger evidence than yours?

I think this is not so obvious. Consider first what an externalist anti-Cartesian (like a disjunctivist) might say to deflect the charge of chauvinism. She might say that what you are justified in believing depends on the available evidence, which includes all of the facts that you know. When you know that some other person has a reddish experience, then the fact that he has this experience is included in your available evidence. And perhaps when you yourself see a red wall, you might also know that you have a reddish experience, and hence have this fact included in your available evidence. It indeed would be irrationally chauvinistic to count this bit of the available evidence as stronger evidence of the truth than the fact that another person has a similar experience. But that kind of chauvinism need not be involved in believing what you see. For when you see that the wall is red, the fact that the wall is red also is included in your available evidence. And there is nothing chauvinistic about taking this fact to better support that the wall is red than any facts about an agent's experiences do.

I think the internalist anti-Cartesian has a similar way of deflecting the charge of chauvinism. For she might say that what you are justified in believing depends upon the apparent evidence, which includes not only the facts that you know, but also the other things that appear to be facts from your point of view. When you know that another person seems to see a red wall, this fact about his experience is included in your apparent evidence. But when you yourself seem to see a red wall, your apparent evidence includes the apparent fact that the wall is red. Again, there is nothing chauvinistic about counting this apparent fact as stronger evidence that the wall is red than apparent facts that merely concern a given person's experiences.9

Of course, it might still be worried that these sorts of moves do not really avoid commitment to an implausible chauvinism, at least for reflective perceivers like us. For we are in a position to reflect on our own experiences, and recognize that they are as capable of error as anyone else's. And even if we do not infer conclusions about the world from premises about those experiences, as the Cartesian alleges, there presumably is some further sense in which we only hold the perceptual beliefs we do as a result of our experiences.

9 For related thoughts on the epistemology of intuitions, see (Wedgwood, 2007, Chs. 10 and 11).


Since we are in a position to appreciate all of this on reflection, the worry goes, we are still guilty of irrational chauvinism if we nevertheless place more confidence in our ordinary perceptual beliefs than we would in the conclusions we might infer from another person's experiences.

I have a great deal of sympathy for this worry. Indeed, in Section 4 I will press a problem for anti-Cartesianism that I see as closely related to it. All I claim here is that it is not just obvious that an anti-Cartesian who accepts PERCEPTUAL PARTIALITY must license the kind of flagrant chauvinism that a Cartesian view would need to. Perhaps in the end the anti-Cartesian who accepts PERCEPTUAL PARTIALITY is committed to licensing irrational chauvinism. But it takes work to show it.

3. Motivations for Anti-Cartesianism

We have just seen that the Cartesian cannot plausibly accept PERCEPTUAL PARTIALITY, but that it might be open to the anti-Cartesian to do so. In Section 4 below, I will press a problem for anti-Cartesianism that stems from PERCEPTUAL PARTIALITY. But before doing so, I need to say more to strengthen the association between anti-Cartesianism and PERCEPTUAL PARTIALITY. For even though PERCEPTUAL PARTIALITY seems open to the anti-Cartesian, this does not mean that the anti-Cartesian must accept PERCEPTUAL PARTIALITY. For one thing, anti-Cartesianism as I have defined it is compatible with the unusual view that having perceptual experiences always gives you weaker justification than you would get from merely knowing about another person's perceptual experiences. More importantly, an anti-Cartesian might hold that while having a perceptual experience gives you a different kind of justification than you would get from merely knowing of another's experiences, you always get the same overall strength of justification from both sources.

To strengthen the connection between anti-Cartesianism and PERCEPTUAL PARTIALITY, we will consider two core motivations for rejecting Cartesianism. These motivations, if they succeed at all, succeed in motivating a form of anti-Cartesianism that accepts PERCEPTUAL PARTIALITY. The upshot is that an anti-Cartesian who appeals to these motivations must accept PERCEPTUAL PARTIALITY.

3.1. First Motivation: Anti-Skeptical Advantages

Recall PERCEPTUAL INCREDULISM, the claim that one cannot be justified in believing what one perceives without independent evidence that one's perceptual experiences are reliable. One familiar motivation for anti-Cartesianism holds that the Cartesian is committed to PERCEPTUAL INCREDULISM, and that this in turn leads to skepticism. For this reason, the motivation goes, anti-Cartesianism enjoys an anti-skeptical advantage over Cartesianism. Here I will briefly survey why anti-Cartesianism might enjoy this advantage.


In doing so, my purpose is not to develop a novel motivation for anti-Cartesianism, or to settle whether it ultimately succeeds. Instead, I aim merely to highlight that this familiar motivation, if it succeeds at all, succeeds at motivating a form of anti-Cartesianism that embraces PERCEPTUAL PARTIALITY.

First consider why it is difficult to avoid perceptual skepticism without rejecting PERCEPTUAL INCREDULISM. If one's ordinary perceptual beliefs are to be justified, then under PERCEPTUAL INCREDULISM one must have independent evidence supporting that one's perceptual experiences are reliable. The relevant notion of 'independence' here can be slippery. But the idea is that one's evidence supporting the reliability of perception cannot itself derive from perception, on pain of vicious circularity.10 This raises a problem, because it is hard to see how we might have perception-independent evidence concerning the deeply contingent matter of whether our perceptual experiences are reliable. And for this reason, it is natural to think that our ordinary perceptual beliefs can be justified only if PERCEPTUAL INCREDULISM is false.

This sketch of why PERCEPTUAL INCREDULISM might lead to skepticism is open to question. But it at least has strong prima facie plausibility. The anti-skeptical motivation for anti-Cartesianism assumes that it is correct, and goes on to allege that the Cartesian is committed to PERCEPTUAL INCREDULISM.

To see why the Cartesian, who treats perceptual justification on the model of a Cartesian theater, is arguably committed to PERCEPTUAL INCREDULISM, consider again a simple case in which you find yourself in a Cartesian theater, observing a red image on your TV screen. As we noted in Section 1 above, it is very appealing to accept the following thesis about such a case:

(TV IMAGE INCREDULISM) One cannot be justified in inferring conclusions about the world from premises about TV images unless one has independent evidence that the TV images are reliable.

The problem for Cartesians is that they treat ordinary perception on the model of a Cartesian theater. So if they accept the appealing thesis of TV IMAGE INCREDULISM, they have trouble rejecting PERCEPTUAL INCREDULISM. For if inferring from premises about TV images to conclusions about the external world requires independent evidence that the TV images are reliable, then plausibly the same should go for corresponding inferences from premises about one's perceptual experiences.11 Otherwise, one could count one's experiences as stronger evidence than a TV's images, even when one lacks evidence supporting that the experiences are more reliable than the images. And this seems to amount to an irrational kind of chauvinism.

The Cartesian might resist this apparent commitment to PERCEPTUAL INCREDULISM by denying TV IMAGE INCREDULISM. For example, a reliabilist might say that one can be justified in inferring from premises about the TV images to conclusions about the world so long as the images are objectively reliable, and so long as one has no defeaters.

10 See, e.g., (Barnett, 2014) for further discussion.
11 But see Foley (2001 and 2005), who seems to deny a corresponding view concerning beliefs rather than experiences.


A reliabilist view like this could still qualify as 'Cartesian' in my sense, and yet face no obvious skeptical problems. Traditional Cartesians, however, reject reliabilism and other views that might deny TV IMAGE INCREDULISM, as I argue elsewhere we all should.12 The anti-skeptical advantage claimed by anti-Cartesians is an advantage over these traditional Cartesians.

The only other way to resist PERCEPTUAL INCREDULISM is by distinguishing in some way between ordinary perception and a Cartesian theater, in order to accept incredulism about the latter but not the former. And this is where the Cartesian faces problems that the anti-Cartesian does not. For the Cartesian account holds that perception puts one in the position of having to infer conclusions about the world from known premises about one's experiences, just as in a Cartesian theater one might infer conclusions about the world from known premises about the images on a TV. If that is accepted, then it is hard to see how further differences between the cases could be of assistance.

We already have seen that the Cartesian has a difficult time accepting PERCEPTUAL PARTIALITY without licensing obvious chauvinism. And this means that if she rejects PERCEPTUAL INCREDULISM, then she is committed to rejecting a corresponding claim concerning a case where one must infer conclusions about the world from premises about another person's experiences. That is, she is committed to rejecting:

(OTHER PEOPLE'S EXPERIENCES INCREDULISM) One cannot be justified in inferring conclusions about the world from premises about another person's experiences unless one has independent evidence that those experiences are reliable.

The Cartesian would thus need to reject OTHER PEOPLE'S EXPERIENCES INCREDULISM while accepting TV IMAGE INCREDULISM. But this would seem to simply commit her to a different kind of objectionable chauvinism. For if one has no evidence suggesting that an agent's experiences are more reliable than a TV's images, it would seem illegitimately chauvinistic to nevertheless count the experiences as stronger evidence than the images.

The Cartesian thus faces a prima facie skeptical problem. It is plausible that we must reject PERCEPTUAL INCREDULISM if we are to avoid skepticism. But the Cartesian has difficulty doing so. For it is plausible that a spectator in a Cartesian theater cannot justifiably infer conclusions about the world unless he has independent evidence that the images on his TV are reliable. And since the Cartesian treats ordinary perception on the model of a Cartesian theater, she has difficulty accepting this incredulist claim about Cartesian theaters without also accepting a corresponding incredulism about perception.

This is where the anti-Cartesian can plausibly claim an anti-skeptical advantage over the Cartesian. For the anti-Cartesian more plausibly can distinguish between ordinary perception and a Cartesian theater, accepting an incredulist view about the latter but not the former.

12 See especially (Barnett, 2014).


This is because the anti-Cartesian denies that ordinary perception requires us to make inferences from premises about our own perceptual experiences. For this reason, the anti-Cartesian plausibly can avoid commitment to PERCEPTUAL INCREDULISM, even while granting that independent evidence for the reliability of the TV images would be required by a spectator in a Cartesian theater. Indeed, the anti-Cartesian can grant that we would need independent evidence for the reliability of perception if we were in the position of having to infer our ordinary perceptual beliefs from premises about our experiences. For again, the anti-Cartesian denies that ordinary perception puts us in the position of having to make such inferences.

This anti-skeptical motivation for anti-Cartesianism is by no means beyond question. Perhaps the Cartesian could find a way to avoid PERCEPTUAL INCREDULISM after all, or else a way of accepting it without falling into skepticism. Alternatively, perhaps it could be claimed that the anti-Cartesian cannot ultimately succeed in resisting skepticism, either. I do not hope to settle these matters here. Instead, I want to emphasize only that if the anti-Cartesian does enjoy this anti-skeptical advantage over the Cartesian, this is only because the anti-Cartesian is better positioned to accept PERCEPTUAL PARTIALITY. For the anti-Cartesian is in no better position than the Cartesian to distinguish between inferences from premises about TV images and inferences from premises about an agent's experiences. And so the anti-Cartesian is in no better position to accept TV IMAGE INCREDULISM and reject OTHER PEOPLE'S EXPERIENCES INCREDULISM. Rather, the anti-Cartesian's advantage comes in distinguishing ordinary perception from a case where one infers conclusions about the world from premises about an agent's experiences. This arguably could enable the anti-Cartesian, unlike the Cartesian, to accept OTHER PEOPLE'S EXPERIENCES INCREDULISM while rejecting PERCEPTUAL INCREDULISM. In doing so, the anti-Cartesian must hold that one sometimes can be justified in one's perceptual beliefs even though one would not be justified in inferring conclusions about the world from premises about another agent's experiences. And this means accepting PERCEPTUAL PARTIALITY.

3.2. Second Motivation: We can't be Cartesians "All the Way Down"

The second motivation for anti-Cartesianism appeals to the following observation: We cannot be Cartesians "all the way down". Specifically, we cannot plausibly extend Cartesianism from perceptual experience to other states like belief and knowledge.13 For it is not plausible that the justification of your beliefs depends on your own beliefs and knowledge only in the way that it can depend on another person's.

13 See also (Brown, 2013) and (Wedgwood, 2007, Chs. 10 and 11) for anti-Cartesianism about intuitions, and Pollock and Cruz (1999) for anti-Cartesianism about memory experiences. And see (Robson, 2012) for a review of loosely related issues concerning aesthetic experience, including Wollheim's (1980, p. 233) Acquaintance Principle.


depend on another person's. After explaining why, I will discuss how this might support anti-Cartesianism about perception.

Consider first Cartesianism about knowledge itself.14 The Cartesian about knowledge takes what the perceptual Cartesian said about perceptual experiences, and extends it to knowledge. The perceptual Cartesian denied that perceptual experiences themselves can directly affect what one is justified in believing. Whenever experiences are involved in the justification of a belief, it is only indirectly, in virtue of one's knowledge of the experiences giving one inferential justification for the belief. If we try to extend these claims to knowledge itself, however, we arrive at a contradiction.

It is clear that knowledge can at least be indirectly involved in the justification of one's beliefs. For example, suppose you have background knowledge that a picnic will be cancelled if it rains, and you gain new knowledge that it will rain. Surely this can at least result indirectly in your being justified in believing that the picnic will be cancelled. Yet by stipulation, the Cartesian about knowledge would deny that your knowledge that it will rain is what gives you justification to believe that the picnic will be cancelled. Instead, she says that what gives you justification must be your higher-order knowledge that you know this. But at the same time, the Cartesian about knowledge also denies that your higher-order knowledge can give you justification to believe the picnic will be cancelled, for the same reason. Thus Cartesianism about knowledge itself is inconsistent.

This quick refutation applies only to a general Cartesianism about knowledge, which by stipulation denies that any knowledge, even higher-order knowledge, can directly give one justification for beliefs. It leaves open more restricted Cartesian views about belief and knowledge. But the quick refutation is enough to show that we cannot be Cartesians "all the way down". For some mental states, just being in those states affects what one is justified in believing. Their epistemic significance is not always exhausted by the inferential justification provided by knowledge that one is in them. Even a Cartesian about other states must admit that some states of higher-order knowledge are states of this kind. So the issue is not whether mental states can affect one's justification in this way, but instead simply which ones do.

Now there are a number of possible views that are anti-Cartesian about higher-order knowledge, but that adopt Cartesianism about other states of knowledge and belief. While I think all these views face similar problems, I will focus on a particular view that is inspired by recent debates over peer disagreement and higher-order evidence. The view, which I call 'Cartesianism about belief', holds that neither first-order beliefs nor first-order knowledge directly give one justification for further beliefs. Instead, when you have a belief that is supported by your first-order evidence, this can affect what justification you

14 The view is so called with apologies to Descartes, whom I interpret as anti-Cartesian about clear and distinct perceptions that he took to be necessary and sufficient for knowledge (Barnett, MS).


have for further beliefs only in virtue of what you can justifiably infer from your knowledge that you have the belief.15

Like Cartesianism about perception, Cartesianism about belief is a view about why particular mental states affect your justification for further beliefs. But unlike Cartesianism about perception, any explanation it could offer faces an immediate problem. For consider what it says about a simple case where you know that a picnic will be cancelled if it rains, and then come justifiably to believe that it will rain. Surely this too can at least result indirectly in your having justification to believe that the picnic will be cancelled. But the Cartesian denies that justifiably believing that it will rain can be what directly gives you justification to believe that the picnic is cancelled. Instead, it is your knowledge that you believe it will rain that must give you inferential justification supporting that the picnic will be cancelled. But why would the fact that an agent believes it will rain inferentially support that a picnic will be cancelled? The obvious answer is that it does so by supporting that it will rain. But you already justifiably believe that it will rain, and the Cartesian denied that this can be what gives you justification to believe that the picnic will be cancelled! The Cartesian about belief thus has no apparent way of explaining how knowledge of your belief that it will rain could give you inferential justification to believe that the picnic will be cancelled.

Also like Cartesianism about perception, Cartesianism about belief lends itself to further views about whether you are justified in particular cases. In particular, I think the Cartesian must accept a principle similar to PERCEPTUAL IMPARTIALITY. In this respect, Cartesianism about belief and perception are closely aligned. So I will set aside the preceding objection, and focus on others with more immediate relevance to Cartesianism about perception. Consider an example:

(TWO BELIEVERS) You and Other both know that the picnic will be cancelled if it rains. Right now, you both suspend judgment on whether it will rain. But pretty soon, one of you will learn first-order meteorological evidence that will lead you to settle on a belief. You have strong but misleading higher-order evidence that you are highly unreliable at evaluating meteorological evidence, and you have equivalent higher-order evidence concerning Other. (For example, you might have misleading evidence supporting that you are both impaired by drowsiness, hypoxia, or a drug.) As a matter of fact, whoever comes to hold a belief will believe that it will rain. And despite this misleading higher-order evidence, you are in fact highly reliable, and the meteorological evidence supports this belief.

15 Notice that on this Cartesian view, the higher-order knowledge that does the justifying is knowledge that you have the belief, and not knowledge whether you hold the belief justifiably, or whether it is supported by your first-order evidence. I think this fits well with plausible views about introspection—we plausibly can achieve higher-order knowledge of a belief directly via introspection, but not knowledge of whether it is justified. But in any case, some of my objections will apply to Cartesian views that incorporate these other kinds of higher-order knowledge.


First consider what we should say if Other is the one who goes on to believe that it will rain. It should be uncontroversial that even if you know that Other holds this belief, this knowledge will have little effect on your justification to believe that the picnic will be cancelled. For unlike the more schematic TWO SPECTATORS and TWO PERCEIVERS, in TWO BELIEVERS it is stipulated that your higher-order evidence is unfavorable. So even if Other himself has strong meteorological evidence supporting his belief that it will rain, your evidence, including what you know about Other, will not support that it will rain. And for this reason, you cannot plausibly be justified in believing either that it will rain or that the picnic will be cancelled.

Now consider what the Cartesian must say if you are the one who believes that it will rain. The Cartesian says that just as with Other, this puts you in the position of having to infer whether the picnic will be cancelled from the evidence that you believe it will rain. We have said that you would not be justified in doing so when your evidence is instead that Other holds this belief. So it would seem irrationally chauvinistic for you to do so when you are the one who holds it. For this would mean treating your own belief as stronger evidence of the truth, even though your evidence about your own reliability and Other's is equivalent.

In short, the Cartesian about belief must accept an impartiality principle similar to PERCEPTUAL IMPARTIALITY. For reasons that will emerge shortly, it is worth considering a principle that, while no less plausibly a commitment of Cartesianism, is stronger than what we considered for perception:

(STRONG DOXASTIC IMPARTIALITY) Holding a belief can never have substantially stronger effects on your justification for other beliefs than would result from knowing that another person holds that belief.

The Cartesian should accept STRONG DOXASTIC IMPARTIALITY because she says that regardless of who holds a belief, its epistemic significance is exhausted by what can be inferred from the fact that the person holds it. So if the Cartesian allowed holding the belief yourself to have some further epistemic effect that knowing of Other's belief does not have, then this would amount to licensing irrational chauvinism.

The Cartesian therefore must claim that, if you are the one who believes that it will rain, your holding this belief will have little effect on your justification to believe that the picnic will be cancelled. She must claim this because that is what we all should say when it is Other who holds this belief. But this claim has potentially objectionable consequences.

The first is this: The Cartesian must say that your belief that it will rain is itself unjustified, despite its being supported by your meteorological evidence. For consider the plausible principle that for arbitrary p and q,

(MODUS PONENS CLOSURE) If you justifiably believe that p and know that if p then q, then you have justification to believe that q.


If you justifiably believed that it will rain, then by MODUS PONENS CLOSURE you would have justification to believe that the picnic will be cancelled. But you would not have justification to believe that the picnic will be cancelled merely from knowing that Other believes it will rain. So by accepting STRONG DOXASTIC IMPARTIALITY, the Cartesian is committed to saying that your belief that it will rain is unjustified.

Indeed, Cartesianism about belief seems committed to a strong form of conciliationism, a view with recent prominence in discussions of peer disagreement and higher-order evidence.16 Although conciliationist views can vary, the rough idea is that once you arrive at an initial first-order belief in response to your first-order evidence, the all-things-considered justified doxastic attitude is the one best supported by your higher-order evidence about yourself, including the fact that you initially arrived at whatever belief you did.

To see why the Cartesian must accept this strong form of conciliationism, suppose to the contrary that having strong first-order evidence that it will rain could give you stronger justification to believe that it will rain than you would get from the higher-order evidence alone. This would mean that it gives you stronger justification than you would get from merely knowing that Other believes that it will rain, since concerning Other this higher-order evidence is your total evidence. And this stronger justification to believe that it will rain should also result in stronger justification to believe that the picnic will be cancelled. Thus we must accept conciliationism, or else accept the following:

(DOXASTIC PARTIALITY) Holding a belief can sometimes give you substantially stronger justification for other beliefs than you would get from mere knowledge that another person holds that belief.

Yet DOXASTIC PARTIALITY contradicts STRONG DOXASTIC IMPARTIALITY, which the Cartesian must accept. The Cartesian thus has stronger conciliationist commitments than many philosophers are willing to accept.

Of course, while conciliationism remains controversial, many accept or at least defend it. But I think even these philosophers should reject a second objectionable commitment of Cartesianism, which goes beyond anything conciliationists endorse. For by STRONG DOXASTIC IMPARTIALITY, the Cartesian must claim that your belief that it will rain does not in any way affect your justification concerning whether the picnic will be cancelled. So let us grant for the sake of argument the conciliationist claim that your belief that it will rain is unjustified. As we will now see, the Cartesian's claim is objectionable even if so.

Start with the common observation that agents with logically incoherent doxastic attitudes are guilty of irrationality, at least barring exceptional

16 For sympathetic discussions, see, e.g., (Elga, 2007), (Christensen, 2010), (Sliwa and Horowitz, 2015), and (Vavova, 2014); and for critical discussions see, e.g., (Kelly, 2010), (Lasonen-Aarnio, 2014), (Schoenfield, 2015), and (Weatherson, MS).


circumstances.17 For example, consider an agent who believes that p, and who knows that if p then q, but who nevertheless believes that not-q. This agent manifests irrationality by holding these inconsistent beliefs. Consider also an agent who believes that p, and who knows that if p then q, but who nevertheless suspends judgment on whether q. This agent, too, manifests irrationality in suspending judgment on an obvious consequence of other things she believes.18

Similarly, if one believes that it will rain in TWO BELIEVERS, then one manifests irrationality by suspending judgment or disbelieving that the picnic will be cancelled. For this reason, these doxastic attitudes could not be justified, so long as you believe that it will rain. That is, so long as you believe it will rain, you could not justifiably disbelieve that the picnic will be cancelled. Nor could you justifiably suspend judgment on whether the picnic will be cancelled. To be sure, if conciliationists are right that your belief that it will rain is itself unjustified, then you furthermore could not justifiably believe that the picnic will be cancelled! But that does not mean that other doxastic attitudes would be justified. Instead, it means that you are in a low-grade epistemic tragedy, in which no possible doxastic attitude is justified, so long as you retain your belief that it will rain. (I call it 'low-grade' because you have a way out: giving up your unjustified belief that it will rain.)

In making these claims about the epistemic effects of even unjustified beliefs, I commit myself to a principle like the following:

(NARROW-SCOPE MODUS PONENS) If you believe that p and know that if p then q, then you are rationally required to believe that q.

Here I understand the notion of rational requirement in the following semi-stipulative way: You are rationally required to believe that q iff you cannot justifiably hold any doxastic attitude to q other than belief. (If there can be at least low-grade epistemic tragedies, being rationally required to hold a belief is not sufficient for having justification for it. Whether rational requirement is necessary for justification is a controversial matter, closely related to whether rationality is permissive.19)

There are two differences between NARROW-SCOPE MODUS PONENS and the more familiar MODUS PONENS CLOSURE. First, NARROW-SCOPE MODUS PONENS is missing a deontic operator in the antecedent. Satisfying the antecedent requires believing that p, but not justifiably believing that p. Second, the condition specified in the consequent has been (arguably) weakened. It would be implausible to hold that merely believing that p (while knowing if p then q)

17 See, e.g., (Lasonen-Aarnio, 2008) and (Schechter, 2013) for recent discussion of circumstances where coherence requirements allegedly fail. I think these and other problems critics have raised for coherence requirements are orthogonal to the present discussion.
18 Here I take suspended judgment to be a positive doxastic attitude. One does not qualify as suspending judgment if one has never settled on an attitude, for example if one has never considered the matter.
19 See, e.g., (White, 2005) and (Schoenfield, 2014).


is sufficient for having justification to believe that q. For that would mean that an agent who believes he is Napoleon (and who has some basic historical knowledge) has justification to believe that he was defeated at Waterloo. Instead, NARROW-SCOPE MODUS PONENS says only that one cannot justifiably hold a doxastic attitude to q other than belief. So it means, for example, that one who believes he is Napoleon, and who knows that if so he was defeated at Waterloo, cannot justifiably believe that he was not defeated at Waterloo.

The Cartesian about belief must deny that unjustified beliefs can have these downstream epistemic effects. We have already observed that when Other is the one who believes that it will rain, knowing of Other's belief would not give you justification to believe that the picnic will be cancelled. For by stipulation, your evidence supports that Other is unreliable. But similarly, knowing of Other's belief would not prevent you from justifiably holding other doxastic attitudes to whether the picnic will be cancelled, such as suspended judgment. To be sure, these considerations do not get us all the way to DOXASTIC PARTIALITY, which speaks to justification rather than rational requirement. But they do get us to the contradictory of STRONG DOXASTIC IMPARTIALITY, namely:

(WEAK DOXASTIC PARTIALITY) Holding a belief can sometimes have substantially stronger effects on your justification for other beliefs than would result from knowing that another person holds that belief.

The upshot is that the Cartesian, who is committed to STRONG DOXASTIC IMPARTIALITY, must deny that believing it will rain prevents you from justifiably suspending judgment or disbelieving that the picnic will be cancelled.

Now it might seem that in denying this, the Cartesian is in good company. For the epistemic significance of unjustified belief has been the subject of considerable controversy in recent debates between wide- and narrow-scopers about rational requirements.20 The wide-scopers claim that it is a mistake to think of rationality as requiring particular attitudes in the way NARROW-SCOPE MODUS PONENS alleges. Instead, they say, what rationality requires is general coherence among our attitudes. These philosophers opt for wide-scope requirements like:

(WIDE-SCOPE MODUS PONENS) Rationality requires that (if you believe that p and know that if p then q, then you believe that q).

Now I admit that WIDE-SCOPE MODUS PONENS might not have the straightforward implications for justification that NARROW-SCOPE MODUS PONENS does. Justification is an evaluative notion typically applied to particular attitudes, and it is perhaps up for grabs what implications wide-scope requirements should have for the evaluation of particular attitudes. Presumably we should at least say that if a set of attitudes jointly violate a wide-scope requirement, then at least one of them is unjustified. But the Cartesian can allow this. For she holds that your belief that it will rain is unjustified, independently of whether you violate any wide-scope requirements with your attitude to whether the picnic will be cancelled.

20 See, e.g., (Broome, 1999 and 2013), (Kolodny, 2005), (Lord, 2014), and (Worsnip, 2015).
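To keep the three principles in view, it may help to set them out schematically (the notation is mine, not the author's). Writing Bp for 'you believe that p', Kp for 'you know that p', Jp for 'you have justification to believe that p', JBp for 'you justifiably believe that p', and O for 'rationality requires that':

    (MODUS PONENS CLOSURE)        (JBp & K(if p then q)) → Jq
    (NARROW-SCOPE MODUS PONENS)   (Bp & K(if p then q)) → O(Bq)
    (WIDE-SCOPE MODUS PONENS)     O((Bp & K(if p then q)) → Bq)

The two differences noted above are visible here: the narrow-scope antecedent drops the requirement of justified belief, and the wide-scope principle moves the requirement operator outside the whole conditional, so that it can be satisfied either by believing q or by giving up one of the antecedent attitudes.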


Yet it is at least plausible, even if we opt for wide-scoping, that the violation of a rational requirement can impugn the justification of otherwise justified beliefs. In particular, if you believe that it will rain in TWO BELIEVERS, then plausibly you cannot justifiably believe that the picnic will not be cancelled, even if you otherwise have good reason to believe this. And it is furthermore plausible that you could not justifiably suspend judgment about whether it will be cancelled. These claims have an independent intuitive plausibility, even if we opt for a view about rational requirements that does not straightforwardly entail them.

It is time to take stock. Cartesianism about belief is in bad shape. And if we adopt anti-Cartesianism about belief, that opens the door for anti-Cartesianism about perceptual experience as well. To be sure, anti-Cartesianism about belief does not straightforwardly entail anti-Cartesianism about perceptual experience. For it could be that beliefs and perceptual experiences simply differ epistemically. Coherentists, for example, often stress what they see as fundamental differences between belief and experience in supporting their claim that the only thing that could justify a belief is another belief. But recent anti-Cartesians reply by stressing what they see as important similarities. Like belief, these anti-Cartesians claim, perceptual experiences have representational content. And also like belief, but unlike other states with representational content like desires, perceptual experiences in some sense present their representational content as being true.21 For this reason, it is plausible that perceptual experiences, like beliefs, partially determine what one's apparent evidence is. I take this to strengthen attempts to deflect the charge of chauvinism that we considered at the end of Section 2 above.

These are difficult matters, and I will not try to adjudicate them here. Instead, my aim is simply to note that to the extent that we are attracted to anti-Cartesianism about experience because of these apparent similarities between perceptual experiences and beliefs, we will have reason to accept PERCEPTUAL PARTIALITY. This is because our main motivations for rejecting Cartesianism about belief appealed to corresponding partiality principles, DOXASTIC PARTIALITY and WEAK DOXASTIC PARTIALITY.

Now it is true that WEAK DOXASTIC PARTIALITY differs from PERCEPTUAL PARTIALITY. For WEAK DOXASTIC PARTIALITY says merely that your beliefs can affect your justification in ways that another person's do not, and not that your beliefs do so specifically by giving you justification for beliefs that another person's do not. But the reason why narrow-scoping supported only this weaker partiality principle for belief is the distinction between rational requirement and justification. The distinction is important in the case of belief, because the unjustified belief that it will rain can

21 For helpful discussion of these issues, see (Pryor, 2005).


rationally require you to believe that the picnic will be cancelled even though it does not justify you in doing so. But I think that it is unlikely that the distinction between justifying and rationally requiring a belief will be of similar importance when it comes to experience. Since experiences cannot be unjustified in the first place, it is not clear that an experience could rationally require a perceptual belief without thereby justifying it.22 And so an anti-Cartesianism about experience, if modeled on an anti-Cartesianism about belief that accepts either DOXASTIC PARTIALITY or WEAK DOXASTIC PARTIALITY, should accept PERCEPTUAL PARTIALITY.

4. A Problem for Anti-Cartesianism

Let's consider the big picture. According to the traditional Cartesian epistemology of perception, one's perceptual beliefs are justified if at all by inferences from known premises about one's perceptual experiences. But anti-Cartesians say that perceptual states themselves can provide one with perceptual justification to a degree that cannot be accounted for in this way. This allows anti-Cartesians to accept PERCEPTUAL PARTIALITY without obvious chauvinism, unlike Cartesians. And indeed, the core motivations for anti-Cartesianism also support PERCEPTUAL PARTIALITY, meaning that the anti-Cartesian cannot avail herself of these motivations without also accepting PERCEPTUAL PARTIALITY.

What remains to be seen is whether there is some subtler way in which accepting PERCEPTUAL PARTIALITY ultimately commits the anti-Cartesian to licensing objectionable chauvinism or some other sort of irrationality. And it is here that I think the anti-Cartesian faces a problem. So although in my view the motivations for rejecting Cartesianism are powerful, there also is a powerful objection to any form of anti-Cartesianism that can avail itself of these motivations by accepting PERCEPTUAL PARTIALITY. The Cartesian analogy between ordinary perception and a Cartesian theater, despite its many difficulties, is not easily dispensed with.

The problem for anti-Cartesianism arises from the fact that just as you can know about another person's experiences, you also can know about the experiences that you had in the past, or that you will have in the future. Consider an example:

(ANTICIPATED EXPERIENCE) Shortly before noon, you are wearing a blindfold and facing a wall. You know that at noon, the blindfold will be removed, and that when it is, you will have an experience as of a red wall.

22 Those with worries about the cognitive penetrability of perception—e.g., (Siegel, 2012 and 2013)—might think it is possible for perceptual experiences to be epistemically defective in some broader way that makes them unsuitable to justify a perceptual belief even when they require it. But even if this is granted, I think it will not substantially affect the main thread of our discussion. The problem for anti-Cartesianism that I present in Section 4 could arguably be recast in terms of requirements rather than justifications, or even simply be restricted to cases in which it is stipulated that the perceptual states in question are not epistemically defective.


What should the anti-Cartesian say about the epistemic significance of knowing that you will have a reddish experience at noon? Is it epistemically akin to knowing that another person has a reddish experience? Or is it more like actually having the experience now? For a more precise formulation of this question, let's call the justification that the anti-Cartesian says that you get from actually having an experience your proprietary justification. (That is, the proprietary justification is whatever you get in excess of what you would get from knowing that another person had such an experience.) The question at hand is whether knowing about your own future experiences gives you proprietary justification now for perceptual beliefs.

There are strong prima facie motivations for thinking that the anti-Cartesian cannot allow knowledge of one's future experiences to give one proprietary justification now. For it would seem that the only way this knowledge could justify you in believing that the wall is red is by allowing you to justifiably infer that the wall is red from the premise that you will have a reddish experience at noon. And it would be illegitimately chauvinistic to infer that the wall is red from this premise unless you also would be willing to infer the same conclusion from the premise that Other will have a reddish experience at noon. Thus the anti-Cartesian cannot say that knowledge of one's own future experience gives one proprietary justification for believing that the wall is red any more than the Cartesian was able to say this for knowledge of one's own present experiences.

Now I am not entirely sure that this prima facie motivation should be accepted, as I will explain in Section 4.2. But let us for now suppose that it is, and see the problem that this raises for the anti-Cartesian. The anti-Cartesian accepts PERCEPTUAL PARTIALITY, and thus says that having a reddish experience now can give you proprietary justification for believing that the wall is red—justification that can be stronger than what you would get from merely knowing that another person has or will have a reddish experience. But merely knowing that you will have a reddish experience yourself in a few minutes does not give you that kind of proprietary justification right now. The anti-Cartesian must therefore claim that even when you know in advance that you will have a reddish experience at noon, when noon arrives and you actually have that reddish experience, this will give you stronger justification than you had to begin with.

The problem for the anti-Cartesian is that this final claim licenses a way of updating one's beliefs and credences that seems to manifest diachronic irrationality. For suppose that before noon you withhold belief that the wall is red, even though you know that at noon the wall will look red to you. It seems irrational for you suddenly to be convinced that the wall is red once your blindfold is removed and you seem to see a red wall. You knew that this would happen! More generally, it seems irrational for you to substantially increase your confidence at noon that the wall is red, merely because you are having the very experience that you knew in advance you would have. But if the anti-Cartesian were right that your degree of justification for believing the wall is red increases substantially at noon, then it would be rational for you to substantially increase your confidence that the wall is red. If it isn't, the anti-Cartesian must be wrong.
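One way to make this diachronic worry precise is in Bayesian terms (the gloss is mine; the text itself stays informal). Let E be the proposition that you will have a reddish experience at noon. If before noon you are certain of E, then updating by conditionalization at noon cannot raise your credence that the wall is red:

    Pr_noon(red) = Pr(red|E) = Pr(red & E)/Pr(E) = Pr(red), when Pr(E) = 1.

So on the standard updating rule, any substantial boost in confidence at noon would have to come from something other than learning what you already knew.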


The anti-Cartesian has a number of possible responses to this apparent problem. But before considering these options, I want to say briefly why this problem for anti-Cartesianism is more general and less avoidable than two more familiar objections to particular versions of anti-Cartesianism.

First consider a prominent Bayesian objection to certain versions of dogmatism, which embrace a Moorean reply to skepticism.23 These Moorean dogmatists hold that one's experiences can give one justification to believe that one is not in a skeptical scenario, even when one's being in the skeptical scenario would entail that one has those very experiences. For example, where BIV is the proposition that one is a brain in a vat who has a non-veridical reddish experience, and RED is the proposition that one has a reddish experience, the Moorean holds that having a reddish experience can justify one in rejecting BIV even though BIV entails RED. The familiar Bayesian objection to this claim appeals to the theorem of the probability calculus that if BIV entails RED, then Pr(~BIV|RED) ≤ Pr(~BIV). The objection says that since one's prior conditional probability for ~BIV given RED can be no higher than one's prior unconditional probability for ~BIV, having a reddish experience cannot give you justification to believe that ~BIV.

This is an important objection to the Moorean reply to skepticism. But the relationship between anti-Cartesianism and the Moorean reply is slippery. Not only might anti-Cartesians reject the Moorean reply,24 Cartesians might embrace it.25 And even if the anti-Cartesian does embrace the Moorean reply, she has available several plausible responses to the Bayesian objection.26 Here I will merely highlight my preferred response, which has been discussed in depth by Luca Moretti (2015). The response holds that just because Pr(~BIV|RED) ≤ Pr(~BIV), that does not mean that having a reddish experience cannot confer justification for believing ~BIV. For it denies that the epistemic significance of having a reddish experience is exhausted by one's conditionalizing on the proposition that one has it. Anti-Cartesians should be prepared to deny this. For conditionalizing on a proposition is a close cousin of making inferences from one's knowledge of that proposition. And the anti-Cartesian's core idea is that the epistemic significance of having an experience is not exhausted by what can be inferred from the proposition that one has it.
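For reference, the theorem the Bayesian objection invokes falls out of the definition of conditional probability in two steps (a standard derivation; the reconstruction is mine). Since BIV entails RED, Pr(BIV & RED) = Pr(BIV), and so, assuming Pr(RED) > 0:

    Pr(~BIV|RED) = 1 − Pr(BIV|RED) = 1 − Pr(BIV)/Pr(RED) ≤ 1 − Pr(BIV) = Pr(~BIV),

where the inequality holds because Pr(RED) ≤ 1, so that dividing by Pr(RED) can only increase Pr(BIV).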

23 See (White, 2006) for a canonical presentation of this objection to Pryor's (2000) dogmatism, as well as other objections.
24 E.g., (Silins, 2008).
25 E.g., (Vogel, 2014).
26 E.g., (Cohen, 2010), (Jehle and Weatherson, 2012), (Kung, 2010), (Moretti, 2015), (Pryor, 2007 and 2013), (Vogel, 2014), and (Weatherson, 2007).


Consider now a second familiar objection. This objection concerns externalist forms of anti-Cartesianism like epistemological disjunctivism. These externalist views claim that the factive state of seeing can give one perceptual knowledge even when merely seeming to see the same thing would fail even to give one justification for a perceptual belief. The prima facie problem for these views is that they have the counterintuitive implication that one who unwittingly hallucinates a red wall is unjustified in believing that the wall before her is red. But externalists have tried to explain away the intuition that such an agent is justified by distinguishing between justification and blamelessness.27 Sure enough, they say, such an agent is blameless in believing that the wall is red. But this kind of blamelessness is not sufficient for justification.

However, the problem that I raise for anti-Cartesianism instead requires only that blamelessness be necessary for justification. This is because it charges anti-Cartesianism with licensing as justified a way of updating one's beliefs that is positively blameworthy. So long as blamelessness is necessary for justification, this updating procedure could not yield justified belief, as the anti-Cartesian is apparently committed to claiming. So even though these externalist forms of anti-Cartesianism are not our main focus, I think the problem I raise is more pressing for them than this familiar problem.

It is time now to consider an anti-Cartesian's options for responding to the problem I have just raised. In Section 4.1, I will consider the prospects for biting the bullet, and accepting that one gains justification at noon to believe that the wall is red. In Sections 4.2 and 4.3, I will consider two ways that the anti-Cartesian might try to avoid the commitment to saying this. Finally, in Section 4.4 I will explain why the anti-Cartesian who wishes to hang on to PERCEPTUAL PARTIALITY has no further options remaining.

4.1. First Option: Biting the Bullet

Can the anti-Cartesian bite the bullet, and accept that at noon you gain additional justification for believing the wall is red? Here is one defense of biting the bullet: Any view about perception must say something unappealing at some point. And what the anti-Cartesian says here, while counterintuitive, is less bad than what alternative views must say elsewhere. In my view, this is the best defense of biting the bullet. I now consider some other defenses, which attempt to make biting the bullet seem more appealing than I have made it out to be. I think these fail.

The first grants that your degree of justification increases at noon, but denies that this licenses an irrational updating procedure. It says: Simply because your degree of justification increases, that does not mean your degree of confidence ought to increase. I admit that there might be different ways of understanding talk of degrees of justification, perhaps some of which divorce the notion from rational degrees of confidence. And perhaps on some of these, it could be plausible that

27 See, e.g., (Littlejohn, 2012) and (Pritchard, 2012). And see (Miracchi, forthcoming) for helpful critical discussion, as well as an alternative externalist proposal.


your degree of justification increases at noon. For example, perhaps you gain a more direct kind of knowledge at noon that the wall is red, even if you rationally had high confidence already. But I think that the anti-Cartesian is in no position to fall back on a notion of degrees of justification that divorces it from rational degrees of confidence. For this would undermine the strongest motivations for anti-Cartesianism. Consider, for example, the anti-skeptical advantage over the Cartesian. The anti-Cartesian could not claim this advantage if our perceptual experiences merely gave us a kind of proprietary justification that affects the strength of our justification for perceptual beliefs, without also permitting us to place more confidence in them. This kind of proprietary justification would do nothing to help us respond to the skeptical challenge that we are irrational for being as confident as we are that we have hands. If we were worried that our perceptual beliefs, despite being rational and true, were merely deficient in some further way, then perhaps PERCEPTUAL PARTIALITY could still be of assistance. But it is unclear what that further worry would be, and it does not seem to be the core skeptical challenge that anti-Cartesians have traditionally taken their view to help us overcome.

Another defense of biting the bullet holds that even if you know in advance that you will have a reddish experience, you can never be quite certain. But when you actually have a reddish experience at noon, then you can be certain. As a result, you can be more confident that the wall is red. I admit that this defense gives a plausible explanation of why your confidence might justifiably increase slightly at noon. Indeed, it is one even Cartesians could accept. For plausibly, even when one knows some evidence e that inferentially supports p, this is compatible with one having some room for doubt about e and thus about p as well. And so it is plausible that increasing one's degree of certainty in e beyond a minimum threshold required for knowledge might increase one's degree of confidence in p at least slightly.

Even so, I think this defense cannot plausibly explain why one's degree of confidence for p could rationally increase more than slightly. If one knows that e, then one has at most a little room for doubt about e. So there will be little room for one's degree of certainty in e to increase, and thus little room for one's confidence in p to increase as a result of an increase in the certainty of e.28 This is important, because the anti-Cartesian proponent of PERCEPTUAL PARTIALITY needs to say that one's confidence that the wall is red can increase substantially at noon. For this too is essential for anti-Cartesianism's strongest motivations.

28 Although I take this point to be intuitively plausible, it can be reinforced by the familiar theorem of the probability calculus that Pr(p) = Pr(p|e)Pr(e) + Pr(p|~e)Pr(~e). Since Pr(p|~e)Pr(~e) ≥ 0, this theorem entails that Pr(p) ≥ Pr(p|e)Pr(e). So when for some δ, Pr(e) = 1 − δ, it follows that Pr(p) ≥ Pr(p|e)(1 − δ), and thus that Pr(p) ≥ Pr(p|e) − Pr(p|e)δ. And since δ ≥ 0 and Pr(p|e) ≤ 1, this means that Pr(p) ≥ Pr(p|e) − δ. Roughly put, the upshot is that Pr(p) can be less than Pr(p|e) only to whatever extent Pr(e) is less than 1.
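The bound in fn. 28 is easy to sanity-check numerically. Here is a minimal sketch in Python (my illustration, not part of the original text): it draws random values for Pr(e), Pr(p|e), and Pr(p|~e), computes Pr(p) by the law of total probability, and confirms that Pr(p) never falls more than δ = 1 − Pr(e) below Pr(p|e).

    import random

    random.seed(0)
    for _ in range(100_000):
        pr_e = random.random()              # Pr(e)
        pr_p_given_e = random.random()      # Pr(p|e)
        pr_p_given_not_e = random.random()  # Pr(p|~e)
        # Law of total probability: Pr(p) = Pr(p|e)Pr(e) + Pr(p|~e)Pr(~e)
        pr_p = pr_p_given_e * pr_e + pr_p_given_not_e * (1 - pr_e)
        delta = 1 - pr_e
        # Fn. 28's bound: Pr(p) >= Pr(p|e) - delta (small tolerance for float error)
        assert pr_p >= pr_p_given_e - delta - 1e-12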


For example, if having an experience offered only a slight increase in the strength of one's justification relative to what one would get from knowing about another person's experience, then the anti-Cartesian would have very limited anti-skeptical advantages over Cartesianism. For whatever the strength of our justification for perceptual beliefs might be under Cartesianism, the anti-Cartesian would be able to offer us only slightly more. That would be reassuring only if the skeptical worry were merely that we fall just slightly short of being justified in our perceptual beliefs! To have a noteworthy anti-skeptical advantage, therefore, the anti-Cartesian must claim that the proprietary justification conferred by one's experiences is substantial. For this reason, I do not think that biting the bullet is made any more palatable by the slight increase in certainty about one's experience that might come when it actually happens.

In an insightful reply, Thomas Raleigh (MS) observes that whether a change in credence looks slight or substantial depends on how we measure. If you are already .99 confident that the wall will look red at noon, it might appear that an increase to, say, .999 confidence will be quite slight. But that appearance is arguably an artifact of the usual measure of credences on the 0 to 1 interval, which is not shared by alternatives like a log-odds scale. And as Raleigh points out, in some respects the log-odds scale gives a better measure of how substantial a change in credence is. This is important, because it means that the increase at noon in your credence that the wall looks red can still be substantial by some reasonable measures, even if your credence was quite high to begin with.

I think Raleigh is right to criticize my loose talk of 'slight' and 'substantial' change in credence. But I still think my main point survives. Even though your increase in confidence at noon that the wall looks red counts as 'substantial' by some measures, what matters is that it is not enough to support the anti-Cartesian response to the skeptic. When the traditional skeptic says we should be 'substantially' less confident in our perceptual beliefs, she does not mean we should reduce our confidence from .999 to .99. She means we should reduce our confidence to some much lower (though perhaps fuzzy) value. Now what is distinctive about the anti-Cartesian's response to skepticism, as discussed in Section 3.1, is that it can grant that if we were in the position of having to infer our ordinary perceptual beliefs from premises about some arbitrary person's experiences, then the skeptic might be right about what our credences should be. The anti-Cartesian claims that the skeptic's mistake is conflating this odd situation with our actual one. This is why the anti-Cartesian needs the proprietary justification you get from having an experience yourself to make the difference between what our actual credences are and what the skeptic says they should be. And since the bullet-biter says that knowing about your own future experiences does not provide you with this proprietary justification, the bullet-biter needs your confidence at noon to increase from the low value the skeptic permits to the high value we typically have (setting aside confounders like prior justification to believe perception is reliable). And this change is what greater certainty at noon that you have a reddish experience is insufficient to explain, for the reasons discussed in fn. 28.
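To make Raleigh's point concrete, here is a quick illustration in Python (a sketch of mine; the specific numbers are illustrative and do not come from Raleigh's reply). On the natural-log odds scale, the move from .99 to .999 is roughly as large a shift as the move from .5 to about .91:

    import math

    def log_odds(p):
        """Natural-log odds of a credence p in (0, 1)."""
        return math.log(p / (1 - p))

    # A seemingly slight change near the top of the 0-to-1 scale...
    print(log_odds(0.999) - log_odds(0.99))  # ~2.31
    # ...matches the shift from .5 to roughly .91 in log-odds terms.
    print(log_odds(0.91) - log_odds(0.5))    # ~2.31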


4.2. Second Option: Licensing Chauvinism

A second option for the anti-Cartesian is to claim that when you know of your own future experience, that gives you stronger justification than you would get from knowing of another person's experience. If this is granted, then in ANTICIPATED EXPERIENCE you might already enjoy this stronger justification before noon, and consequently receive no additional reason to believe that the wall is red at noon.

The apparent problem with this claim is that it seems to license an irrational form of chauvinism. For it seems to license one in counting one's own future experiences as stronger evidence than another person's, even in the absence of evidence that one's experiences are objectively more reliable.

It might be claimed that there is a way to avoid this kind of flagrant chauvinism. One strategy might be to claim that it can be rational to count one's own experiences as stronger evidence of the truth, even in the absence of evidence that one's experiences are objectively more reliable. For example, one might claim in the style of Crispin Wright (2004) that we have non-evidential reasons for believing that our own experiences are a reliable guide to the truth—e.g., because the pursuit of our intellectual projects requires us to accept that our own experiences are reliable, but not that other people's experiences are.29 Another strategy might be to claim that even if it would be irrational to count one's own future experiences as stronger evidence than another person's, this is not the only way for knowledge of one's future experiences to give one proprietary justification for perceptual beliefs. Perhaps if I know that I will have a reddish experience in just a moment, I don't need to infer that the wall is red from this evidence. Maybe it could somehow just make my future proprietary justification available to me now, as if I were already having the experience.30

But even if these or other responses give us plausible theoretical rationales for why knowledge of one's own future experiences can give one proprietary justification right now, they are forced to embrace counterintuitive consequences. Consider:

(TWO PERCEIVERS SEQUEL) All is as before in the TWO PERCEIVERS case. Then, before any blindfolds are removed, you learn some additional information. Without being told whether you or Other will be the agent whose blindfold is removed, you are informed that whoever has their blindfold removed will have an experience as of a red wall. A few minutes pass, and you are given the further information that you will be the one whose blindfold is removed.

When you learn that the agent whose blindfold is removed will have a reddish experience, this might give you some justification to believe that the wall is red. How much justification it gives you can vary, depending on how we fill in

29 Note, however, that the Cartesian might avail himself of this, too, in an attempt to accept PERCEPTUAL PARTIALITY.
30 This strategy is inspired by (Pryor, 2013).


further details of the case. For although we have said that your evidence concerning the reliability of your own experiences is on a par with your evidence concerning Other's, we have left it open precisely what this evidence includes. But it also seems that no matter how we fill in these additional details, you will not get any additional justification to believe that the wall is red once you learn that it is you whose blindfold will be removed.

The anti-Cartesian who takes the second option must say otherwise. But this too licenses an updating procedure that is intuitively hard to accept as rational, in this case on account of its apparent chauvinism. For consider your situation once you know that someone will have a reddish experience, but before you learn that it will be you. If at this stage you withhold belief concerning the color of the wall, it would seem irrational for you then to grant belief once you learn that it will be you who has the reddish experience. More generally, it seems irrationally chauvinistic for you to increase your confidence that the wall is red when you learn it is you who has the reddish experience. For your evidence concerning Other's perceptual reliability and your own is equivalent.

Thus, even if the anti-Cartesian can in this way avoid saying that your justification increases at noon in ANTICIPATED EXPERIENCE, she has a different bullet to bite: licensing chauvinism. To be sure, just as with saying that your justification increases at noon, the problem with licensing chauvinism is not that the anti-Cartesian lacks a theory that can explain why this sort of chauvinism is rational. The problem is that it seems false that this sort of chauvinism is rational. Yet any view that takes knowledge of one's own future experiences to provide proprietary justification is bound to say otherwise.

4.3. Third Option: Rejecting Evidentialism about Defeaters

Recall that PERCEPTUAL PARTIALITY says that having an experience sometimes gives you a distinctive proprietary justification for a perceptual belief. It need not say that it does so all the time. The final response for the anti-Cartesian is to claim that your reddish experience in ANTICIPATED EXPERIENCE does not confer proprietary justification, even though in other cases your experiences can do so.

An anti-Cartesian who takes this option owes us a story about what feature of the ANTICIPATED EXPERIENCE case prevents you from receiving proprietary justification from your experience. This seems tough to do. For the feature would need to be an essential feature of the case, or else the case could simply be modified to remove it. And it would need to be a feature that is plausibly absent in many ordinary cases of perception, or else anti-Cartesianism will lose its anti-skeptical force. It seems that the only feature fitting the bill is that in ANTICIPATED EXPERIENCE you have reflective awareness of your experience. For it is essential to the example that you know that you have a reddish experience at noon. And it is not clear that we ordinarily do know what our experiences are, even though we are able to come to know what they are if we stop to reflect. So the final option for the anti-Cartesian is to claim that this reflective


awareness alone is enough to defeat the distinctive proprietary justification that those experiences otherwise would give you for perceptual beliefs. To be clear, the idea is not that reflective awareness defeats any justification you might have for believing that the wall is red. Given the right background evidence, you can infer from the fact that Other seems to see a red wall that the wall is red, and you presumably can be in an equally good position to make such an inference from knowledge of your own experiences. Rather, the idea is that reflective awareness undermines the distinctive sort of proprietary justification that your experiences usually can provide.

Although I am more sympathetic than most are to this third option for anti-Cartesianism, it faces objections. After considering one objection that does not impress me much, I will consider another objection that is in my view stronger.

The objection that does not impress me appeals to an assumption that nothing you can do "from the armchair" can change your epistemic position. According to this assumption, the only thing that can change your epistemic position is gathering up new evidence using sensory perception. Things you do from the armchair, like reasoning through the consequences of your existing evidence, or reflecting on your existing mental states, can only help you to achieve justified beliefs about things that you already had justification to believe. If you are able to arrive at a justified belief just by reasoning through your existing evidence, then that means that you already had (propositional) justification for that belief, even before you did the reasoning. And the same goes, according to the objection, for another thing you do from the armchair: reflecting on your existing mental states, like your existing experiences. If this is right, then merely reflecting on your current experiences cannot give you a new defeater for your perceptual beliefs, since it cannot change your epistemic position at all.

I am sympathetic to the objection's contention that reasoning through the consequences of your existing evidence cannot change your epistemic position. But I think that reflection (or introspection) often changes your epistemic position—i.e., that it is a way of gathering up new evidence that you did not already have, even though it is a kind of evidence-gathering that can be done from the armchair. An example might help to reinforce the point. Suppose I ask you "How many states have names beginning with the letter 'M'?" If you know the names of all the states, then it seems that there is an important sense in which the answer to this question is already among the things implicitly built into your current evidence. So when you reason through this evidence to a justified belief, it is plausible that you are merely coming to believe something you already were in a position to justifiably believe. But suppose I instead ask you "How many states remind you of your grandmother?" You might be able to determine the answer without getting out of your armchair. But you would have to do so through a process of internal experimentation and observation that plausibly involves acquiring new evidence.31 This does not mean that

31 Thanks to Jim Pryor (2013) for a helpful amendment to this example.


every case of reflection changes your epistemic position in the same way. But it does speak against a general ban on reflection ever changing your epistemic position.

Now for the objection that I think is more serious. Instead of claiming that reflection cannot give you new evidence, this objection says simply that the evidence that you do gain is not a defeater for your perceptual belief. The objection appeals to a widely accepted evidentialist model of defeaters, which holds that defeaters must take the form of evidence that directly or indirectly speaks against the truth of what you believe. More precisely, the idea is that for one's awareness that d to give one a defeater for the belief that p, the fact that d must either oppose one's belief by directly supporting that p is false, or else undermine one's existing evidence for the belief by supporting that that evidence does not really support that p (or something to that general effect).32

If we accept this evidentialist model of defeaters, then we apparently cannot allow reflective awareness that you have a reddish experience to give you a defeater for your belief that the wall is red. For the fact that you have a reddish experience will not in ordinary cases amount to evidence directly supporting that the wall is not red. Nor will it in any obvious way amount to evidence that your existing evidence fails to support that the wall is red. To be sure, there are difficulties in applying the usual model of undermining defeaters, which is on its home turf concerning beliefs which are justified by indirect evidence, to putatively non-inferentially justified perceptual beliefs. But roughly and approximately, it seems the proponent of the evidentialist model should require an undermining defeater for a perceptual belief to take the form of evidence that one's experiences do not provide a good guide to the external world (or something to that general effect). The fact that one has taken a hallucinogenic drug might be an example of such a defeater. But the mere fact that one has a reddish experience typically is not.

Now I have some sympathy for an anti-Cartesianism that denies this evidentialist model of defeaters. But I think that anyone who takes this option owes us a positive explanation of how reflective awareness of one's experiences can defeat one's proprietary justification even without providing one with undermining evidence. Again, I do not take this option to be a non-starter, as many apparently do. But it is a difficult task to offer a satisfying explanation of how it can be accepted.33

32 See, e.g., (Weatherson, MS, Sec. 2.1) for a recent discussion of defeaters along these lines.
33 As Karl Schafer has emphasized to me, an anti-Cartesian who takes this third option furthermore appears committed to a major concession to the skeptic. Since one will have distinctive proprietary justification for one's perceptual beliefs only so long as one is not reflectively aware of one's experiences, one will not be justified in one's perceptual beliefs in contexts where one is reflectively aware of them. I think it is not obvious whether this concedes too much to the skeptic, and I will not attempt to settle the matter here. Instead, I will note only that this concession to skepticism is not too far off from those of prominent contextualist and subject-sensitive invariantist responses to skepticism, which concede in different ways the truth of skepticism in certain kinds of reflective contexts.


4.4. Conclusion

Any kind of anti-Cartesianism worth having will allow us to accept PERCEPTUAL PARTIALITY, for without it the core motivations for anti-Cartesianism are lost. But in accepting PERCEPTUAL PARTIALITY, the anti-Cartesian faces serious objections stemming from one's possible knowledge of one's own future experiences. One option is to simply bite the bullet, and claim that even when one knows of an experience in advance, one gets stronger justification for a perceptual belief once one actually has the experience. Another is to license the seemingly chauvinistic practice of increasing your confidence that the wall is red upon learning that you will have a reddish experience at noon, even when you already knew that either you or Other would have such an experience. The final option is to reject evidentialism about defeaters, and claim that merely being reflectively aware of your experiences must defeat the proprietary justification that they otherwise provide.

Does the anti-Cartesian have any other options? No. For if we reject the third option, then we will say that an unreflective agent who has a reddish experience has no more justification to believe the wall is red than does an agent who has reflective awareness of her reddish experience. And if we reject the first option, then we will say that such a reflective agent, who has a reddish experience and who knows she has had it, has no more justification than an agent who knows that she will have a reddish experience but who has not had it yet. And if we reject the second option, then we will say that such an agent, who knows that she will have a reddish experience but who has not had it yet, has no more justification than does an agent who knows that someone else has had a reddish experience. So if we reject all three options, then it follows that an unreflective agent who has had a reddish experience has no more justification to believe that the wall is red than does an agent who merely knows that someone else has had such an experience—in direct contradiction of PERCEPTUAL PARTIALITY. Since the most appealing and well-motivated forms of anti-Cartesianism accept PERCEPTUAL PARTIALITY, it seems that the rejection of a Cartesian epistemology of perception brings with it some difficult choices.34
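The structure of this closing argument can be put in a compact shorthand (mine, not the author's). Writing J(X) for the strength of one's justification to believe the wall is red in situation X, each option denies one link in the chain

    J(unreflective haver) ≤ J(reflective haver) ≤ J(mere anticipator) ≤ J(knower of another's experience):

the third option denies the first link, the first option denies the second, and the second option denies the third. Rejecting all three options lets the chain run through, yielding J(unreflective haver) ≤ J(knower of another's experience), which is just what PERCEPTUAL PARTIALITY denies.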

References

Barnett, David James (2014) 'What's the Matter With Epistemic Circularity?' Philosophical Studies 171(2): 177–205.
Barnett, David James (2015) 'Is Memory Merely Testimony from One's Former Self?' Philosophical Review 124(3): 353–92.



Barnett, David James (MS) 'Intellectual Autonomy and the Cartesian Circle'.
Briscoe, Robert (2016) 'Depiction, Pictorial Experience, and Vision Science' (ms.).
Brogaard, Berit (2013) 'Phenomenal Seemings and Sensible Dogmatism' in Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism, 270–90, Chris Tucker ed., Oxford: Oxford University Press.
Broome, John (1999) 'Normative Requirements' Ratio 12: 398–419.
Broome, John (2013) Rationality Through Reasoning, Chichester: Wiley-Blackwell.
Brown, Jessica (2013) 'Immediate Justification, Perception, and Intuition' in Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism, 71–87, Chris Tucker ed., Oxford: Oxford University Press.
Chisholm, Roderick (1966) Theory of Knowledge, Englewood Cliffs, NJ: Prentice-Hall.
Christensen, David (2010) 'Higher-Order Evidence' Philosophy and Phenomenological Research 81(1): 185–215.
Cohen, Stewart (2010) 'Bootstrapping, Defeasible Reasoning, and A Priori Justification' Philosophical Perspectives 24: 141–59.
Cullison, Andrew (2010) 'What Are Seemings?' Ratio 23(3): 260–74.
Elga, Adam (2007) 'Reflection and Disagreement' Noûs 41(3): 478–502.
Foley, Richard (2001) Intellectual Trust in Oneself and Others, Cambridge: Cambridge University Press.
Foley, Richard (2005) 'Universal Intellectual Trust' Episteme 2(1): 5–12.
Huemer, Michael (2006) 'Phenomenal Conservatism and the Internalist Intuition' American Philosophical Quarterly 43: 147–58.
Huemer, Michael (2007) 'Compassionate Phenomenal Conservatism' Philosophy and Phenomenological Research 73: 30–55.
Jehle, David and Weatherson, Brian (2012) 'Dogmatism, Probability, and Logical Uncertainty' in New Waves in Philosophical Logic, 95–111, Greg Restall and Gillian Russell eds, Basingstoke: Palgrave Macmillan.
Kelly, Thomas (2010) 'Peer Disagreement and Higher-Order Evidence' in Social Epistemology: Essential Readings, 183–220, Alvin I. Goldman and Dennis Whitcomb eds, Oxford: Oxford University Press.
Kolodny, Niko (2005) 'Why be Rational?' Mind 114: 509–63.
Kung, Peter (2010) 'On Having No Reason: Dogmatism and Bayesian Confirmation' Synthese 177: 1–17.
Lasonen-Aarnio, Maria (2008) 'Single-Premise Deduction and Risk' Philosophical Studies 141(2): 157–73.
Lasonen-Aarnio, Maria (2014) 'Higher-Order Evidence and the Limits of Defeat' Philosophy and Phenomenological Research 88(2): 314–45.
Lasonen-Aarnio, Maria (2015) '"I'm onto Something!" Learning about the World by Learning What I Think about It' Analytic Philosophy 56(4): 267–97.
Littlejohn, Clayton (2012) Justification and the Truth Connection, Cambridge: Cambridge University Press.
Lord, Errol (2014) 'The Real Symmetry Problem(s) for Wide-Scope Accounts of Rationality' Philosophical Studies 160(3): 443–64.
Lycan, William (2013) 'Phenomenal Conservatism and the Principle of Credulity' in Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism, 293–304, Chris Tucker ed., Oxford: Oxford University Press.
McDowell, John (1982) 'Criteria, Defeasibility, and Knowledge' Proceedings of the British Academy 68: 455–79.


McDowell, John (1995) 'Knowledge and the Internal' Philosophy and Phenomenological Research 55: 877–93.
Miracchi, Lisa (forthcoming) 'Competent Perspectives and the New Evil Demon Problem' in The New Evil Demon: New Essays on Knowledge, Justification, and Rationality, Fabian Dorsch and Julien Dutant eds, Oxford: Oxford University Press.
Moretti, Luca (2015) 'In Defence of Dogmatism' Philosophical Studies 172: 261–82.
Pollock, John and Cruz, Joseph (1999) Contemporary Theories of Knowledge, 2nd ed., Lanham, MD: Rowman and Littlefield.
Pritchard, Duncan (2012) Epistemological Disjunctivism, Oxford: Oxford University Press.
Pryor, James (2000) 'The Skeptic and the Dogmatist' Noûs 34: 517–49.
Pryor, James (2005) 'There is Immediate Justification' in Contemporary Debates in Epistemology, 181–202, Matthias Steup and Ernest Sosa eds, Oxford: Blackwell.
Pryor, James (2007) 'Uncertainty and Undermining' (ms.)
Pryor, James (2013) 'Problems for Credulism' in Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism, 89–134, Chris Tucker ed., New York: Oxford University Press.
Raleigh, Thomas (MS) 'Plenty of Room Left for the Anti-Cartesian: A Reply to Barnett'.
Robson, Jon (2012) 'Aesthetic Testimony' Philosophy Compass 7(1): 1–10.
Russell, Bertrand (1912) The Problems of Philosophy, New York, NY: Henry Holt and Company.
Schechter, Joshua (2013) 'Rational Self-Doubt and the Failure of Closure' Philosophical Studies 163(2): 428–52.
Schoenfield, Miriam (2014) 'Permission to Believe: Why Permissivism Is True and What It Tells Us About Irrelevant Influences on Belief' Noûs 48(2): 193–218.
Schoenfield, Miriam (2015) 'A Dilemma for Calibrationism' Philosophy and Phenomenological Research 91(2): 425–55.
Siegel, Susanna (2012) 'Cognitive Penetrability and Perceptual Justification' Noûs 46: 201–22.
Siegel, Susanna (2013) 'The Epistemic Impact of the Etiology of Experience' Philosophical Studies 162: 697–722.
Silins, Nicholas (2008) 'Basic Justification and the Moorean Response to the Skeptic' in Oxford Studies in Epistemology, vol. 2, 108–42, Tamar Szabo Gendler and John Hawthorne eds, Oxford: Oxford University Press.
Sliwa, Paulina and Horowitz, Sophie (2015) 'Respecting All the Evidence' Philosophical Studies 172(11): 2835–58.
Soteriou, Matthew (2014) 'The Disjunctive Theory of Perception' in The Stanford Encyclopedia of Philosophy (Summer), Edward N. Zalta ed. Available at http://plato.stanford.edu/archives/sum2014/entries/perception-disjunctive/.
Tucker, Chris (2010) 'Why Open-Minded People Should Endorse Dogmatism' Philosophical Perspectives 24(1): 529–45.
Tucker, Chris (2013) 'Seemings and Justification: An Introduction' in Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism, 1–32, Chris Tucker ed., Oxford: Oxford University Press.
Vavova, Katia (2014) 'Moral Disagreement and Moral Skepticism' Philosophical Perspectives 28(1): 302–33.


Vogel, Jonathan (2014) 'E & ~H' in Perceptual Justification and Skepticism, 87–107, Dylan Dodd and Elia Zardini eds, Oxford: Oxford University Press.
Weatherson, Brian (2007) 'The Bayesian and the Dogmatist' Proceedings of the Aristotelian Society 107(1/2): 169–85.
Weatherson, Brian (2010) 'Do Judgements Screen Evidence?' (ms.)
Wedgwood, Ralph (2007) The Nature of Normativity, New York: Oxford University Press.
Wedgwood, Ralph (2013) 'A Priori Bootstrapping' in The A Priori in Philosophy, 226–47, Albert Casullo and Joshua Thurow eds, Oxford: Oxford University Press.
White, Roger (2005) 'Epistemic Permissiveness' Philosophical Perspectives 19(1): 445–59.
White, Roger (2006) 'Problems for Dogmatism' Philosophical Studies 131(3): 525–57.
Wollheim, Richard (1980) Art and Its Objects, 2nd ed., Cambridge: Cambridge University Press.
Worsnip, Alex (2015) 'Narrow-Scoping for Wide-Scopers' Synthese 192(8): 2617–46.
Wright, Crispin (2004) 'Warrant for Nothing (And Foundations for Free)?' Supplement to the Proceedings of the Aristotelian Society 78(1): 167–212.


2. Subjective Probability and the Content/Attitude Distinction

Jennifer Rose Carr

On an attractive, naturalistically respectable theory of intentionality, mental contents are a form of measurement system for representing behavioral and psychological dispositions. This chapter argues that a consequence of this view is that there is substantial arbitrariness in the content/attitude distinction. Whether some measurement of mental states counts as characterizing the content of mental states or the attitude is not a question of empirical discovery but of theoretical utility. (By theoretical utility, I mean practical utility associated with theory adoption that does not derive from the value of accuracy.) If correct, this observation has important ramifications in a number of philosophical literatures, not just in philosophy of mind but also in epistemology and rational choice theory.

The focus of this chapter is on the relation between the measure theory of mental content and the modeling of subjective probability in doxastic states. Subjective probabilities are expressed by statements like "It probably rained in Leeds last night" or "Ten to one he's going to lose his temper." Beyond that, there is considerable controversy. There are a variety of competing views about how to understand and model the doxastic states associated with subjective probability. The three most popular views model subjective probabilities as (i) binary beliefs about probability, (ii) precise credences, and (iii) imprecise credences. On the binary belief view, an agent's total doxastic state is best modeled as a function from propositions to one of three values, corresponding to belief, suspension of judgment, and rejection. On the precise credence view, an agent's total doxastic state is best modeled as a credence function: a function from propositions to real numbers, where greater numbers represent greater certainty in the relevant proposition. On a common version of the imprecise credence view,1 an agent's total doxastic state is best modeled as a set of credence functions.2

1 There are a variety of alternative imprecise credence views. For example, Kyburg (1983) defends an imprecise model whereby an agent's doxastic states are represented by a function that maps propositions to interval subsets of [0,1]; Sturgeon (2008) defends a model whereby imprecise credences can also exhibit a form of vagueness at their borders that is not captured in either the set of probabilities model or the interval model.
2 These options are not exhaustive. One alternative is the view that an agent's total doxastic state is best represented by a preorder over propositions, representing comparative confidence without cardinal degrees of confidence (Keynes 1921; de Finetti 1937; Koopman 1940; Savage 1972; Fine 1973). Another, more common alternative is the view that agents have both binary beliefs and some form of degrees of belief, precise or imprecise. How these two relate to each other is a topic of debate. Some hold that our doxastic states should be modeled with both precise and imprecise credences: that though agents have imprecise credences, there may be some precise credence function at a time that an agent most identifies with and that explains her behavior (Moss 2015). Some even hold that three different models are independently needed: belief, comparative confidence, and some form of credence model (Fitelson manuscript). The conclusion of this chapter generalizes to these other belief models, though the details of the generalization require care.
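For concreteness, here is a minimal Python sketch of the three modeling options just described. The propositions, numerical values, and encoding choices are invented for illustration; they are mine, not the chapter's:

# Three ways of modeling one agent's total doxastic state.
RAIN = "it rained in Leeds last night"
HEADS = "the coin will land heads"

# (i) Binary belief: each proposition gets one of three values.
binary_state = {RAIN: "belief", HEADS: "suspension"}

# (ii) Precise credence: a credence function, proposition -> real number.
precise_state = {RAIN: 0.9, HEADS: 0.5}

# (iii) Imprecise credence: a set of credence functions (a "representor").
imprecise_state = [
    {RAIN: 0.8, HEADS: 0.25},
    {RAIN: 0.9, HEADS: 0.5},
    {RAIN: 0.95, HEADS: 0.75},
]

# The imprecise state determines, per proposition, a set of values:
heads_values = {cr[HEADS] for cr in imprecise_state}
print(sorted(heads_values))  # [0.25, 0.5, 0.75]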


These different modeling claims are generally thought to correspond to claims about the fundamental descriptive nature of doxastic states. Are doxastic attitudes toward a proposition p coarse-grained, such that an agent either believes p, rejects p, or suspends judgment about p? Or are these attitudes more fine-grained? Are they fine-grained and sharp, like the maximum horizontal width of Chile, or fine-grained and imprecise, like the latitude of Chile? These are non-normative questions. But the different belief models are often linked with theories of rationality: for example, that it's irrational to have imprecise credences3 or irrational to have precise credences.4

The theory of intentionality and the theory of rationality are typically treated as orthogonal.5 But this chapter argues that an attractive theory of intentionality has important ramifications for rationality. If the measure theory of mental content is correct, I argue, then it cannot be the case that agents are irrational by virtue of having probabilistic binary beliefs, or having precise credences, or having imprecise credences, in response to their overall epistemic situation. The central argument of the chapter goes as follows:

(1) If the measure theory of mental content is true, then the content/attitude distinction is measurement system relative.
(2) If the content/attitude distinction is measurement system relative, then there's no psychological difference between having probabilistic binary beliefs, precise credences, and imprecise credences.
(3) If there's no psychological difference between having probabilistic binary beliefs, having precise credences, and having imprecise credences, then none of these is rationally impermissible.
(C) So, if the measure theory of mental content is true, then it is not rationally impermissible to have probabilistic binary beliefs, or precise credences, or imprecise credences.

First clarification: I use "psychological" to isolate those features of an agent's mental states that are either intrinsic to the agent or grounded in the interactions between the agent and her environment. (This is intended to be neutral with respect to internalism versus externalism about mental content.) What I mean to rule out are those features of the agent that depend on how the agent is described by the theorist: for example, whether she's described in English or Zazaki, or whether she's described in inches or centimeters, or whether she's described in the precise credence framework or the imprecise credence framework.

Second clarification: the consequent of (3) and of (C) is shorthand for the claim that, for any epistemic situation, an agent cannot be irrational merely by virtue of having probabilistic binary beliefs, nor merely by virtue of having precise credences, nor merely by virtue of having imprecise credences. (An epistemic situation specifies the conditions that are relevant for determining what total doxastic states are rationally permissible. According to evidentialists, this amounts to the agent's evidence. For others, other factors are relevant.) I mean to claim something stronger than that for each of the three doxastic types under discussion, there's some epistemic situation such that that doxastic type is permissible.

The chapter will proceed as follows: in section 1, I'll spell out the measure theory of mental content, first motivating the theory in 1.1 and then giving a toy measure theory for belief and desire in 1.2. Section 2 defends the first premise of the argument above. Section 3 defends the second premise, first comparing probabilistic binary beliefs with credences in 3.1 and then comparing precise with imprecise credences in 3.2. I close in section 4 with a brief defense of the third premise.

3 See e.g. White (2009); Elga (2010).
4 See e.g. Walley (1991); Joyce (2010).
5 One obvious exception involves theories of intentionality that make use of possible worlds propositions, which treat all necessarily true propositions as equivalent. (Likewise for all necessarily false propositions.) Plausibly, it is irrational to believe (or have high credence in) p while suspending judgment about (or having non-low credence in) ¬p. But arguably, it's not irrational to be uncertain about some abstruse mathematical falsehood while believing that it's either raining or not raining.

1. Content

1.1. The Measure Theory of Mental Content

Here's one picture of how minds have content: there is some non-physical relation between physical states of brains and certain abstract objects (propositions). By virtue of standing in this non-physical relation, creatures with brains are able to have contentful mental states. These relations are discovered when theorists observe agents' behavior, transitions among physically realized functional states of brains, and whatever else might form the physical basis for mental states.

Here's an alternative: the relation between mental states and abstract contents is not discovered but constructed. "Attitudes"—in particular, doxastic and orectic states—are just clusters of dispositions. They are not clusters of dispositions that also happen to have a content by virtue of standing in a special non-physical relation to a proposition.

On this latter theory, attitudes are related to propositions in the way that temperatures, lengths, and weights are related to numbers. What does it mean to say that a particular pizza is a fourteen-inch pizza, or that a particular beer stein holds thirty-four fluid ounces?


Does the pizza have that diameter in virtue of standing in a non-physical relation to the number fourteen? If this relation exists, it is neither mysterious nor explanatory. We know that the relevant relationship between the pizza and the number fourteen is constructed. Physical objects possess intrinsic and nonintrinsic physical properties—diameter, capacity, etc.—that are compactly represented by indexing them to representatives. Abstract objects such as numbers are well-suited for this purpose, for reasons we'll see. So similarly, physical objects with minds possess intrinsic and nonintrinsic physical properties—dispositions to form mental states, to act, and so on—that are represented by indexing them to abstracta like propositions.6 This view, which has been proposed and rediscovered many times over the past six decades,7 is sometimes called the "measure theory of mind."8

Imagine a time before numerical temperature scales have been created. Theorists understand that objects can be colder, hotter, etc. There is an infinite range of mutually exclusive possible thermal properties an object can have. It would be highly inconvenient to assign each such property a different word. We could describe them compositionally: really hot, kind of hot, warm, . . . But this wouldn't give us as much detail as we might find useful. We notice that there are certain structural relations between the various possible thermal properties of objects. Objects can be totally ordered by hotness; some object could be a little hotter than another, or a lot; for any two thermal properties, there is a thermal property midway between the two; and so on. We notice that the space of numbers also has these structural relations. So to characterize the range of mutually exclusive possible thermal properties, it's convenient to index different thermal properties to real numbers. Thus a tepid object must be assigned a number that's somewhere between the numbers assigned to a hot object and a cold object; a warm object's number must be between that of a hot object and a tepid object; and so on.

Which precise numbers we use for which temperatures is arbitrary. What is non-arbitrary is the relations amongst these numbers: they are assigned so as to preserve certain structural relations among the relevant thermal properties. For example, relative to a shared scale, the number representing the temperature of x should be greater than the number representing the temperature of y iff x is hotter than y. Of course, not all relations (or even all natural or interesting relations) between real numbers need to represent corresponding relations among thermal properties in order for numbers to be usable for representing thermal properties. We needn't say that a day when the temperature's 80ºF is twice as hot as a day when it's 40ºF, even though 80 and 40 are related in that way.

6 This picture is one of many, many theories of mental content. Others include Stampe (1977); Dretske (1981); Fodor (1987); Millikan (1989a, 1989b); Kriegel (2013). This survey is highly partial (in both senses). I will not address the many merits and shortcomings of alternative theories.
7 Versions of this suggestion have been discussed and defended by Suppes and Zinnes (1963); Churchland (1979); Field (1980); Stalnaker (1984); Dennett (1987); Davidson (1989); Matthews (1994).
8 Matthews (1994).
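The scale-relativity point lends itself to a small numerical illustration. A minimal Python sketch (the readings are invented; this is my gloss on the 80ºF/40ºF example, not part of the chapter): two scales describe the same thermal facts, the hotter-than ordering is preserved, but ratios are artifacts of the scale.

# Two temperature "measurement systems" for the same thermal facts.
def c_to_f(c):
    """Fahrenheit is an affine rescaling of Celsius."""
    return c * 9 / 5 + 32

readings_c = [4.4, 26.7]                      # two days, in Celsius
readings_f = [c_to_f(c) for c in readings_c]  # roughly 40F and 80F

# Order (hotter-than) is preserved under change of scale:
assert (readings_c[0] < readings_c[1]) == (readings_f[0] < readings_f[1])

# Ratios are not: the 80F day is numerically "twice" the 40F day,
# but the very same two days in Celsius are nowhere near a 2:1 ratio.
print(round(readings_f[1] / readings_f[0], 2))  # ~2.01
print(round(readings_c[1] / readings_c[0], 2))  # ~6.07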


Indeed, what relations are relevant for the representation of thermal properties isn't fully determined by facts about thermal properties.9

Next: imagine a time before propositional attitude ascriptions. Agents have complex dispositions to produce various forms of behavior, transitions between brain states and functional states, and so on. We notice that these dispositions have certain structural relations. Some are incompatible (rationally or psychologically); some necessitate others (rationally or psychologically); some are orthogonal; etc. We can represent these structural relations by indexing agents' psychological dispositions to abstract objects that have analogous structural relations. Here, real numbers are not the obvious choice. Numbers are a natural choice for many measurable attributes because of their ordinal and cardinal properties: temperatures, weights, lengths, and so on are conveniently represented with reals in part because an object can be a little hotter/heavier/longer/etc. than another, or a lot hotter/heavier/longer/etc. But typically we don't have much use for ordering psychological dispositions. The relations among psychological dispositions that we happen to be interested in representing are relations of incompatibility, necessitation, orthogonality, etc. So sets are convenient for representing psychological dispositions. If two dispositions are rationally incompatible, e.g., we can capture this by indexing them to disjoint sets.10

So there is some analogy between the relation between thermal states and numbers and the relation between mental states and propositions. How deep does this analogy go? The reason why thermal properties are conveniently represented with real numbers is that the range of possible mutually exclusive thermal properties shares certain structural properties with the set of real numbers. But again, how these real numbers are mapped to different thermal properties is largely arbitrary: there are different temperature scales that preserve the relevant ordinal and cardinal relations between numbers and thermal properties. The scales that we in fact use are convenient for various purposes—using 0 and 100 to privilege the phase change temperatures for water at sea level, or to privilege really cold and really hot air temperatures in Western European climates, etc.—but the selection of scale is fundamentally conventional. There are infinitely many different temperature scales that we could have used with equal fidelity to the physical facts.

There are more and less conservative interpretations of the measurement analogy.

9 I will not rehearse the details of standard expositions of measurement theory. For classic expositions, see Suppes and Zinnes (1963) and Krantz and Tversky (1971).
10 Psychological states' relations of incompatibility, entailment, etc. could also be representable with various forms of structured propositions or even (with some work) sentences in artificial or natural language. (See Matthews (1994).)


• Conservative: the relation between mental states and abstract objects is illuminated by, and no more mysterious than, the relation between thermal properties and abstract objects.11
• Less conservative: content ascriptions are literally measurement reports. As such, certain features of the representation space of indices are arbitrary. There are multiple equally adequate measurement "scales."
• Even less conservative: not only are there different possible "scales" for measuring psychological states, but there are also different psychological properties that might be useful to measure. None has decisive claim to the title "content."

On the least conservative version of the view, propositional contents are nothing more than tags that theorists use to keep track of subjects' behavioral and psychological properties. For a state to have propositional content is for it to be indexed relative to some representation space of propositions. Different kinds of objects (structured or unstructured, abstract or non-abstract) can form a representation space for mental states. This is the version of the analogy that we'll explore, and to which we'll apply the title "measure theory of mental content."

1.2. A Toy Measure Theory

Mental states have a variety of different properties that could be useful to index within some representation space for the purpose of measurement. Here is an oversimplified example of how properties of mental states motivate the use of a particular kind of representation space of abstracta.

First, we specify which properties of an agent's mental states are relevant to our interests. Suppose these are twofold: the mental states' rationality conditions and their causal tendencies.12 To capture the interesting relations amongst attitudes, and between attitudes and their environment, we can index them to abstract objects that are interrelated in parallel ways.

We assume that agents with mental states are, as a noncontingent matter, largely rational.13 So the rational interrelations between mental states are psychologically noncontingent. We then observe that set theoretic or boolean relations can conveniently represent rational relations between mental states. How? We can index each theoretically interesting mental state to some set. If one mental state is rationally or psychologically incompatible with another, we make sure to index them to disjoint sets. If one mental state rationally or psychologically necessitates another, we make the first a subset of the second. If the two mental states are rationally or psychologically compatible but neither necessitates the other, we index them to overlapping sets such that neither is a subset of the other. And so on. In principle, this could be done with any sets of large enough cardinality: sets of numbers, pizzas, or stars.

But we are also interested in the causal relations between attitudes and their environment. So it's more theoretically valuable to use sets that clue us in to these causal relations. For example, suppose we've already divided mental states on the basis of functional role such that one subset contains beliefs and another contains desires.14 We index each belief to a set of possible worlds—the worlds where the facts15 that tend to cause the belief obtain.16 We index each desire to a set of possible worlds: the worlds where the facts that the desire tends to cause obtain. Now, from knowing that some belief is indexed to a certain set of worlds, we are able to read off certain facts about the state: that the agent can't rationally hold a belief indexed to any set disjoint from that set, for example, and that the belief is one that tends to be caused by facts that obtain in worlds in that set.

So, in our toy model, if a belief tends to be caused by the fact that p is true, then we index the belief with the possible worlds proposition p; we say that the belief has p as its content. If a desire tends to cause q to be true, then we index the desire with the possible worlds proposition q; we say that the desire has q as its content. The representation space of sets of possible worlds can play the two roles we'd aimed for: set relations mirror rational relations; sets of possible worlds encode the tendential causal relations between mental states and their environments. (A toy implementation of this indexing scheme is sketched below.)

This is obviously a simplification. We might, for example, be interested in representing more features of mental states than their rational interrelations and causal relations with their environment. If so, there are decisions to be made about whether, in representing these other features, it is more useful to use a separate representation space (treating these as distinct measurements), or a unified representation space. What motivates choosing some specific representation space as the space of contents?

11 This seems to be the version of the thesis endorsed in Matthews (1994).
12 There are various possible motivations for this choice of properties to index to "content." The assumption that agents must be (widely) rational in order to have contentful mental states is widespread but not universal. It is sometimes tied to the radical interpretation view (Davidson 1973; Lewis 1974; Williamson 2009, ch. 8). The relevance of causal information is also widespread (from Fodor (1987) to Millikan (1989a, 1989b) and Dretske (1988)). The view I sketch here is a simplified version of the view defended in Stalnaker (1984, ch. 1).
13 We'll take no stand on which kinds of entities will count as agents; but if a cat or an insect counts as an agent, then it is largely rational.
14 I remain neutral about how this might work; the story we tell is consistent with indication theories, teleosemantics, and so on.
15 For ease of exposition, I assume that the relevant causal relata are mental states and facts.
16 Simplifying wildly, I elide complexities introduced by false belief. These could be filled in using Dretske's "channel conditions," Stalnaker's "normal conditions," and so on. I also ignore distal versus proximal causes.


2. Relativism about the Content/Attitude Distinction

Clearly, even in our toy measure theory, some properties of mental states are not reconstructable from the propositions to which they are indexed. Not everything is built into the content. A belief and a desire may have the same propositional content, but they are not the same mental state. Some properties of mental states are encoded in propositional contents, while other properties of mental states are not. These other properties are relegated to the "attitude," "form," "vehicle," etc.

We can encode properties of mental states in various ways. There are different properties of an agent's total set of psychological and behavioral dispositions that can be measured using different systems of indices. For some such properties, we can conveniently use real numbers. Obvious candidates for this treatment are credences and utilities. For other features of mental states, we can use sets, or sequences of objects, properties, and relations, or sentences, or . . . These different indexing tools can also be combined.

Return to our example motivations for a representation space of sets of possible worlds. The causal tendencies linking mental states and their environment are plausibly vague, and come in degrees: hence credences and utilities. But if we think of attitudes as coming in degrees, can we retain the same rationale for using sets of worlds as contents? For binary beliefs, being indexed to disjoint sets indicated rational or psychological incompatibility. But one can rationally have exactly the same degree of belief in both p and ¬p: namely, credence 0.5. So disjoint sets as contents no longer represent rational or psychological incompatibility.

Similarly, what if we want to represent features of attitudes other than causal relations? We often take some of our beliefs to be "about" things that arguably couldn't stand in causal relations to them, e.g. mathematical facts, normative facts, and so on. So content ascriptions aren't confined to encoding information about the causal conditions and causal tendencies of mental states.

In both cases, the measure theorist faces the question: should we represent the relevant features of mental states within the representation space that we use for attributing mental content? Or should we separately specify these features in our characterization of the mental state as part of the attitude? The measure theorist is committed to the conclusion that answering these questions might be a matter of decision, not discovery.

There are (at least) three ways in which the choice of measurement system is susceptible to arbitrariness17:

1. There is arbitrariness in the assignment of indices to physical properties.

In quantitative measurements, this affects choice of scale. We measure thermal properties with numbers, but the assignment of numbers is conventional and arbitrary (up to the point of preserving the relevant structural relations of thermal properties within the representation space). There are different temperature scales, none of which can be sensibly thought of as the "one true temperature scale." Any particular temperature scale is at most preferable within particular contexts, relative to local conventions or other sources of practical utility.

2. There is arbitrariness in the choice of which physical properties to measure.

For ordinary objects, it's obvious that there is a huge variety of physical quantities that are worth measuring in different contexts, for different purposes. You might be most interested in the pizza's diameter, because you're hungry, while I might be most interested in its temperature, because I want to avoid cheese-burn, while the delivery driver might be most interested in its speed, to ensure it arrives in thirty minutes or less. None of these systems of measurements is privileged (capturing the "numerical content" of the pizza); which system we should use depends on our purposes, what features of objects we want to keep track of.

3. There is arbitrariness in the individuation of measurements.18

We can measure an object's speed; we can measure its direction; we can measure its temperature. We can also measure its velocity = 〈speed, direction〉; we could, if we wanted, measure its 〈temperature, direction〉. Obviously, it's not the case that speed is a defective system of measurement because it doesn't encode information about direction. It just measures a somewhat different physical quantity from what velocity measures. Should we use a velocity measurement, or two separate measurements of speed and direction? Both do equally well with respect to information about the physical facts (see the sketch below). Similarly, we might describe an object's movement in terms of its constant acceleration over an interval of time, or we might describe the same movement in terms of its speed at each of the times within the interval. Whether we count this as one measurement or many makes sense only relative to a choice of measurement system.

The choice of measurement system we use is a matter of theoretical utility rather than accurate representation of the physical facts. If the measure theory of mental content is correct, then we should expect each of these to hold for mental content attributions. First, we should expect there to be multiple adequate representation spaces of contents, which are more or less preferable relative to the theorist's aims in a context, and not relative to "the one true assignment of contents." Second, creatures like us have a huge variety of potentially interesting psychological properties. We should expect that there may be no unique privileged body of content-determining properties. Third, we should expect that, insofar as different measurement systems individuate the psychological properties relevant for contents in different ways, there will be arbitrariness in content individuation: relative to different measurement systems, the same mental state might count as one attitude toward one proposition or multiple attitudes toward multiple propositions.

17 That is, arbitrariness with respect to the goal of accurately representing psychological states. Pragmatic considerations may motivate a choice of measurement system; alethic considerations cannot.
18 Here, "measurements" does not mean acts of collecting information about the system ("taking measurements") but rather the unit-sized chunks of information so acquired.
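To illustrate point 3, a small sketch (Python; the data and variable names are mine, not the chapter's): the same motion can be carved up as one velocity measurement or as two separate measurements, with no loss of information either way.

# One measurement system: velocity as a single <speed, direction> pair.
velocity = (20.0, 90.0)  # 20 mph heading due east (direction in degrees)

# Another system: two separate measurements of the same motion.
speed = 20.0       # mph
direction = 90.0   # degrees

# Either carving recovers the same information; neither is privileged.
assert velocity == (speed, direction)

# Changing units changes the indices, not the physical facts:
print(round(speed * 1.609344, 1))  # 32.2 km/h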


One might respond that the first observation can be settled by stipulation: the propositional indices will count as content. But which indices are propositional? Because there are several live hypotheses about the nature of propositions, this is unhelpful.19 Moreover, the stipulation is not innocuous: expressivists often view it as a central thesis of their positions that there is a type of judgment, expressible in assertoric sentences, that has non-propositional content.20 Similarly, many have argued that perceptual experiences have non-propositional content.21 Finally, as we'll see, whether some property of a mental state should feature in the proposition to which it's indexed or some other measurement is not settled by facts about the agent's dispositions, brain states, causal history, environment, etc. If the measure theory is correct, then this question is only answerable relative to a choice of measurement system.

In a slogan, the measure theory is committed to mental content relativism:

Mental content relativism: What content(s), if any, a psychological state has is measurement system relative.

More importantly for present purposes, the measure theory is committed to mental content/attitude relativism:

Mental content/attitude relativism: Whether some property of a psychological state is part of the content or the attitude is measurement system relative.

These forms of relativism should be distinguished from antirealism about content. I won't attempt the foolhardy task of giving a general definition of antirealism. I'll merely note that you don't count as an antirealist about whether a bird is flying at twenty mph merely because you concede that you could have given its speed in km/h, or given its velocity instead. Suppose the sentence "The bird is flying at twenty mph" is felicitously uttered. The proposition expressed by that utterance will be true or false depending solely on the motion of the bird. Similarly, you don't count as antirealist about whether a mental state is a belief that p (relative to a representation space) merely because you concede that it could have been equally well characterized in a different measurement system, or you could have described some mental state constituted by an overlapping but slightly different cluster of dispositions, physical states, etc.22

Similarly for instrumentalism about content, the view that propositional attitude ascriptions are not literally true (or truth-evaluable), and are merely useful devices for specific purposes. Measurements are literally true. It's just that which things we choose to measure, and which units we use, depend on our purposes.

19 Propositions may be complexes of senses (Frege 1892), or complexes of objects, properties, and relations (Russell 1903), or sets of possible worlds (Stalnaker 1984), or cognitive event types (Soames 2014), or . . .
20 See, e.g., Gibbard (1990, 2003); Schroeder (2009).
21 E.g. Tye (2000).
22 The view is, however, antirealist about the content-attitude distinction, in the sense that this is merely a feature of measurement systems for characterizing psychological states.


Objection. The analogy between content and speed breaks down in contexts of disagreement. (4a) does not conflict with (4b):

(4) a. The bird is flying at 20 mph.
    b. The bird is flying at 32 km/h.

But (5a) does conflict with (5b):

(5) a. Property F features in the content of state s.
    b. F does not feature in the content of s.

A realist about these claims is committed to the hypothesis that one is false. The relativist23 sidesteps this by treating (5a) and (5b) as semantically incomplete. They are only interpretable as true or false if relativized to a measurement system:

(6) a. Property F features in the content of state s according to measurement system M1.
    b. F does not feature in the content of s according to measurement system M2.

Ordinary mental content attributions do not explicitly relativize to measurement systems and so have the form of (5a) and (5b), which according to relativism are semantically incomplete. So relativism about content and about the content/attitude distinction is in fact antirealist about ordinary mental content attributions.

Reply. On the relativist view, the claim that s has F must be relativized to a measurement system in order to express a truth-evaluable proposition. But there's no clear reason for requiring this relativization to be explicit. It doesn't seem to me that ordinary speakers exhibit semantic blindness or make some other kind of mistake of linguistic competence by virtue of the fact that they don't explicitly relativize their mental state attributions to a measurement system. The selection of measurement system may sometimes be provided by some relevant context.24

Compare: given special relativity, claims about (e.g.) length must be evaluated relative to a reference frame. In accepting special relativity, we are not thereby committed to antirealism about length. Similarly, sentences that include non-anaphoric pronouns and demonstratives must be evaluated relative to an assignment of referents to the free variables. But we needn't be antirealists about facts expressible in indexical language.25

23 In the vein of Harman (1996).
24 This view is also meant to be neutral with respect to relativism about truth in the vein of MacFarlane (2014), according to which sentence truth is relativized to both the context of utterance and a context of assessment; cf. Egan (2007).
25 Caveat: the terms "antirealism" and "relativism" are precisified in a variety of ways in a variety of philosophical literatures. There are some precisifications that are incompatible with my usage. My aim is to situate the view under discussion by contrasting it with alternative possible views, not to defend it against the charge of antirealism under any interpretation of the word. Thanks to Jonathan Livengood for pressing this objection.

46 | Jennifer Rose Carr Relativism about the content/attitude distinction has an affinity with interpretivism: Interpretivism: an agent’s (partial or binary) beliefs and desires are whatever beliefs and desires an ideal interpreter would ascribe to the agent in order to make the best (explanatory and rationalizing) sense of the agent’s dispositions to act.26 Interpretivism of some form or another has been a highly influential view among formal epistemologists and decision theorists. Understanding contents as measurements provides a kind of answer to a challenge for interpretivists: why are beliefs and desires the kinds of things that necessarily coincide with the ideal interpreter’s best interpretation? On the view under discussion, to have a contentful attitude is to have a cluster of properties that an (in some sense) ideal interpreter might index to some kind of object that can enter into boolean relations. Contents just are conventional tags that are useful in interpretations.27 Note, however, that the two views are, strictly speaking, orthogonal. An interpretivist can accept that there’s a unique, privileged space of contents that agents have attitudes toward, and hold that ideal interpretation assigns attitudes to these contents.28 And one can hold that there’s arbitrariness in the content/attitude distinction while denying that mental states have an illuminating relationship with ideal interpreters. (What speed a car is going at has no illuminating relationship with the speed attributed to it by an ideal interpreter.)

3. Ramifications for Belief Modeling 3.1. Binary Belief vs Credence Some have denied that ordinary agents have credences, precise or imprecise.29 There are a variety of reasons for this conclusion, often having to do with computational limitations and the theoretical superfluity of quantitative attitudes. We may have some attitudes toward probabilistic contents, these philosophers argue, but not all of our doxastic states have a tacit commitment to probabilities. A general challenge for reducing credences to binary belief is to account for subjective probability. Suppose we take it as a datum that I assign subjective probability 0.5 to the proposition that a coin will land heads the next time it’s

26

See in particular Lewis (1974). Relativism about the content/attitude distinction suggests that what counts as a best interpretation depends partly on the interpreter’s purposes. A result of this view is that there may not be a unique privileged interpretation, for reasons that are totally independent of indeterminacy of reference. 28 Williams (manuscript) argues that those who defend probabilism in terms of representation theorems (e.g. Savage 1954) may be committed to this view. 29 Harman (1986), Holton (2015). 27


How can this be cashed out as a binary belief? It's clear that we have doxastic attitudes toward claims about probabilities: I might believe that the objective chance that the coin will land heads is 0.5. But binary beliefs about objective chances are not the same thing as subjective probabilities. I just as well might suspend judgment on the proposition that the objective chance that the coin will land heads is 0.5. I might be more or less confident of this proposition. Indeed, suppose I know that either the coin is biased 3-to-1 toward heads or it's biased 3-to-1 toward tails, but I'm uncertain which. I fail to believe that the objective probability of heads is 0.5—I think that the objective probability is either 0.75 or 0.25—but nevertheless my subjective probability that the coin will land heads is neither. Indeed, it might be 0.5. One can run a similar argument against the hypothesis that subjective probabilities are beliefs about evidential probability. These forms of belief about other kinds of probability are orthogonal to subjective probabilities.

Holton (2015) and (p.c.) denies that ordinary agents have credences, but accepts that we do have subjective probabilities. We don't need to appeal to other kinds of probability to explain the phenomenon of subjective probability, on Holton's view. Instead, if there are subjective probabilities, we should think of them as figuring into the content of doxastic states, rather than the attitude.

Initially, this suggestion may seem puzzling, since the notion of subjective probability is typically identified with the notion of a credence. The two phrases are sometimes taken to be synonymous. It would make no sense to reduce credences to beliefs about credences. But on Holton's view, the two notions come apart. Subjective probabilities—estimates of a primitive likelihood—are the kinds of things that people can have beliefs about.30

So (going beyond Holton's view), instead of sets of worlds as contents, the contents of binary beliefs might be sets of world–probability function pairs, where the second parameter represents a possible subjective probability function. An agent's total doxastic state could be represented as a set of 〈w, Pr〉 pairs. Where the agent is committed to the probability of some possible worlds proposition p being n, all of these pairs contain a Pr coordinate such that Pr(p) = n. However, the agent need not assign subjective probabilities to all propositions. Where the agent withholds judgment about the probability of some possible worlds proposition p, the agent's doxastically possible 〈w, Pr〉 pairs will contain Prs that assign different values to p.

But what is the psychological difference between a binary belief that assigns p subjective probability n and a credence that assigns p subjective probability n? The measure theorist should say: nothing. They might involve precisely the same dispositions to cause and be caused by other mental states; they might be grounded in the same evidence and they might cause precisely the same dispositions to act. Decision rules associated with credences have notational variants expressible in terms of binary belief. If the agent's belief state fails to determine a unique probability function, no problem: the agent's psychological dispositions will be easily representable as imprecise credences. If an agent's belief set B is a set of 〈w, Pr〉 pairs, her imprecise credal state is C = {Pr : ∃w 〈w, Pr〉 ∈ B}. Here, again, whatever the correct decision norms for imprecise credences are, they can straightforwardly be generalized to binary beliefs about uncertain subjective probabilities.31 (A sketch of this translation appears below.)

Here, the measure theorist should accept: whether the property of a doxastic state that we characterize as subjective probability is indexed as part of the content, or is indexed separately as a different feature of the attitude, is largely arbitrary. If it is common ground that doxastic states can have associated subjective probabilities, we can encode this information in various ways. Fans of credences measure possible worlds content and confidence separately, while fans of binary belief measure them together. So similarly, we might separately represent measurements of an object's speed and of its direction, or we might represent these as a single measurement of velocity. It is not worth debating which of these measurements captures the objective "movement content" of moving objects. Subjective probabilities may or may not be roughly introspectable. But whether they are a feature of the content or the attitude is not. If measure theory is correct, then there's nothing there to introspect.

30 See also Sepielli (2012) on the notion of minimal probability.
31 So, for example, Bayesian decision theory requires selecting options that maximize expected utility, where the relevant expectation is calculated by the agent's credence function. In binary belief terms, we can calculate expectations relative to the probability function(s) present in the agent's belief set. If the agent's belief set fails to determine a unique probability function, the relevant set of probability functions can be used with variations of decision rules for imprecise credences: for example, E-permissibility (Joyce 2010) or Γ-Maximin (Seidenfeld 2004).
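As flagged above, here is a minimal Python sketch of the translation from a belief set of 〈w, Pr〉 pairs to an imprecise credal state, together with footnote 31's point that imprecise decision rules (here Γ-maximin) carry over. Every proposition, world, payoff, and number is invented for illustration; this is my rendering, not Carr's formalism:

# A binary-belief state modeled as a set of <world, Pr> pairs, where each
# Pr is a candidate subjective probability function (proposition -> value).
HEADS = "the coin lands heads"

belief_set = {
    ("w1", frozenset({(HEADS, 0.25)})),
    ("w2", frozenset({(HEADS, 0.25)})),
    ("w3", frozenset({(HEADS, 0.75)})),
}

# The induced imprecise credal state: C = {Pr : there is some w with <w, Pr> in B}.
credal_state = {pr for (_, pr) in belief_set}
print(sorted(dict(pr)[HEADS] for pr in credal_state))  # [0.25, 0.75]

# Gamma-maximin over the credal state: pick the act whose worst-case
# expected utility is highest.
acts = {
    "bet-heads": {True: 10, False: -10},  # payoff if heads / if tails
    "abstain": {True: 0, False: 0},
}

def expected_utility(pr, act):
    p = dict(pr)[HEADS]
    return p * acts[act][True] + (1 - p) * acts[act][False]

best = max(acts, key=lambda a: min(expected_utility(pr, a) for pr in credal_state))
print(best)  # abstain: bet-heads has worst-case EU -5, abstain has 0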


3.2. Precise vs Imprecise Credence

Relativism about the content/attitude distinction also entails that, as a psychological matter, there's no non-relative answer to the question of whether an agent's credences are precise or imprecise. Imprecise credence attributions are typically motivated in three ways: psychologically, pragmatically, and epistemically. Briefly, these views claim:

1. Imprecise credences are irrational but psychologically realistic: the imprecise credence model provides a representation of uncertainty that's more descriptively plausible than attributions of precise credences.
2. Imprecise credences are rationally permissible: the imprecise credence model provides a representation of uncertainty that seems to correspond to certain forms of practically rationally permissible dispositions to act, which the precise model (paired with traditional decision rules) falsely claims are practically irrational.
3. Imprecise credences are rationally required: the imprecise credence model provides a representation of an epistemically rationally mandatory response to evidence that is ambiguous (i.e. that points in different directions) or unspecific (i.e. that doesn't point in any direction).


On the first view, precise credences are unrealistic as a representation of our doxastic states. This might be motivated by the observation that we rarely, and plausibly never, have introspective access to precise credences.32 It might be motivated by the fact that we have finite brains, incapable of infinitely sharp credences.33 It's worth noting, however, that neither of these worries is adequately answered by the traditional imprecise credence model. The traditional model represents an imprecise credal state with a set of credence functions. This set of credence functions determines, for each proposition that the agent has a doxastic attitude toward, a set of real numbers representing the agent's imprecise credence in that proposition. This set of real numbers has infinitely sharp boundaries, which are no more introspectable than precise credences. Similarly for the conception of imprecise credences as merely assigning intervals or upper and lower previsions.34

Further motivations come from the hypothesis that imprecise credences are manifested in behaviors that are treated as irrational within traditional expected utility theory. The claim that we in fact display these behaviors and thereby manifest imprecise credences is orthogonal to the claim that these behaviors are irrational, but because our present purposes are descriptive, the two views can be discussed together. Some behaviors are often associated with imprecise credences: for example, having distinct buying and selling prices for gambles,35 or willingness to forgo certain forms of sure gain in diachronic betting contexts.36 But these associations are based on specific assumptions about how precise credences must be manifested in behavior: specifically, that agents with precise credences are expected utility maximizers. (Neither of these forms of behavior is consistent with expected utility maximization, given fixed utilities.)

The link between a particular means of indexing psychological dispositions and rules of rational decision-making is weak. Alternative decision rules might be implemented that allow for an agent's choice behavior to be explained equally well in terms of precise credence.

32 Cases where agents have access to precise objective chance information seem to be the best candidate, but realistically these cases never arise. We never in fact learn with certainty that the coin is perfectly fair. Credence 1 and 0 might be the exception. But even here, there are worries: ordinary agents typically refuse to bet their lives on any proposition, no matter how obvious.
33 Note: it doesn't follow from the finitude of our brains that we can't have infinitely precise credences. Finite creatures can, after all, have infinitely precise blood-alcohol levels.
34 See Kyburg (1983); Pedersen and Wheeler (2014). This concern is addressed in Sturgeon (2008), who suggests that credences be both imprecise and vague. How precisely this vagueness is to be modeled is an open question. I don't consider this model further in this chapter, but see Carr (manuscript) for discussion.
35 E.g. Walley (1991).
36 See Elga (2010).


For example, suppose an agent has distinct buying and selling prices for gambles. These buying and selling prices for gambles might be separable and determined by (precise) higher-order probabilities. For example, an agent might have a precise credence n_i in p, but have positive credence in the proposition that the probability of p is instead n_j, and in the proposition that the probability of p is instead n_l, etc.37 Let the minimal probability for p that the agent assigns positive credence to be n_1 and the maximum be n_k. We might further allow that a rational agent might bet, not in accordance with her first-order credence in p, but instead in accordance with the probabilities for p she thinks are possible. She might buy bets on p as though she had precise credence n_1 and sell bets on p as though she had precise credence n_k. (A sketch of this buy/sell pattern appears below.)

Similarly, depending on the choice of decision rule, imprecise credences might be manifested in behavior that is indistinguishable from expected utility maximizing (or other behaviors associated with precise credences). For example, an agent's buying and selling prices might be determined by a privileged credence function in her representor with which the agent identifies,38 or the midpoints of upper and lower credences,39 etc.

If the measure theory is correct, then, there need be no psychological difference between precise and imprecise credences. These may simply be alternative measurement systems for characterizing the same set of complex psychological dispositions. Again, one represents a certain openness about probabilities in the attitude (an imprecise credence), while the other represents this in the content (uncertainty in probabilistic propositions). These different systems have their own upsides and downsides. For example, the imprecise credence representation is less simple than the precise credence representation when it comes to the characterization of mental states. But if having distinct buying and selling prices for gambles is rationally permissible, then the imprecise credence representation compensates for this shortcoming with its comparatively simpler decision rule.

Content/attitude relativism has no bearing on whether behaviors associated with imprecise credences are rationally permissible. But it does have bearing on whether imprecise credences are rationally required. So we should consider the third motivation for imprecise credences, which claims that they are epistemically required as a response to ambiguous or unspecific evidence.

37 What probability is at issue here? Objective and epistemic probabilities are obvious candidates, as well as hypotheses about the agent's own first-order credences. Some other conception of subjective probability, along the lines discussed in the previous subsection, is even an open possibility.

38 See Moss (2015).

39 As Moss notes, on the midpoint view, an agent may behave as though her credences are not probabilistic, since the midpoints of imprecise credences determined by a set of probability functions need not themselves form a probability function. But this is consistent with the agent's having precise, non-probabilistic credences.
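The claim in note 39 is easy to check numerically. In this sketch the three-function representor is my own toy example, not Moss's:

# Three probability functions over the partition {A, B, C}.
representor = [
    {"A": 1.0, "B": 0.0, "C": 0.0},
    {"A": 0.0, "B": 1.0, "C": 0.0},
    {"A": 0.0, "B": 0.0, "C": 1.0},
]

# Midpoint of the lower and upper credences for each cell.
midpoints = {}
for cell in ("A", "B", "C"):
    lower = min(p[cell] for p in representor)
    upper = max(p[cell] for p in representor)
    midpoints[cell] = (lower + upper) / 2

print(midpoints)                 # each midpoint is 0.5
print(sum(midpoints.values()))   # 1.5: additivity fails, so not a probability function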


The standard example motivating rationally required imprecise credences involves contrasting two evidential situations:

Fair Coin: You have a coin that you know to be perfectly fair. What should your credence be that when you next toss the coin, it'll land heads?

Mystery Coin: You have a coin that was made at a factory where they can make coins of pretty much any bias. You have no idea whatsoever what bias your coin has. What should your credence be that when you next toss the coin, it'll land heads? (See e.g. Joyce 2010.)

There is a sharp credence that stands out as a natural candidate for both cases: 0.5. In the Fair Coin case, credence 0.5 is required by the Principal Principle. In the Mystery Coin case, you have no more reason to favor heads than to favor tails; your evidence is symmetric. Credence 0.5 in both heads and tails seems prima facie to preserve this symmetry.

But the proponent of imprecise credences claims this reasoning doesn't properly distinguish the evidential situation of someone in the Mystery Coin case from the Fair Coin case. In the Fair Coin case, you have much more specific evidence. The evidence in the Mystery Coin example is too unspecific to warrant any assignment of a precise credence. It requires a different kind of uncertainty. Because you have no information about the chance in the Mystery Coin case, this proponent of imprecise credences claims, you should have an imprecise credence that contains all of the probabilities that could be equal to the objective chance of the coin's landing heads, given your evidence. On this view, any precise credence would amount to taking a definite stance when the evidence doesn't justify a definite stance. It would mean adopting an attitude that was somehow more informative than what the evidence warrants.

The view that imprecise credences can be epistemically required is widespread:

[E]ven if men have, at least to a good degree of approximation, the abilities bayesians attribute to them, there are many situations where, in my opinion, rational men ought not to have precise utility functions and precise probability judgments. (Levi 1974, 394–5)

If there is little evidence concerning O then beliefs about O should be indeterminate, and probability models imprecise, to reflect the lack of information. (Walley 1991)

Precise credences . . . always commit a believer to extremely definite beliefs about repeated events and very specific inductive policies, even when the evidence comes nowhere close to warranting such beliefs and policies. (Joyce 2010, 285)

If you regard the chance function as indeterminate regarding X, it would be odd, and arguably irrational, for your credence to be any sharper . . . How would you defend that assignment? You could say "I don't have to defend it—it just happens to be my credence." But that seems about as unprincipled as looking at your sole source of information about the time, your digital clock, which tells that the time rounded off to the nearest minute is 4:03—and yet believing that the time is in fact 4:03 and 36 seconds. Granted, you may just happen to believe that; the point is that you have no business doing so. (Hájek and Smithson 2012, 38–9)

Now, it’s clear that there is no barrier to distinguishing the states of uncertainty in the Fair Coin and Mystery Coin cases within the precise credence model. Whether an agent is certain about the objective chance of heads or deeply uncertain is not obviously the kind of thing that can be read locally off of the agent’s credence in the proposition that the coin will land heads. There are other relevant global features of the agent’s total doxastic state: for example, the agent’s credences in propositions explicitly about the objective chance of heads (e.g. low credence that ChðheadsÞ ¼ 0:5).40 In the Fair Coin case, the rational agent with precise credences adopts a credence function which assigns PrðheadsÞ ¼ 0:5 and PrðChðheadsÞ ¼ 0:5Þ ¼ 1. In the Mystery Coin case, she adopts a credence function which assigns PrðheadsÞ ¼ 0:5 and PrðChðheadsÞ ¼ 0:5Þ  0. These are different total doxastic states; so the precise credence model can distinguish different forms of uncertainty appropriate to the two evidential situations. In general, for any specification of what information precise credences are meant to be inappropriately committed to, it will always be possible to characterize a precise credence function that is entirely noncommittal about (i.e., that has middling credence in) that information. The information might be information about objective probabilities, or evidential probabilities, etc. But even if that weren’t the case—even if no such proposition could be specified in terms of, e.g., possible worlds propositions—we’re still not forced to represent the relevant imprecision (suspension) in the attitude. We might use non-possible-worlds propositions that the agent could have precise credence in. (Propositions as sets of 〈w; Pr〉 pairs, for example.) For that matter, we could represent the same state of uncertainty using binary belief. Where an agent is uncertain about some probabilistic features of a proposition p (e.g., its objective chance), there is a range of probabilities of p that the agent treats as open: call this set S. The imprecise credence model uses a measurement system that treats S as a feature of the attitude toward the content p. The precise credence model uses a measurement system that treats all non-subjective probabilities in S as components of contents: propositions about the probability of p. Only subjective probabilities figure into the attitude.

40 The degree of resilience in the agent’s credence in heads (that is, how much her credence in heads would be affected by the introduction of new evidence) is also relevant. See Skyrms (1977).
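The resilience point in note 40 can be seen in the same toy model (again my own illustration, not Skyrms's): conditionalizing on one observed head leaves the Fair Coin credence fixed but moves the Mystery Coin credence.

grid = [i / 100 for i in range(101)]
fair = {0.5: 1.0}
mystery = {x: 1 / len(grid) for x in grid}

def credence_in_heads(dist):
    return sum(x * w for x, w in dist.items())

def update_on_heads(dist):
    # Bayes: the posterior over chances is proportional to x * prior(x).
    normalizer = sum(x * w for x, w in dist.items())
    return {x: x * w / normalizer for x, w in dist.items()}

print(credence_in_heads(update_on_heads(fair)))     # 0.5: perfectly resilient
print(credence_in_heads(update_on_heads(mystery)))  # ~0.67: not resilient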


Relativism about the content/attitude distinction predicts: this disagreement has only to do with the assessor's choice of measurement system, and not with the agent's psychological or decision-theoretic commitments.

In section 2 I argued that the measure theory of mental content generated three forms of arbitrariness in the assignment of contents. Each of these forms of arbitrariness potentially affects the modeling of subjective probability:

1. There is arbitrariness in the assignment of indices to physical properties. This is most obvious in the scale for subjective probability. It is conventional to treat 0 as the lowest subjective probability and 1 as the highest; a scale from 22 to 84, where 22 represents maximal certainty in a proposition and 84 represents maximal rejection, would have represented cardinal uncertainty just as well (though it would have made for uglier math; a toy rescaling along these lines is sketched just after this list). The measure theorist will say that there are also multiple measurement systems for contents that plausibly do the job equally well: for example, different conceptions of propositions.

2. There is arbitrariness in the choice of which physical properties to measure. The measure theorist should say: in content measurements, we may include whatever cluster of dispositions form the ground for subjective probabilities as well as commitment to other forms of probability, like the proponent of probabilistic binary belief. Or we might include other forms of probability (in particular objective probability) within the contents, but exclude subjective probability, treating it as a different measurement, like the proponent of precise credences. Or we might exclude all psychological dispositions related to probability, treating them all as figuring into the attitude, like the proponent of imprecise credences.

3. There is arbitrariness in the individuation of measurements. In the Mystery Coin case, the imprecise credence model sees one attitude toward one content where the precise credence model sees multiple attitudes toward multiple contents (credence in p versus credence in propositions about the chance of p). The measure theory predicts that these are equally accurate: again, compare measuring velocity to measuring speed and separately measuring direction.
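The conventionality in item 1 can be made vivid with a two-line rescaling; this sketch is mine, not Carr's, and the 22–84 endpoints simply follow her example:

# An order-reversing affine rescaling of the usual [0, 1] probability scale:
# 1 (maximal certainty) maps to 22 and 0 (maximal rejection) maps to 84.
def to_alt_scale(credence):
    return 84 - 62 * credence

def from_alt_scale(value):
    return (84 - value) / 62

for credence in (0.0, 0.25, 0.5, 1.0):
    value = to_alt_scale(credence)
    # The round trip recovers the original credence, so no information is lost.
    assert abs(from_alt_scale(value) - credence) < 1e-12
    print(credence, "->", value)

Because the map is affine and invertible, ratios of credence differences are preserved, which is all a cardinal scale is responsible for.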

4. Ramifications for Epistemology

Section 2 argued that if the measure theory of mental content is true, then the content/attitude distinction is measurement system relative (premise 1). Section 3 showed that if the content/attitude distinction is measurement system relative, then there's no psychological difference between having probabilistic binary beliefs, having precise credences, or having imprecise credences (premise 2). We can now turn to the third and final premise of the central argument of this chapter:


(3) If there's no psychological difference between having probabilistic binary beliefs, having precise credences, and having imprecise credences, then none of these is rationally impermissible.

From these three premises, it follows that if the measure theory of mental content is true, then there can be no rational requirement against probabilistic binary beliefs, or precise credences, or imprecise credences. I've argued that the measure theory is committed to the view that the difference between these models boils down to where they encode subjective and objective probabilities: as figuring into the measurement that receives the title "content" or as figuring into other measurements of the attitude. This conclusion, if correct, has normative implications. I claim the following: whether an agent is rational does not depend on the specific system of measurement that the assessor uses to characterize the agent's doxastic states.

Objection. What you've argued is compatible with the claim that precise or imprecise credences can be rationally impermissible on the measure theory. It's just that this impermissibility is relative to a model or measurement system. For example, relative to the imprecise credence model, it's rationally impermissible to have a precise credence in heads in the Mystery Coin case. Moreover, all rationality and irrationality attributions have determinate truth conditions partly in virtue of something other than the psychological properties of the attributee. The conventions adopted by the attributor affect whether the attribution is true, false, or neither. The attribution will have a specific language, and a specific measurement system. Insofar as any mental states in any circumstances are irrational, they are irrational only relative to a measurement system.

Reply. Whether an agent has precise credences, imprecise credences, or probabilistic binary beliefs depends partly on the assessor's choice of measurement system. Whether the agent is rational does not.41

A somewhat more committal version of this reply will pair the measure theory of mental content with evidentialism.42 On this view, whether an agent is epistemically rational is a function of her doxastic states and her evidence. Whether she is pragmatically rational is a function of her doxastic and orectic states and her evidence. All three factors are entirely independent of the theorist's choice of representational conventions. And so whether the agent is rational or irrational is entirely independent of the theorist's representational conventions.

41 It’s consistent with this thesis that rationality attributions are context- or assessmentsensitive in some other way: for example, it may be sensitive to norms of rationality. 42 On my interpretation, evidentialism of this sort has been endorsed by major figures on different sides of the debate: for example, Joyce (2010), who argues that precise credences are sometimes rationally impermissible, and Elga (2010), who argues that imprecise credences are always rationally impermissible.


If the measure theory is correct, then whether an agent is correctly represented as having precise credences, imprecise credences, or binary beliefs about subjective probability is a matter of the assessor's system of measurement. So, I conclude, it cannot be a requirement of rationality that an agent have imprecise credences, or precise credences, or binary beliefs about subjective probability. It is no rational failing of the agent that some theorist uses one measurement system or another to represent her mental states.

The main argument of this chapter is intentionally left with a conditional conclusion. The measure theory does face significant challenges. It's not clear, for example, that the measure theory is consistent with every theory of propositions or contents43 or with every theory of the functional role of doxastic states. Nevertheless, given how perennially attractive the view has proven to be, it's worth exploring its consequences.

The theory of intentionality and the theory of rational belief are typically treated as orthogonal. It's clear that we have contentful mental states. The question in philosophy of mind of how exactly doxastic states get their content seems independent of the epistemological question of which doxastic states are rational under which circumstances. But some theories of intentionality have significant consequences in epistemology. If normative theories in epistemology are stated in terms of requirements for types of psychological state, we had better have a clear sense of what these psychological states amount to, how they differ from impermissible alternatives, and whether the differences are mere artifacts of our systems for representing the mind.44

43 In particular, there may be tensions between the measure theory and possible worlds propositions.

44 Many thanks to Seamus Bradley, Annalisa Coliva, Samuel Fletcher, Rachel Goodman, Hanti Lin, Daniel Malinsky, Conor Mayo-Wilson, Julia Staffel, Michael Titelbaum, Greg Wheeler, Robbie Williams, and audiences at the Centre for Metaphysics and Mind at the University of Leeds (2015), the European Philosophy of Science Association (2015), and the Pacific APA (2017). Special thanks to Jonathan Livengood and Eric Pacuit. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 312938.

References

Carr, Jennifer (2013). "Imprecise Evidence without Imprecise Credences." (manuscript).
Churchland, Paul M. (1979). Scientific Realism and the Plasticity of Mind. Cambridge: Cambridge University Press.
Davidson, Donald (1973). "Radical Interpretation." Dialectica, 27(1): pp. 314–28.
Davidson, Donald (1989). "What is Present to the Mind?" In Grazer Philosophische Studien. Amsterdam: Rodopi, pp. 197–213.
de Finetti, Bruno (1937). "La Prévision: Ses Lois Logiques, Ses Sources Subjectives." Annales de l'Institut Henri Poincaré, 17: pp. 1–68.
Dennett, Daniel C. (1987). The Intentional Stance. Cambridge, MA: MIT Press.



Dretske, Fred (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.
Dretske, Fred (1988). Explaining Behavior: Reasons in a World of Causes. Cambridge, MA: MIT Press.
Egan, Andy (2007). "Epistemic Modals, Relativism and Assertion." Philosophical Studies, 133(1): pp. 1–22.
Elga, Adam (2010). "Subjective Probabilities Should Be Sharp." Philosophers' Imprint, 10(05).
Field, Hartry (1980). Science Without Numbers. Princeton, NJ: Princeton University Press.
Fine, T. L. (1973). Theories of Probability. New York: Academic Press.
Fitelson, Branden (2016). "Coherence." (manuscript).
Fodor, Jerry A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press.
Frege, Gottlob (1892). "Über Sinn und Bedeutung." Zeitschrift für Philosophie und philosophische Kritik, 100: pp. 25–50.
Gibbard, Allan (1990). Wise Choices, Apt Feelings: A Theory of Normative Judgment. Cambridge, MA: Harvard University Press.
Gibbard, Allan (2003). Thinking How to Live. Cambridge, MA: Harvard University Press.
Hájek, Alan and Michael Smithson (2012). "Rationality and Indeterminate Probabilities." Synthèse, 187(1): pp. 33–48.
Harman, Gilbert (1986). Change in View. Cambridge, MA: MIT Press.
Harman, Gilbert and Judith Jarvis Thomson (1996). Moral Relativism and Moral Objectivity. Oxford: Blackwell.
Holton, Richard (2014). "Intention as a Model for Belief." In M. Vargas and G. Yaffe (eds), Rational and Social Agency: Essays on the Philosophy of Michael Bratman. Oxford: Oxford University Press, pp. 12–37.
Joyce, James M. (2010). "A Defense of Imprecise Credences in Inference and Decision Making." Philosophical Perspectives, 24(1): pp. 281–323.
Keynes, John Maynard (1921). A Treatise on Probability. New York: Dover Publications.
Koopman, B. O. (1940). "The Axioms and Algebra of Intuitive Probability." Annals of Mathematics, 2(41): pp. 269–92.
Krantz, D. H., Suppes, P., Luce, R. D., and Tversky, A. (1971). Foundations of Measurement, Volume I. New York: Academic Press.
Kriegel, Uriah (2013). Phenomenal Intentionality. Oxford: Oxford University Press.
Kyburg, Henry (1983). Epistemology and Inference. Minneapolis, MN: University of Minnesota Press.
Levi, Isaac (1974). "On Indeterminate Probabilities." Journal of Philosophy, 71(13): pp. 391–418.
Lewis, David (1974). "Radical Interpretation." Synthèse, 23: pp. 331–4.
MacFarlane, John (2014). Assessment Sensitivity: Relative Truth and its Applications. Oxford: Oxford University Press.
Matthews, Robert J. (1994). "The Measure of Mind." Mind, 103(410): pp. 131–46.
Millikan, Ruth G. (1989a). "Biosemantics." Journal of Philosophy, 86(July): pp. 281–97.
Millikan, Ruth G. (1989b). "In Defense of Proper Functions." Philosophy of Science, 56(June): pp. 288–302.
Moss, Sarah (2015). "Credal Dilemmas." Noûs, 48(3): pp. 665–83.


Pedersen, Arthur Paul and Gregory Wheeler (2014). "Demystifying Dilation." Erkenntnis, 79(6): pp. 1305–42.
Russell, Bertrand (1903). Principles of Mathematics. Cambridge: Cambridge University Press.
Savage, Leonard (1954). The Foundations of Statistics. New York: John Wiley and Sons.
Savage, Leonard (1972). The Foundations of Statistics, 2nd revised edition (first published 1954). New York: Dover.
Schroeder, Mark (2009). "Being For: Evaluating the Semantic Program of Expressivism." Analysis, 70(1): pp. 101–4.
Seidenfeld, Theodore (2004). "A Contrast Between Two Decision Rules for Use with (Convex) Sets of Probabilities: Γ-Maximin Versus E-Admissibility." Synthèse, 140: pp. 69–88.
Sepielli, Andrew (2012). "Subjective Normativity and Action Guidance." In M. Timmons (ed.), Oxford Studies in Normative Ethics, Vol. II. Oxford: Oxford University Press, pp. 45–73.
Skyrms, Bryan (1977). "Resiliency, Propensities, and Causal Necessity." Journal of Philosophy, 74: pp. 704–13.
Soames, Scott (2014). Propositions as Cognitive Event Types. Oxford: Oxford University Press.
Stalnaker, Robert (1984). Inquiry. Cambridge, MA: MIT Press.
Stampe, Dennis W. (1977). "Towards a Causal Theory of Linguistic Representation." Midwest Studies in Philosophy, 2(1): pp. 42–63.
Sturgeon, Scott (2008). "Reason and the Grain of Belief." Noûs, 42(1): pp. 139–65.
Suppes, Patrick and Zinnes, Joseph L. (1963). "Basic Measurement Theory." In R. D. Luce, R. R. Bush, and E. Galanter (eds), Handbook of Mathematical Psychology, Vol. 1. New York: Wiley, pp. 1–76.
Tye, Michael (2000). Consciousness, Color, and Content. Cambridge, MA: MIT Press.
Walley, Peter (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman & Hall.
White, Roger (2009). "Evidential Symmetry and Mushy Credence." In Tamar Szabo Gendler and John Hawthorne (eds), Oxford Studies in Epistemology. New York: Oxford University Press, pp. 161–86.
Williams, J. Robert G. (manuscript). "Reductivist Theories of Intentionality and the Representation of Options."
Williamson, Timothy (2009). "The Philosophy of Philosophy." Analysis, 69(1): pp. 99–100.


3. Modal Empiricism: What is the Problem?

Albert Casullo

In his introduction to the Critique, Kant offers an argument for the existence of a priori knowledge that is striking in its simplicity. He contends that necessity is a criterion of the a priori—that is, that all knowledge of necessary propositions is a priori. This contention, together with two others that Kant took to be evident—we know some mathematical propositions and such propositions are necessary—leads directly to the conclusion that some knowledge is a priori. The burden of Kant's argument falls on his contention that necessity is a criterion of the a priori and, hence, the support that his argument offers for the existence of a priori knowledge is only as strong as his supporting argument for that claim. Kant (1965, p. 43) supports his contention with the terse remark: "Experience teaches us that a thing is so and so, but not that it cannot be otherwise." Kant's remark has exerted considerable influence on the tradition. For example, William Whewell (1840, pp. 59–61) maintains that

Experience cannot offer the smallest ground for the necessity of a proposition. She can observe and record what has happened; but she cannot find, in any case, or in any accumulation of cases, any reason for what must happen. . . .

Over one hundred years later, Roderick Chisholm (1966, pp. 74–75) quotes the passage above from Whewell and maintains that

Thus, Kant said that necessity is a mark, or criterion, of the a priori. If what we know is a necessary truth—if we may formulate it in a sentence prefixed by the model [sic] operator "necessarily," or "it is necessary that"—then our knowledge is not a posteriori.

The question we must address is: How strongly does Kant's observation, which is echoed by Whewell and Chisholm, support his criterion? In order to address this question, the following distinctions are necessary:

(A) S knows the truth value of p just in case S knows that p is true or S knows that p is false.

(B) S knows the general modal status of p just in case S knows that p is a necessary proposition (i.e., necessarily true or necessarily false) or S knows that p is a contingent proposition (i.e., contingently true or contingently false).


(C) S knows the specific modal status of p just in case S knows that p is necessarily true or S knows that p is necessarily false or S knows that p is contingently true or S knows that p is contingently false.

(A) and (B) are logically independent; one can know one but not the other. (C), however, is the conjunction of both (A) and (B); one cannot know (C) unless one knows both (A) and (B). With these distinctions in place, we can now see that Kant's contention can be read in three different ways:

(KA) If p is necessarily true and S knows that p then S knows a priori that p.

(KB) If p is necessarily true and S knows that p is a necessary proposition then S knows a priori that p is a necessary proposition.1

(KC) If p is necessarily true and S knows that p is necessarily true then S knows a priori that p is necessarily true.
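For readers who like the three readings regimented, here is one way to do it; the notation is mine, not Casullo's. Write $\Box p$ for "p is necessarily true," $K_S\varphi$ for "S knows that $\varphi$," and $K^{ap}_S\varphi$ for "S knows a priori that $\varphi$":

\begin{align*}
\text{(KA)} &\quad (\Box p \land K_S\,p) \rightarrow K^{ap}_S\,p\\
\text{(KB)} &\quad \bigl(\Box p \land K_S(\Box p \lor \Box\lnot p)\bigr) \rightarrow K^{ap}_S(\Box p \lor \Box\lnot p)\\
\text{(KC)} &\quad (\Box p \land K_S\,\Box p) \rightarrow K^{ap}_S\,\Box p
\end{align*}

So regimented, Kripke's lectern case discussed below tells against (KC) while leaving (KB) untouched.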

(KA) is open to two immediate objections. First, it is not supported by Kant's argument. Kant allows that experience can teach us that "a thing is so and so." Whewell grants that experience "can observe and record what has happened." Both appear to concede that experience can teach us what is the case or the truth value of propositions. What they deny is that experience can teach us, in Kant's words, that "it cannot be otherwise," or, in Whewell's words, that experience "can find any reason for what must happen." Second, Kripke's cases of necessary a posteriori truths provide compelling counterexamples.

It may appear that Kripke's cases are also counterexamples to both (KB) and (KC). But the appearances are deceiving. Recall Kripke's (1971, 153) discussion of the lectern case:

In other words, if P is the statement that the lectern is not made of ice, one knows by a priori philosophical analysis, some conditional of the form "if P, then necessarily P." If the table is not made of ice, it is necessarily not made of ice. On the other hand, then, we know by empirical investigation that P, the antecedent of the conditional, is true—that this table is not made of ice. We can conclude by modus ponens . . . that it is necessary that the table not be made of ice, and this conclusion is known a posteriori, since one of the premises on which it is based is a posteriori.

Kripke’s account makes explicit that knowledge of the specific modal status of a proposition involves both knowledge of its general modal status and knowledge of its truth value. Moreover, he maintains that, although knowledge of the latter is a posteriori, knowledge of the former is a priori. Hence, if Kripke is right, then his cases are counterexamples to only (KC). On the other hand,

1 I am assuming here that p is a truth-functionally simple statement. Truth-functionally compound statements require a more nuanced version of (KB). For further discussion of this issue, see Casullo (2003, sec. 7.4) and Strohminger and Yli-Vakkuri (2017).


although he endorses (KB), he offers no supporting argument. Here we face the central question of modal epistemology: Is there any reason to endorse (KB)?2

Although many contemporary philosophers endorse (KB), supporting arguments are hard to come by. Gordon Barnes (2007) provides one of the few examples. My purpose in this chapter is to articulate and examine his argument. I have two goals in doing so. The first is to uncover several significant gaps in the argument. The second is to show that it suffers from a common defect in rationalist arguments. If the argument were successful against empiricist accounts of modal knowledge, it would apply with equal force to extant rationalist accounts of such knowledge. Hence, the cost of refuting modal empiricism is modal scepticism.

1. Modal Empiricism Rejected

Barnes (2007, 497–8) begins by endorsing an explanationist account of knowledge:

(K) S knows that p if and only if (i) S believes that p, (ii) p is true and (iii) there is a good explanation of why it is no accident that S's belief is true.3

An explanation satisfies condition (iii) if and only if (a) it identifies the processes that form and sustain the belief in question, and (b) it shows why a belief that is formed and sustained in this way is likely to be true. He (2007, 498) goes on to offer an account of a priori knowledge that is intended to articulate the traditional idea that such knowledge is independent of sense experience:

(KAP) S knows a priori that p if and only if the correct explanation of the non-accidental truth of S's belief that p makes no reference to sense experience.4

Let us call an explanation of an item of knowledge that posits only sense experience plus commonly accepted forms of inference an empirical explanation. Barnes has two goals. The first is to establish that there is no good empirical explanation of knowledge that p is absolutely necessary. The second, building on the first, is to establish that we have some a priori knowledge.

2 For a more general discussion of the relationship between the a priori and the necessary, see Casullo (2010).

3 Here Barnes (2007, 497) tells us:

So if we think of knowledge as non-accidentally true belief, then in every case of knowledge there will be a good explanation of why it is no accident that the belief in question is true. A good explanation of the non-accidental truth of a belief will explain why a belief that is formed and sustained in this particular way is objectively likely to be true . . . When there exists such an explanation of the non-accidental truth of a belief that is formed and sustained in this particular way, then that belief qualifies as knowledge.

4 There are at least two problems with this conception of the a priori. First, he does not specify what counts as "sense experience". Second, it precludes merely enabling conditions based in sense experience from playing a role in the explanation of the non-accidental truth of a belief known a priori.


I begin by articulating the structure of his (2007, 498–9) arguments. The first goes as follows:

(1.1) All possible empirical explanations of the non-accidental truth of a belief in some absolute necessity fall into one of two categories: (a) those that do not involve an inference from sense experience, and (b) those that do involve such an inference.

(1.2) If the explanation does not involve an inference from sense experience, then there are two possibilities: (a1) the explanation is in terms of sense experience alone without any further cognitive processing, or (a2) the explanation posits some non-inferential cognitive process that essentially takes experience as input and produces a necessity-belief as output.

(1.3) If the explanation does involve an inference from sense experience, then, in order to explain the non-accidental truth of its output beliefs, the inference must be truth-preserving.

(1.4) Therefore, the inference must be a good inference of one of the following types: (b1) deductive, (b2) inductive, (b3) analogical, or (b4) inference to the best explanation.

(1.5) The six options—(a1), (a2), (b1)–(b4)—exhaust all possible explanations of the non-accidental truth of a belief in an absolute necessity.5

(1.6) None of the six options can explain the non-accidental truth of a belief in an absolute necessity.

(1.7) Therefore, there is no good empirical explanation of our knowledge of absolute necessity.

The second argument goes as follows:

(2.1) We have some knowledge of absolute necessity.

(2.2) There is no good empirical explanation of our knowledge of absolute necessity.

5 Both referees raise questions about testimony and modal knowledge. One asks whether Chisholm denies the possibility of a posteriori knowledge of modal propositions via testimony. The other asks whether Barnes overlooks this possibility. Chisholm (1977, 47) has reservations about the possibility of such knowledge on the grounds that one knows a proposition only if one accepts it. But when, for example, a person reads a logical text, finds a formula that expresses a certain logical principle and concludes that the formula is true, the person may not accept the logical principle but only "the contingent proposition to the effect that a certain formula in a book expresses a logical principle that is true." Barnes's position is more vexing. He (2007, 505) maintains that testimony is a source of non-inferential warrant and that "for testimony to warrant a belief that p without inference, the testimony itself must have the content that p." Since testimony can have the content that necessarily p, it follows that testimony can directly warrant beliefs in modal propositions. Hence, in order to sustain his claim that experience cannot directly warrant belief in modal propositions, it appears that Barnes must deny that all testimonial warrant is a posteriori. There is precedent for such a move in Burge (1993). Barnes, however, does not address the issue. Since testimony raises some special problems that require independent treatment, I set it aside for purposes of this chapter and attempt to show that Barnes's argument fails even apart from considerations about testimony. For a discussion of some of the special issues raised by testimony, see Casullo (2007).


(2.3) The only alternative is an explanation that posits some non-empirical knowledge from which we can derive our knowledge of necessity.

(2.4) Therefore, we have some a priori knowledge.

The bulk of the chapter is devoted to establishing premise (1.6). Hence, my initial focus will be on Barnes's supporting arguments for that key premise.

2. Modal Empiricism Defended

I begin by considering the two possibilities for a non-inferential empirical explanation: (a1) and (a2). According to (a1), the non-accidental truth of a belief in an absolute necessity is explained by sense experience alone. Here Barnes (2007, 500) argues that such explanation is possible only if "the content of our sense experience makes it objectively likely that some belief in necessity is true" but, since necessity is not among the contents of our sense experiences, "if we consider only the contents of our sense experiences, without any further cognitive processing, then the truth of a belief in necessity would have to be deemed accidental." According to (a2), the non-accidental truth of a belief in absolute necessity is explained by a non-inferential process that takes experience as input and produces true belief in necessity as output. Here Barnes (2007, 501) maintains: "Since necessity is not part of the content of any sense experience, the role of sense experience in this cognitive process cannot be essential to the resulting explanation of our knowledge of necessity." Barnes acknowledges that his argument against (a2) rests on the assumption that if necessity is not included in the representational content of sense experience, then no non-inferential process based on sense experience could generate knowledge of necessity. He (2007, 502) maintains, however, that this assumption is supported by the following true epistemic principle:

(EP) For any mental state M, if M does not have the representational content that p, then M cannot warrant the belief that p directly, without inference.

Barnes concludes that there can be no empirical explanation of the non-accidental truth of a belief in some absolute necessity that does not involve an inference from sense experience.

The first of Barnes's two options for providing an account of non-inferential knowledge of some absolute necessity is a non-starter. The distinguishing feature of (a1) is that it purports to explain the non-accidental truth of a belief in terms of the content of an experience alone without any appeal to the process that forms the belief. Such an explanation, however, cannot satisfy condition (iii) of (K) since an explanation satisfies that condition only if it identifies the processes that form and sustain the belief in question and explains why beliefs formed and sustained in that way are likely to be true. The fact that S has an experience with the representational content that p and forms the belief that p does not ensure that there is a non-accidental explanation of the truth of S's belief that p.


Suppose, for example, that S has an experience with the representational content that this object is red and forms the belief that this object is red via process M. Moreover, suppose that process M also produces the belief that this object is red when S has an experience with the representational content that this object is orange, or that this object is yellow, or that this object is purple. Here there is no explanation of the non-accidental truth of the belief that this object is red.

Option (a2) also faces a serious objection. (EP) is not supported by the general theory of knowledge that Barnes endorses. Warrant, according to Barnes (2007, 521, n. 11), is that feature which, when added to true belief, transforms it into knowledge. Now consider a belief-forming process that generates a true belief that p. According to the theory, all that must be added to a true belief to transform it into knowledge is that there be an explanation of the non-accidental truth of the belief that p. The theory in question does not require any particular type of input into the belief-forming process. Hence, unless Barnes can show that a particular type of input into a belief-forming process is essential to providing an explanation of the non-accidental truth of its output beliefs, he cannot appeal to (EP) to defend the assumption that if necessity is not included in the representational content of sense experience, then no non-inferential process based on sense experience could generate knowledge of necessity.6

I now turn to the four options that involve an inference from sense experience. The first three are quickly dismissed. With respect to deductive inference, Barnes (2007, 506) argues: "Since experience alone cannot explain our knowledge of necessity, it is hard to see how a deductive inference from experience alone could fare any better."7 He (2007, 509) maintains that enumerative induction involves a hasty generalization from a single observed world to an indefinite number of unobserved worlds.

6 One might balk at my argument on the grounds that although the general theory of knowledge that Barnes endorses does not require any particular input into knowledge-producing processes, his account of a posteriori knowledge requires that sense experience play an essential role in explaining the non-accidental truth of a posteriori warranted true beliefs. Hence, a posteriori warrant does require sense experience as input. One must be careful here. Suppose that some empiricist maintains that there is a belief-forming process that takes perception as input and produces true beliefs in necessity as outputs. Such a theorist need not be committed to the thesis that perceptual input into a process requires some mental state with representational content.

7 Barnes (2007, 506–8) considers and rejects two other options: (a) a trivially valid argument whose premises are about sense experience and whose conclusion is a necessary truth, and (b) the view that we perceive some identities directly through the senses and derive the necessity of those identities by a non-trivially valid deductive inference. A referee astutely notes that the empiricist has some options that Barnes overlooks: inferring necessarily actually p from an empirically justified belief that p, and inferring that necessarily a is not b from an empirically justified belief that a and b are numerically diverse. I suspect that Barnes would respond that each inference is mediated by a rule of inference, that belief in the conclusion is justified only if belief in the rule of inference is justified, but that such justification cannot be empirical. This response takes a controversial stance on the requirements of inferential justification which requires independent treatment.


In the case of analogical argument, he (2007, 510–11) argues that since "an analogical inference requires that we begin with a case that we know has the very property that we seek to project onto other cases," it cannot explain the origin of knowledge of necessity. The focus of Barnes's discussion is on inference to the best explanation and, in particular, on the view that he (2007, 511) calls modal explanationism:

(ME) Positing a necessity sometimes provides the best explanation of some fact that we know through sense experience, and thus our belief in such a necessity, if true, is non-accidentally true.

Here Barnes considers four models of explanation: deductive-nomological, pragmatic, causal, and unification.8 He concludes that the best model for modal explanationism is Kitcher's (1989) version of the unification model, which he (2007, 516) summarizes as follows:

According to Kitcher, the best explanation of some phenomenon is the one that belongs to [the] best systematization of our total set of beliefs. The best systematization of our total set of beliefs is the set of arguments which derives all and only beliefs that are acceptable, relative to our total set of beliefs, while simultaneously instantiating the fewest general argument patterns.

Barnes (2007, 516–17) goes on to offer two examples of how positing necessities might contribute to the best explanation of our total set of beliefs:

First of all, identifying the terms of an observed correlation might systematize our total set of beliefs better than positing a brute law of nature. Assuming that identities are absolutely necessary, such an identity brings in its train a belief in necessity . . . Second, deriving true counterfactual conditionals from general claims of absolute necessity would systematize our acceptance of those counterfactual conditionals, rather than leaving them brute and unexplained.

Hence, modal explanationism appears to provide the basis for an empiricist account of knowledge of absolute necessities. Barnes, however, offers two arguments against the account. These two arguments form the core of his argument against modal explanationism and, consequently, merit careful scrutiny. The first argument is straightforward. It is based on the assumption that absolute necessity entails nomological necessity but not vice versa. Barnes (2007, 517) maintains that

8 Barnes rejects the first three models for different reasons. The deductive-nomological model is open to counterexamples. With respect to the pragmatic model, he (2007, 513) maintains "that the very idea of inference to the best explanation is that satisfying the goals of explanation is truth-conducive," but there is no reason to think that its merely satisfying our curiosity makes a belief objectively likely to be true. Against the causal model, Barnes (2007, 515) contends that, in order to use it to construct an account of knowledge of modality, it must be supplemented with "an account of how we could come to know that a necessity is the cause of some event." Since we do not directly observe necessities via the senses, the only way we could come to know this is to infer that the best explanation of some event is that it was caused by some necessity. Here Barnes maintains that the real epistemological work is being done by the fourth account.


If this assumption is correct, then for any systematization of our beliefs that posits absolute necessities, there will be another systematization of our beliefs that posits merely nomological necessities to do the same explanatory work. Moreover, these two systematizations of our beliefs could unify our beliefs equally well. Thus, there is no good reason to posit absolute necessities, rather than merely nomological necessities.

Hence, according to Barnes, modal explanationism can explain at most our knowledge of nomological necessity but not our knowledge of absolute necessity.

Barnes's first argument fails since it assumes that if positing an absolute necessity systematizes our total set of beliefs, then positing a nomological necessity entailed by that absolute necessity systematizes our total set of beliefs equally well. This assumption is false. Moreover, the two examples offered by Barnes of absolute necessities systematizing our total set of beliefs show why. Suppose that there are laws of nature correlating water with the molecular structure H2O and correlating gold with the atomic number 79. Identifying water with H2O or gold with the element having atomic number 79 better systematizes our beliefs than positing brute laws of nature since the identities explain the laws of nature. Since brute laws of nature are nomological necessities, positing nomological necessities to explain them provides only a trivial explanation. There is a similar situation in the case of deriving true counterfactuals from general claims of absolute necessity. Presumably, brute laws of nature support brute counterfactuals. Since there are no other laws of nature that explain the brute laws of nature, the counterfactuals supported by such laws cannot be explained by appeal to other nomological necessities. They can be explained, however, by positing absolute necessities.

The second argument (Barnes 2007, 517–19) is more complex:

(3.1) The explanandum of every explanation has a contrasting alternative.

(3.2) We feel the need for an explanation of some explanandum only if its alternative appears to us to be possible.9

(3.3) If it appears to us to be possible that not-p then it is rational for us to believe that it is possible that not-p.

(3.4) Therefore, if we feel that we need an explanation for p then it is rational to believe that it is possible that not-p.

(3.5) When we posit an absolute necessity to explain p, we commit ourselves to the absolute necessity of p.

9 Barnes (2007, 517–18) begins with two claims and an explanation of the second:

Explanation is contrastive, which is to say that what we explain is why it is the case that p, rather than that q, for some q. In other words, the explanandum of every explanation has a contrasting alternative . . . Moreover, and more importantly, we feel the need for an explanation of why the explanandum holds, rather than the alternative, only when we can at least conceive of the alternative obtaining . . . When I say that we can conceive of the explanandum failing to obtain, I mean that at the time at which we seek an explanation it appears to us to be possible that the explanandum fail to obtain.


(3.6) Consequently, when we explain p by positing an absolute necessity, we contradict our rational belief that it is possible that not-p.

(3.7) Any explanation that posits an absolute necessity to explain some phenomenon contradicts some rational belief that we hold.

(3.8) Therefore, such an explanation does not unify our total system of beliefs better than the set of beliefs that denies this necessity.

Since the unification model of explanation fails to warrant the positing of absolute necessities, Barnes concludes that modal explanationism cannot provide an account of knowledge of absolute necessities. The argument faces (at least) three serious objections.10 First, premise (3.2) is not endorsed by Kitcher, and Barnes does not defend it. Moreover, it is questionable. Consider elementary mathematical propositions, such as that 1 + 1 = 2. If there are any propositions whose alternatives do not appear to us to be possible, they are strong candidates. Russell (1919, 2), however, distinguishes between the epistemological order and the logical order:

10 Here are two others. Consider the initial premise of the argument. Kitcher’s version of the unification theory embraces van Fraassen’s account of the pragmatics of explanation. Hence, he maintains that explanation is contrastive. Barnes rejects the pragmatic model of explanation on the grounds that there is no reason to think that explanations meeting the goal of satisfying curiosity are likely to be true. Kitcher’s version of the unification theory, however, is open to a variant of that objection: there is no reason to believe that meeting the goal of unifying our beliefs is truth-conducive. Consider Kitcher’s (1989, 497) conception of a true statement and a correct explanation:

Conceive of science as a sequence of practices, each of which is distinguished by a language, a body of belief, and a store of explanatory derivations. Imagine the sequence extending indefinitely forward into the future, and suppose that its development is guided by principles of rational transition, including the principles about unification outlined in the previous section . . . [T]rue statements are those that belong to the belief corpus of scientific practice in the limit of its development under principles of rational transition. Finally, . . . correct explanations are those derivations that appear in the explanatory store in the limit of the rational development of scientific practice.

Clearly, both the conception of a true statement and the conception of a correct explanation are epistemic. They are defined in terms of the beliefs and explanations that belong to some idealized rational development of scientific practices. Hence, Kitcher's version of the unification theory offers a reason for thinking that those beliefs that emerge at the limit of idealized rational development of our scientific practices are likely to be true only if one embraces his anti-realist conception of truth. Hence, unless Barnes is willing to embrace Kitcher's anti-realism, he should reject Kitcher's version of the unification model. But, if he does so, his leading premise is unsupported.

The second is that Barnes's final conclusion depends on a misconception regarding Kitcher's version of the unification model. Barnes maintains that, according to the model, a good explanation unifies our total set of beliefs. This assumption is crucial to his overall argument since (3.7) presupposes that the system of beliefs being unified includes the subject's belief that it is possible that not-p. But Kitcher's account does not support that assumption. He (1989, 497) maintains that a good explanation unifies the set of statements endorsed by scientific practice. But that set of statements does not contain statements about the beliefs of particular individuals.


The most obvious and easy things in mathematics are not those that come logically at the beginning; they are things that, from the point of view of logical deduction, come somewhere in the middle.

Nevertheless, he (1919, 1) maintains that

instead of asking what can be defined and deduced from what is assumed to begin with, we ask instead what more general ideas and principles can be found, in terms of which what was our starting-point can be defined or deduced.

The goal of identifying more general ideas and principles in terms of which such propositions can be derived is not epistemological.11 What is the goal? Whitehead and Russell (1962, 1) offer three, the first of which is "[at] effecting the greatest possible analysis of the ideas with which it deals and of the processes by which it conducts demonstrations, and at diminishing to the utmost the number of the undefined ideas and undemonstrated propositions . . . from which it starts." This goal echoes Barnes's characterization of the goal of the unification model of explanation and, as a consequence, supports the contention that the axioms and definitions proposed in Principia explain the more obvious elementary mathematical propositions that follow from them.

The second objection centers on premise (3.3). Modal explanationism is an empiricist account of knowledge of modality. Although Barnes focuses exclusively on its account of knowledge of necessity, it also offers an account of knowledge of possibility. Presumably, that account, like its account of knowledge of necessity, appeals exclusively to explanatory considerations. Premise (3.3) is a modal epistemic principle: one that provides sufficient conditions for the rationality of modal beliefs. But it is not an empiricist principle since the modal appearances to which it appeals are not sense experiences. Since (3.3) is a rationalist modal epistemic principle, invoking it in an argument against modal empiricism is question-begging. Moreover, (3.3) is clearly false since even if the appearance of possibility is a reason to believe that p is possible, it is at best a prima facie reason and, in the presence of defeaters, does not make it rational to believe that p is possible. For example, suppose that the modal explanationist is seeking an explanation for the fact that all and only water samples are composed of H2O. Even if it were true that it appears possible to us that some water samples are not composed of H2O, it does not follow that it is rational for us to believe that it is possible that some water samples are not composed of H2O. Since we have absorbed Kripke's lessons, we have good reason to believe that this modal appearance is deceptive.

11 Whitehead and Russell (1962, iv) are more explicit on this point:

In mathematics, the greatest degree of self-evidence is usually not to be found quite at the beginning, but at some later point; hence the early deductions, until they reach this point, give reasons rather for believing the premises because true consequences follow from them, than for believing the consequences because they follow from the premises.


The third objection focuses on the transition from (3.7) to (3.8). In defense of that transition, Barnes (2007, 518–19, italics in the original) argues:

The point is that an explanation that posits a necessity in order to explain some phenomenon loses as much overall systematization as it gains, since every such explanation contradicts a belief that is rational for us at the time at which we seek the explanation.

The argument overlooks an important feature of inference to the best explanation. The conclusion of such an inference can be a defeater for other justified beliefs in one’s system of beliefs. Consider a very oversimplified example. Let p = the sun revolves around the (stable) earth. The belief that p was justified for early observers of the sky by their visual experiences. Later observers posited that the earth revolves around the (stable) sun and the posit was justified by the fact that it offered a more systematic explanation of their observational data. It also provided a defeater for the justification conferred by visual experience on the belief that p. It provided a defeater because it explained why those visual experiences were unreliable indicators of the motion of the sun. If (3.7) were true, Ptolemaic astronomers would have been in a position to cogently reject the posit that the earth revolves around the (stable) sun by pointing out that it contradicted a rational belief held by many and, consequently, did not unify our total set of beliefs any better than the Ptolemaic theory. This argument, however, is not cogent because it overlooks the role of defeating evidence. Returning to Barnes’s argument, the transition from (3.7) to (3.8) is not valid. The fact that an explanation contradicts some rational belief that we hold need not result in a loss of systematization since that explanation could also defeat the justification that one has for the belief in question, with the consequence that the belief is no longer a member of the body of beliefs that we rationally accept.

3. First Sceptical Consequence: Nomological Necessities

So far I have raised questions about the details of Barnes's argument against modal explanationism. There is, however, a more general problem with his overall argumentative strategy. It has two significant sceptical consequences. Barnes (2007, 519) recognizes the first:

The strongest objection to this argument is that there is a parallel argument concerning causal and nomological necessities, yet if causal and nomological necessities are known, then surely they are known empirically.

Here he (2007, 520) responds:

To say that it is nomologically necessary that all F's are G is to say that in relevantly similar counterfactual situations if an F occurs, then it will be G. We can observe a sample of F's in the actual world, and we can see that they are all G. Then if we limit ourselves to relevantly similar F's in other, relevantly similar possible worlds, we can justify an inductive inference to the claim that all of these F's are also G.


The argument is opaque. Consider a true accidental generalization:

(AG) All As are G,

and a true law of nature:

(LN) All Ls are N.

Presumably, we can justify an inductive inference from observed As and Ls to, respectively, (AG) and (LN). The crucial question, however, is how we justify the inference from (LN) to

(LN*) It is nomologically necessary that all Ls are N.

Barnes’s response consists of two claims. First, (LN*) is equivalent to (LN+)

In relevantly similar counterfactual situations, if an L occurs, then it will be N.

Second, the argument from (LN) to (LN+) is “some sort of strong, nondeductive argument.” But this response is not sufficient to explain our knowledge of nomological necessities such as (LN*). If it were, one could employ the very same response to show that there is a strong non-deductive argument from (AG) to (AG*)

It is nomologically necessary that all As are G.

But an account of our knowledge of nomological necessities such as (LN*) must specify an inductive method that allows us to move from (LN) to (LN*) but prohibits us from moving from (AG) to (AG*).
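Put schematically (the notation here is mine, not Casullo’s; I use $\Box_{\mathrm{nom}}$ as shorthand for “it is nomologically necessary that”), the demanded method must validate the first transition below while blocking the second, even though the two transitions share exactly the same logical form:

$$ \forall x\,(Lx \to Nx) \;\leadsto\; \Box_{\mathrm{nom}}\,\forall x\,(Lx \to Nx) \qquad \text{licensed: (LN) to (LN*)} $$
$$ \forall x\,(Ax \to Gx) \;\leadsto\; \Box_{\mathrm{nom}}\,\forall x\,(Ax \to Gx) \qquad \text{prohibited: (AG) to (AG*)} $$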

4. Second Sceptical Consequence: Modal Rationalism

The second sceptical consequence, which Barnes does not recognize, is more general and more significant. Let us call the following view modal empiricism: there is a good empiricist explanation of our knowledge of absolute necessity. Suppose that we concede that all the premises of Barnes’s initial argument are plausible and that his argument is valid. It follows that modal empiricism is false. Let us call the following view modal rationalism: there is a good rationalist explanation of our knowledge of absolute necessity. The second general problem is that there is a parallel empiricist argument whose conclusion is that modal rationalism is false. Premises (1.1)–(1.3) of Barnes’s argument make reference to the empiricist’s account of the ultimate source of all knowledge of necessity: sense experience. Therefore, the three initial premises of a parallel argument against rationalism must make reference to the rationalist account of the ultimate source of knowledge of necessities. There is, however, no generally accepted rationalist account of such knowledge. Hence, for purposes of constructing the parallel argument, let us call the source of such knowledge rational experience.


The empiricist can now offer the following parallel version of Barnes’s argument:

(1.1*) All possible rationalist explanations of the non-accidental truth of a belief in some absolute necessity fall into one of two categories: (a) those that do not involve an inference from rational experience, and (b) those that do involve such an inference.

(1.2*) If the explanation does not involve an inference from rational experience, then there are two possibilities: (a1) the explanation is in terms of rational experience alone without any further cognitive processing, or (a2) the explanation posits some non-inferential cognitive process that essentially takes rational experience as input and produces a necessity-belief as output.

(1.3*) If the explanation does involve an inference from rational experience, then, in order to explain the non-accidental truth of its output beliefs, the inference must be truth-preserving.

(1.4) Therefore, the inference must be a good inference of one of the following types: (b1) deductive, (b2) inductive, (b3) analogical, or (b4) inference to the best explanation.

(1.5) The six options—(a1), (a2), (b1)–(b4)—exhaust all possible explanations of the non-accidental truth of a belief in an absolute necessity.

(1.6) None of the six options can explain the non-accidental truth of a belief in an absolute necessity.

(1.7*) Therefore, there is no good rationalist explanation of our knowledge of absolute necessity.

Armed with (1.7*), the empiricist can now offer a parallel version of Barnes’s second argument whose conclusion is that we have empirical knowledge of necessities:

(2.1) We have some knowledge of absolute necessity.

(2.2*) There is no good rationalist explanation of our knowledge of absolute necessity.

(2.3*) The only alternative is an explanation that posits some empirical knowledge from which we can derive our knowledge of necessity.

(2.4*) Therefore, we have some empirical knowledge of necessities.

Consequently, unless Barnes can show that (1) there is a rationalist response to the empiricist version of his argument and (2) this response is better than any empiricist response to his version of the argument, he has not shown that empiricism is any worse off than rationalism in providing an account of our knowledge of absolute necessities.

The prospects for a rationalist response to the empiricist version of his initial argument are quite limited. According to Barnes, a mental state can warrant directly—i.e., without inference—the belief that necessarily p only if it has the representational content that necessarily p. So the rationalist is faced with two options.


Either some rational experiences have the representational content that necessarily p or not. If not, then rational experience can warrant the belief that necessarily p only by inference. However, the objections that Barnes offers against sense experience inferentially warranting a belief that necessarily p apply with equal force to rationalism. If rational experience alone cannot explain our knowledge of necessity, then it is hard to see how a deductive inference from rational experience alone could do so. An enumerative inductive inference from premises warranted by rational experience about the character of the actual world to the way things are in all possible worlds is a hasty generalization. Analogical argument cannot explain the origin of knowledge of necessity since it requires that we begin with a case that we know has the property that we wish to project on other cases. Finally, let us call the following view rationalist modal explanationism: positing a necessity sometimes provides the best explanation of some fact that we know through rational experience. The two arguments that Barnes offers against empiricist modal explanationism apply with equal force to rationalist modal explanationism. First, for any systematization of our beliefs that posits absolute necessities, there will be another that posits only nomological necessities and does the same explanatory work.12 Second, any explanation that posits a necessity to explain some phenomenon loses as much overall systematization as it gains. So the only response open to the modal rationalist requires a defense of two claims: (R1) some rational experiences are mental states that have the representational content that necessarily p, and (R2) there is a good explanation of why a belief that necessarily p formed and sustained on the basis of such a rational experience is objectively likely to be true.

4.1. Modal Rationalism and (R1)

This result is disastrous for modal rationalism. The view that comes closest to meeting (R1) is traditional rationalism, which maintains that, via rational experience, we apprehend relations of inclusion and exclusion among properties and that such apprehensions warrant us in believing that certain propositions are necessarily true. Its recent proponents include Chisholm and BonJour.

12 One might suggest that this objection does not apply with equal force to rationalist modal explanationism on the grounds that (1) rational experience warrants belief in elementary mathematical propositions and (2) some other mathematical beliefs, such as the Peano Axioms, are warranted by the fact that they explain the elementary mathematical propositions. However, the response continues, the nomological necessity of the Peano Axioms does not explain the truth of the elementary mathematical propositions. Only their absolute necessity provides such an explanation. This response does not favor rationalist modal explanationism since the proponent of empiricist modal explanationism can offer the same response to that objection. Since Barnes allows that sense experience can warrant belief in the truth (as opposed to the necessity) of necessary propositions, the empiricist can maintain that (1) sense experience warrants belief in elementary mathematical propositions and (2) some other mathematical beliefs, such as the Peano Axioms, are warranted by the fact that they explain the elementary mathematical propositions.


Upon closer examination, however, their accounts do not satisfy the requirement that rational experiences have the representational content that necessarily p.

Chisholm offers the most explicit rationalist account of the process of acquiring non-inferential knowledge of necessary truths. He maintains that it begins with perceiving particular objects, for example red objects and blue objects, and, via a process of abstraction, coming to grasp the properties of being red and being blue. Chisholm (1977, 38) makes explicit the role of rational experience, or intuitive apprehension, in warranting belief in necessary propositions:

3. There is the intuitive apprehension of certain relations holding between properties—in the one case apprehension of the fact that being red excludes being blue, . . .
4. Once we have acquired this intuitive knowledge, then, ipso facto, we also know the truth of reason expressed by “Necessarily, everything is such that if it is red then it is not blue” . . .

There are two striking features of Chisholm’s account. The first is that the content of the intuitive apprehension is nonmodal: it has the content that one property stands in a certain relation to another. The second is that the key transition from knowledge of the nonmodal proposition that being red excludes being blue to knowledge of the modal proposition that necessarily, everything is such that if it is red then it is not blue is left unexplained.

BonJour’s account shares these two striking features. He (1998, 162) offers the following “intuitive picture” of the process of acquiring non-inferential knowledge of necessary truths: “A person apprehends or grasps, for example, the properties redness and greenness, and supposedly ‘sees’ on the basis of this apprehension that they cannot be jointly instantiated.” What is the content of the apprehension in question? Here he (1998, 162) maintains that the apprehending in question is “simply that involved in thought in general.” His leading idea is that, for the content of a thought to represent a property, the property that it represents must somehow be involved in that thought. BonJour (1998, 184–5) concludes that “If having a thought whose content is, for example, the claim that nothing can be red and green all over at the same time involves being in a mental state that instantiates a complex universal of which the universals redness and greenness are literal constituents, then at least much of the mystery surrounding my access to those universals and my ability to intuitively apprehend the relation of incompatibility between them is removed.” Hence, for BonJour, like Chisholm, the content of the intuitive apprehension involved in rational experience is nonmodal: by virtue of instantiating a complex universal of which the universals redness and greenness are constituents, one apprehends that the two universals stand in the relation of incompatibility to one another. Moreover, BonJour, like Chisholm, fails to explain how the nonmodal apprehension justifies belief in the modal proposition that nothing can be red and green all over at the same time.


Barnes’s argument also rules out rationalist accounts of knowledge of necessity that appeal to inconceivability. To see why, consider the following two principles:

(C) If p is conceivable, then p is possible.
(I) If p is inconceivable, then p is impossible.

(C) and (I) will not do as epistemic principles. As Bealer (2002, 75–6) notes:

Conceivability and inconceivability would not be suited to play their reputed evidential role in modal epistemology. That it is possible, or impossible, to conceive that p is itself a mere modal fact. But in order for someone to acquire evidence (reasons), something must actually happen: a datable psychological episode must occur . . . Modal facts do not occur.

So let us replace (C) and (I), respectively, with:

(C*) If S conceives that p, then S is prima facie justified in believing that p is possible; and
(I*) If S attempts but fails to conceive that p, then S is prima facie justified in believing that p is impossible.

An immediate problem with (C*) and (I*) is that different theorists use the terms ‘conceivable’ and ‘inconceivable’ to refer to different states or processes. Yablo (1993, 29) is sensitive to the problem and offers the following account of the terms:

Conceiving that p is a way of imagining that p; it is imagining that p by imagining a world of which p is held to be a true description. Thus p is conceivable for me if

(CON) I can imagine a world that I take to verify p.

Inconceivability is explained along similar lines:

(INC) I cannot imagine any world that I don’t take to falsify p.

Utilizing Yablo’s account, we can now articulate (C*) and (I*), respectively, as follows:

(C**) If S imagines a world that S takes to verify p, then S is prima facie justified in believing that p is possible; and
(I**) If S attempts to imagine a world that verifies p but, for every world that S imagines, S takes that world not to verify p, then S is prima facie justified in believing that p is impossible.

How do (C**) and (I**) square with modal rationalism? The primary question before us is whether a modal rationalist can offer an account of knowledge of necessity that is compatible with the empiricist version of Barnes’s argument. Since our primary question pertains to knowledge of necessity, (I**) is the relevant principle. (I**), however, runs afoul of Barnes’s requirements. On a conceivability-based account, beliefs about possibility are justified on the basis of instantiating a particular type of mental state. To find p conceivable is to be in a state that you take to verify p, and being in such a state justifies the belief that possibly p. Beliefs about impossibility or necessity, by contrast, are justified on the basis of failing to instantiate a particular type of mental state.


Finding p inconceivable is not being in a mental state that justifies the belief that p is impossible. Justified beliefs about impossibility and necessity are based on the failure to instantiate a mental state that you take to verify p. Since beliefs about impossibility are not justified on the basis of instantiating some mental state that has the representational content that p is impossible, conceivability-based accounts do not satisfy (R1).

Moreover, on a conceivability-based account, beliefs about necessity are inferentially justified. A single failure to instantiate a mental state that you take to verify p is not sufficient to justify the belief that p is impossible. Multiple attempts are necessary to ensure that one has not overlooked a world that verifies p. Perhaps background beliefs to the effect that the failure to imagine such worlds is not due to a cognitive or methodological deficiency are also necessary. The most appropriate model for the type of inferential justification involved appears to be inference to the best explanation, which is ruled out by the empiricist version of Barnes’s argument.

One might wonder whether conceivability-based accounts of knowledge of possibility are compatible with the empiricist version of Barnes’s argument. Since beliefs about possibility are directly justified on the basis of instantiating a particular type of mental state, according to the account, (C**) is not open to the objection faced by (I**). Although justified beliefs about possibility are based on a particular type of mental state, this is not sufficient to satisfy Barnes’s standard for non-inferential justification. In order to non-inferentially justify a belief that possibly p, a mental state must have the representational content that possibly p. Yablo (1993, 6, italics in the original) offers the following observations regarding the act of conceiving that p:

So, the truth conditions of an intentional state cannot be read off its content alone; . . . the state’s psychological mode or manner is also relevant. This is crucial because one thing I will be taking “conceivability involves the appearance of possibility” to mean is that the truth conditions of an act of conceiving that p include, not the condition that p, as in perception, but the condition that possibly p. From now on I will express this by saying that p’s possibility representatively appears to the conceiver.

The question before us is whether an act of conceiving p has the representational content that possibly p. The answer appears to be ‘no’, although unclear terminology presents an obstacle to offering a definitive answer. Note that Yablo explicitly distinguishes between the content of a state and its truth conditions. For Yablo, to say that p’s possibility representatively appears to the conceiver is to say something about the truth conditions of the act of conceiving p and not something about its content. On the other hand, Barnes slides freely between talking about the ‘content’ of sense experience and talking about the ‘representational content’ of sense experience. The two expressions are used interchangeably. Hence, even if Yablo is correct in claiming that p’s possibility representatively appears to the conceiver, it does not follow that p’s possibility is part of the content of the act of conceiving.


George Bealer maintains that intuitions are evidence. His defense takes place within the context of what he calls the “Standard Justificatory Procedure” (SJP): “the procedure we standardly use to justify our beliefs and theories” (1992, 100). He maintains that the SJP counts not only experience, observation, memory, and testimony as prima facie evidence, but also intuition. In support of this thesis, he (1992, 100) invites us to consider one of the familiar counterexamples to the justified true belief analysis of the concept of knowledge:

We find it intuitively obvious that there could be a situation like that described and in such a situation the person would not know that there is a sheep in the pasture despite having a justified true belief. This intuition . . . and other intuitions like it are our evidence that the traditional theory is mistaken.

Before turning to Bealer’s account of intuition, two points of clarification are necessary. First, Bealer’s description of the counterexample that provides our evidence against the justified true belief analysis of the concept of knowledge suggests that it involves only a single intuition. There are, however, two distinct types of intuition involved: (1) a modal intuition that the state of affairs described in the counterexample is possible and (2) a classificatory intuition that the state of affairs described in the counterexample is not a case of knowledge.13 Second, Bealer distinguishes specific concrete case intuitions and theoretical intuitions. Here he maintains that the former have greatest evidential weight; the latter have less. Bealer does not articulate the difference between the two types of intuition, but the contrast with specific concrete case intuitions suggests that theoretical intuitions are general. An example is the intuition that the naive comprehension axiom is true.

Traditional rationalists, such as BonJour and Chisholm, focus on general intuitions as the source of non-inferential knowledge of general principles. Bealer’s focus is on the role of concrete case intuitions in the distinctively philosophical project of conceptual analysis. Concrete case intuitions do not non-inferentially justify general principles about the application of some concept. Instead, they non-inferentially justify beliefs about the application of some concept to particular cases. Concrete case intuitions play two distinct roles in the justification of general principles: positive and negative. In Gettier cases, their role is negative. They provide evidence that the justified true belief analysis is false. My focus is on the positive role of intuitions in providing support for the truth of a particular analysis of some philosophical concept such as knowledge.

How do concrete case intuitions justify general principles? On the standard picture, one begins by considering specific cases, both actual and possible, and dividing them into three categories: clear cases of knowledge, clear cases of ignorance, and unclear or borderline cases. One then attempts to generalize from the verdicts about the specific cases to general principles.

13 Bealer (1998, 207) states that “in the Gettier example we have a rational intuition that the case is possible, and we have a rational intuition that the concept of knowledge would not apply to the person in the case.”


A general principle that parses the clear cases correctly is alleged to be supported by the fact that it yields the correct results regarding the clear cases. One that does not parse the clear cases correctly is alleged to be disconfirmed unless it can successfully explain away the initial classification of the cases that conflict with it. Perhaps additional confirming evidence comes either from explaining why the borderline cases are borderline or from providing a principled division of them into cases of knowledge and ignorance. The details are not important for our purposes. What is clear is that if the concrete case intuitions justify some general (necessary) principle then that justification is inferential. Moreover, the type of inference involved is an inference to the best explanation. But, as we have seen, the arguments that Barnes presents against empiricist modal explanationism apply with equal force to rationalist modal explanationism. Hence, if Barnes’s arguments against the former are correct, then modal rationalism must reject the standard view of conceptual analysis, according to which intuitions about concrete cases provide evidence for general (necessary) principles.

As was noted earlier, Bealer distinguishes between concrete case and theoretical intuitions. Although he maintains that the former have greatest evidential weight, he does not deny that the latter count as evidence. He defends the view that all intuitions are evidence. Moreover, when he introduces and explains his account of intuition, he (1998, 207) features examples involving general logical principles:

When you have an intuition that A, it seems to you that A. Here ‘seems’ is understood, not in its use as a cautionary or “hedging” term, but in its use as a term for a genuine kind of conscious episode. For example, when you first consider one of de Morgan’s laws, often it neither seems true nor seems false; after a moment’s reflection, however, something happens: it now just seems true. The view I will defend is that intuition (this type of seeming) is a sui generis, irreducible, natural . . . propositional attitude that occurs episodically.

Moreover, Bealer (1998, 207) maintains that there are both rational (or a priori) and physical intuitions, and that what is characteristic of rational intuitions is that they present themselves as necessary:

We have a physical intuition that, when a house is undermined, it will fall. This does not count as a rational intuition, for it does not present itself as necessary: it does not seem that a house undermined must fall; . . . By contrast, when we have a rational intuition—say, that if P then not not P—this presents itself as necessary: it does not seem to us that things could be otherwise; it must be that if P then not not P.

Hence, it appears that Bealer’s account offers the prospect of a rationalist explanation of modal knowledge that meets the requirements of Barnes’s argument. If necessity is constitutive of the content of rational intuitions and such intuitions directly justify necessary truths, then the major objection to empiricist explanations of modal knowledge appears to have been circumvented.


This approach to offering a rationalist explanation of modal knowledge faces several obstacles. First, there is an interpretive issue. Bealer does not explicitly endorse the view that, in the case of rational intuition, necessity is a constituent of the content of the intuition. In fact, he seems to deny this. For example, he (1998, 205, italics in the original) maintains:

When I say that intuitions are used as evidence, I of course mean the contents of the intuitions count as evidence . . . Consider an example. I am presently intuiting that if P then not not P. Accordingly, the content of this intuition—that if P then not not P—counts as a bit of my evidence; I may use this logical proposition as evidence (as a reason) for various other things.

This suggests that the presentation as necessary, which is characteristic of rational intuition, is more naturally viewed as constitutive of the attitude. On the other hand, in other remarks, Bealer (1998, 207) offers, but does not endorse, an analysis of rational intuition on which necessity is constitutive of the content rather than the attitude of such intuitions:

I am unsure how exactly to analyze what is meant by saying that a rational intuition presents itself as necessary. Perhaps something like this: necessarily, if x intuits that P, it seems to x that P and also that necessarily P. But I wish to take no stand on this.

Consequently, Bealer fails to articulate the characteristic feature of rational intuition.14

Given that Bealer fails to articulate the characteristic feature of rational intuition, it is difficult to assess whether his account meets the requirements of Barnes’s argument. There is, however, reason to be doubtful. Barnes’s challenge is to explain knowledge of propositions whose content is modal. Bealer’s account faces a problem in providing such an explanation. Consider, for example, a Gettier case. Such a case is a counterexample to the justified true belief analysis of the concept of knowledge only if it is possible. On Bealer’s account, an intuition of possibility provides evidence that such a case is possible. But, if the intuition is rational, then it presents itself as necessary. If the presentation as necessary is constitutive of the content of the intuition, then the intuition has the content that it is necessary that the Gettier case is possible. This does not appear to be an accurate description of the content of the intuition and it is not the way that Bealer describes it. A similar problem arises when we consider a priori justification for the belief that necessarily P.

14 A referee raises the following concern regarding Bealer’s view: If seemings are the only source of prima facie justification and the distinction between a priori and physical intuitions is drawn in terms of the contents of seemings, then the distinction between a priori and a posteriori justification does not appear to be epistemically significant. It is no more significant than the distinction between beliefs justified by seemings that concern colors and beliefs justified by seemings that concern shape. I am sympathetic to the concern and suggest that it provides Bealer with a strong consideration in favor of drawing the distinction between a priori and physical intuitions in terms of a difference in attitude rather than a difference in content. For a more detailed discussion of Bealer’s view, see Casullo (2012b).


In order for S’s belief that necessarily P to be justified a priori, S must have a rational intuition that necessarily P. Rational intuitions, however, present themselves as necessary. If the presentation as necessary is constitutive of the content of the intuition, then the intuition has the iterated modal content that it is necessary that necessarily P. Bealer, however, does not maintain either that we have such iterated modal intuitions or that they are necessary for basic a priori modal knowledge that necessarily P.

An analogous problem arises if one maintains that the presentation as necessary characteristic of rational intuition is constitutive of the attitude. Consider again some Gettier case. On Bealer’s account, an intuition of possibility provides evidence that such a case is possible. But, if the intuition is rational, then it presents itself as necessary. If the presentation as necessary is constitutive of the attitude then, if S has the rational intuition that the Gettier case is possible, it seems necessary to S that the Gettier case is possible. Once again, this does not appear to be an accurate description of the intuition and it is not the way that Bealer describes it. Similarly, in order for S’s belief that necessarily P to be justified a priori, S must have a rational intuition that necessarily P. If the presentation as necessary is constitutive of the attitude, then it must seem necessary to S that necessarily P. Bealer, however, does not maintain either that we have such intuitions or that they are necessary for basic a priori modal knowledge that necessarily P. Hence, it is doubtful that Bealer’s account meets the requirements of Barnes’s argument.15

15 Thanks to Margot Strohminger and Tim Williamson for pressing me to clarify this argument.

4.2. Modal Rationalism and (R2)

My discussion to this point has focused on (R1). I now turn to (R2), which requires that there be a good explanation of why a belief that necessarily p formed and sustained on the basis of such a rational experience is objectively likely to be true. Such an explanation, in turn, must satisfy two conditions: (a) it must identify the processes that form and sustain the belief in question, and (b) it must show why a belief formed and sustained in this way is likely to be true. My goal is to show that the two accounts of rational experience that come closest to satisfying (R1)—the traditional account, which takes such experiences to consist in the apprehension of features of abstract entities, and the more contemporary account, which takes them to be rational intuitions or seemings—fail to satisfy (R2).

Empiricist criticisms of rationalist accounts of a priori knowledge have focused on the traditional account. They frequently allege that the traditional rationalist accounts are “mysterious” or “obscure.” Upon closer examination, such criticisms can be seen as maintaining that the accounts fail to satisfy (R2).



Take, for example, Devitt’s (2005, 114) explanation of the obscurity charge:

What non-experiential link to reality could support insights into its necessary character? There is a high correlation between the logical facts of the world and our beliefs about those facts which can only be explained by supposing that there are connections between those beliefs and facts. If those connections are not via experience, they do indeed seem occult.

Devitt’s focus is on condition (a): identifying the process of rational experience. Field’s (1989, 25) focus is on condition (b):

the challenge . . . is to provide an account of the mechanisms that explain how our beliefs about these remote entities can so well reflect the facts about them. The idea is that if it appears in principle impossible to explain this, then that tends to undermine the belief in mathematical entities, despite whatever reason we might have for believing in them.

The challenge is to provide an explanation of why beliefs formed on the basis of rational experience are likely to be true.

BonJour (1998, 161) concedes that if rational experience requires a quasi-perceptual relation to abstract entities that is analogous to sense perception, then his account cannot address this challenge. Moreover, he also concedes that his intuitive characterization of rational experience in terms of apprehending properties and “seeing” on this basis that some propositions are true suggests the perceptual account. In response, BonJour rejects the analogy with sense experience and maintains that the apprehension of properties involved in rational insight is simply that involved in thought in general. Such an account, according to BonJour (1998, 185), removes the mystery surrounding access to universals since “there is no need to regard the apprehension of properties as a perceptual relation involving some mental analogue of vision that somehow reaches out to the Platonic realm.”

There are three shortcomings in BonJour’s account of rational experience, each of which is sufficient to prevent his account from satisfying (R2). The initial step in the process of rational experience is the apprehension of properties. It provides the input into the belief-forming processes whose output is belief in various necessary propositions. Although BonJour claims to have provided an alternative to the quasi-perceptual account of property apprehension, his alternative model falls short of that goal. Here it is important to distinguish between a thought about things that instantiate a property, such as green things, and a thought about the property itself, such as greenness. The first shortcoming is that what BonJour (1998, 184) offers is an account of the former rather than the latter: “The key claim of such a view would be that it is a necessary, quasi-logical fact that a thought instantiating a complex universal involving the universal triangularity in the appropriate way . . . is about triangular things.” In order to provide an account of our apprehension of properties, he must provide an account of the difference between thinking about things instantiating properties and thinking about the properties themselves. Second, satisfying condition (a) in (R2) requires identifying the belief-forming process involved in rational experience—i.e., the process that begins with the apprehension of the properties of redness and greenness and results in the belief that nothing can be both red and green all over at the same time.


BonJour has focused his attention exclusively on providing an account of the initial stage of the process—the apprehension of properties—but has said nothing about the process itself. He has not explained how the apprehension of properties provides insight into their intrinsic natures or relational properties. This is Devitt’s complaint. Third, in the absence of the characterization of the belief-forming process in question, it is impossible to determine whether it is reliable, let alone to explain why it is reliable. This is Field’s complaint.

Bealer’s account differs from the traditional rationalist account. Rational intuitions are seemings, and seemings do not involve the apprehension of abstract entities. Moreover, Bealer (1998, 218) maintains that he can provide an explanation of why it is no accident that beliefs based on rational intuitions are likely to be true. His explanation proceeds in three steps. At the ground level, he argues that intuitions are evidence by appeal to the fact that (a) the SJP sanctions them as a source of evidence and (b) empiricism cannot justify its departure from the SJP. The second step offers a modal reliabilist account of basic sources of evidence, where a source of evidence is basic if and only if its deliverances have an appropriate kind of strong modal tie to the truth. Bealer (1998, 217) maintains that sources of evidence are either basic or derived, where “something is a derived source of evidence relative to a given subject iff it is deemed (perhaps mistakenly) to have a reliable tie to the truth by the simplest comprehensive theory based on the subject’s basic sources of evidence.” Since, according to Bealer (1998, 217–18), empiricist explanations of the reliability of intuitions fail, it follows that intuitions are a basic source of evidence. Since intuitions are a basic source of evidence, they have a strong modal tie to the truth. Here a version of Field’s challenge surfaces. What explains this strong modal tie to the truth? The third step in the argument is an explanation in terms of a theory of concept possession. Bealer’s (1998, 225) basic idea is to distinguish between nominal and determinate concept possession, where (to a first approximation):

x determinately possesses a given concept iff, for associated test property-identities p: x would have intuitions which imply that p is true iff p is true.
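Spelled out in schematic notation (the symbolization below is my own gloss on Bealer’s “first approximation,” not his formalism; Test(C) stands for the test property-identities associated with C, and the left-hand side of the inner biconditional is to be read subjunctively, since Bealer’s formulation uses “would”):

$$ x \text{ determinately possesses } C \;\iff\; \forall p \in \mathrm{Test}(C):\ \big(\,x \text{ would intuit that } p \;\leftrightarrow\; p\,\big) $$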

Hence, determinate concept possession guarantees that intuitions with respect to test property-identities are truth-tracking.

There are three significant problems with Bealer’s explanation of the reliability of intuitions, each of which is sufficient to prevent his account from satisfying (R2). With respect to the first step, the order of justification is critical. Bealer does not conclude that intuitions are evidence by appeal to their reliability. He concludes that intuitions are reliable by appeal to their status as evidence, and defends their status as evidence by appeal to the SJP. The key premise of his argument is the contention that empiricism cannot justify a departure from the SJP that excludes intuition as evidence.


In defense of this contention, Bealer (1992, 114–18) maintains that the empiricist cannot explain how empiricism differs from views, such as visualism (the view that visual experience is the only source of evidence), that arbitrarily exclude sources of evidence admitted by the SJP. I (2012, 245–8) contend, however, that his argument proves too much. If it were correct, Bealer would face an analogous problem: he would not be able to explain how the SJP differs from views that arbitrarily introduce basic sources of evidence not admitted by the SJP, such as the pronouncements of a political authority. Hence, his argument for the reliability of intuition never gets off the ground.

Second, Bealer’s contention that the empiricist cannot offer an explanation of the reliability of intuitions rests on equating empiricism with Quine’s version of empiricism, according to which the simplest comprehensive theory based on the subject’s basic sources of evidence is a theory free of modals. Since such a theory would not deem there to be a reliable tie between modal intuitions and the truth, Bealer concludes that such intuitions are a basic source of evidence. Clearly, empiricism need not be committed to Quine’s version of empiricism, and most contemporary empiricists are not. So it remains an open question whether empiricists can offer an explanation of the reliability of modal intuitions.

Finally, Bealer’s appeal to a theory of determinate concept possession to explain the reliability of intuition raises questions. The most pressing is whether there is any independent reason for accepting it apart from the fact that it delivers the results that he needs.16 The problem is exacerbated by the fact that there are competing accounts of the possession conditions for concepts available and, as a consequence, the issue of choosing among them is not merely of theoretical interest. Perhaps Bealer can maintain that the conditions for possessing (or determinately possessing) the concept of concept underwrite his theory. This response has the unwelcome consequence that those who endorse different accounts of the possession conditions for concepts either fail to possess (or determinately possess) the concept of concept or have a different concept of concept.

16 There are at least two others. The first is whether any actual cognizer determinately possesses any concepts. The second is whether Bealer’s explanation actually explains the reliability of intuitions. Determinate possession of the concept C explains the reliability of one’s intuitions with respect to the concept C, according to Bealer, because it is constitutive of determinate possession of the concept C that one’s intuitions with respect to the application of concept C are reliable. This explanation strikes me as vacuous.

5. Conclusion

One of the most resilient arguments in favor of the existence of a priori knowledge derives from Kant’s contention that necessity is a criterion of the a priori. The most plausible reading of this contention maintains that if p is necessarily true and S knows that p is a necessary proposition, then S knows a priori that p is a necessary proposition.


Although this contention is widely endorsed, supporting arguments for it are difficult to come by. Barnes offers one of the few available arguments in support of it. I maintain that his argument fails. In section 3, I present three serious objections to his argument: one premise is questionable, another is question-begging, and its final transition is invalid. These objections constitute sufficient grounds for rejecting the argument.

One natural reaction to these objections is that they focus on matters of detail and that the argument can be revised to circumvent these problems. That reaction spells disaster for proponents of modal rationalism. For, as I go on to show in section 4, if Barnes’s argument is sound, then it has two significant sceptical consequences. First, it can be extended to show that empirical knowledge of nomological necessities is not possible. Second, a parallel version of the argument shows that there is no good rationalist explanation of knowledge of absolute necessities.

The arguments of section 4 complement and reinforce those of section 3. If one is tempted to view my objections to Barnes’s arguments as mere matters of detail, one should make the necessary changes to Barnes’s argument and ask if the revised version of the argument retains the sceptical consequences of the original. This test provides a check against a glib dismissal of the objections to the original argument. In fact, most, if not all, contemporary rationalists reject the two leading epistemic premises of Barnes’s argument: (a) his explanationist analysis of the concept of knowledge (and, a fortiori, his explanationist analysis of the concept of a priori knowledge), and (b) his contention that a mental state can warrant a belief that necessarily p only if it has the representational content that necessarily p.17 None of the proponents of rationalism surveyed in section 4 endorse either (a) or (b). Hence, they are not saddled with the sceptical consequences of his argument. This benefit, however, comes with a cost. They cannot coherently endorse his argument against modal empiricism.

There is a more general lesson worth noting. Barnes’s argument suffers from a characteristic defect of many rationalist arguments against empiricism: parallel versions of the arguments apply with equal force to rationalism. Hence, such arguments suffer from a form of self-defeat: if they succeed in showing that the target empiricist theory is untenable, they also show that versions of rationalism are untenable. Three prominent examples of such arguments are BonJour’s (1998) argument in support of the conclusion that radical empiricism leads to scepticism and his two arguments directed at Quinean radical empiricism.18 Hence, rationalist proponents of arguments against radical empiricism should always ask whether parallel versions of their arguments apply with equal force to rationalism.19

17 Jenkins (2008, 2010) may appear to be an exception. However, as I argue in Casullo (2012c), her accounts of arithmetical and modal knowledge are not a priori accounts.
18 For BonJour’s (1998) arguments, see sections 1.1 and 3.7. For parallel versions of his arguments, see Casullo (2000) and Casullo (2003, sections 4.6 and 4.7). For further discussion, see Thurow (2009) and Watson (2014).


19 Earlier versions of this chapter were presented at the International Workshop: Directions in the Epistemology of Modality, Stirling University, October 22–4, 2015; the Mountain-Plains Philosophy Conference, University of Colorado at Colorado Springs, October 6–8, 2016; the Workshop on Modal Knowledge, Bielefeld University, March 16–17, 2017; and the Conceivability and Modality International Conference, Sapienza University, Rome, June 19–20, 2017. Thanks to the audiences at these presentations and to two anonymous referees for Oxford Studies in Epistemology for their challenging questions.

References

Barnes, G. 2007. ‘Necessity and Apriority’. Philosophical Studies 132: 495–523.
Bealer, G. 1992. ‘The Incoherence of Empiricism’. Proceedings of the Aristotelian Society, supp. vol. 66: 99–138.
Bealer, G. 1998. ‘Intuition and the Autonomy of Philosophy’. In Rethinking Intuition, eds M. DePaul and W. Ramsey. Lanham, MD: Rowman & Littlefield, 201–39.
Bealer, G. 2002. ‘Modal Epistemology and the Rationalist Renaissance’. In Conceivability and Possibility, eds T. Gendler and J. Hawthorne. Oxford: Oxford University Press, 71–125.
BonJour, L. 1998. In Defense of Pure Reason. Cambridge: Cambridge University Press.
Burge, T. 1993. ‘Content Preservation’. Philosophical Review 102: 457–88.
Casullo, A. 2000. ‘The Coherence of Empiricism’. Pacific Philosophical Quarterly 81: 31–48.
Casullo, A. 2003. A Priori Justification. New York: Oxford University Press.
Casullo, A. 2007. ‘Testimony and A Priori Knowledge’. Episteme 4: 322–34. Reprinted in Casullo 2012a.
Casullo, A. 2010. ‘Knowledge and Modality’. Synthese 172: 341–59. Reprinted in Casullo 2012a.
Casullo, A. 2012a. Essays on A Priori Knowledge and Justification. New York: Oxford University Press.
Casullo, A. 2012b. ‘Intuitions, Thought Experiments, and the A Priori’. In Casullo 2012a.
Casullo, A. 2012c. ‘Articulating the A Priori—A Posteriori Distinction’. In Casullo 2012a.
Chisholm, R. M. 1966. Theory of Knowledge, 1st edn. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Chisholm, R. M. 1977. Theory of Knowledge, 2nd edn. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Devitt, M. 2005. ‘There is No A Priori’. In Contemporary Debates in Epistemology, eds M. Steup and E. Sosa. Malden, MA: Blackwell, 105–15.
Field, H. 1989. Realism, Mathematics and Modality. Oxford: Blackwell Publishers.
Jenkins, C. 2008. Grounding Concepts. Oxford: Oxford University Press.
Jenkins, C. 2010. ‘Concepts, Experience and Modal Knowledge’. Philosophical Perspectives 24: 255–79.



Kant, I. 1965. Critique of Pure Reason, trans. Norman Kemp Smith. New York: St Martin’s Press.
Kitcher, P. 1989. ‘Explanatory Unification and the Causal Structure of the World’. In Minnesota Studies in the Philosophy of Science, Scientific Explanation, eds P. Kitcher and W. Salmon. Minneapolis, MN: University of Minnesota Press.
Kripke, S. 1971. ‘Identity and Necessity’. In Identity and Individuation, ed. M. K. Munitz. New York: New York University Press.
Russell, B. 1919. Introduction to Mathematical Philosophy. London: George Allen and Unwin.
Strohminger, M. and Yli-Vakkuri, J. 2017. ‘The Epistemology of Modality’. Analysis. Advance article. doi: 10.1093/analys/anx058.
Thurow, J. 2009. ‘The A Priori Defended: A Defense of the Generality Argument’. Philosophical Studies 146: 273–89.
Watson, J. 2014. ‘Dilemma Arguments Against Naturalism’. Episteme 11: 229–43.
Whewell, W. 1840. Philosophy of the Inductive Sciences Founded upon Their History, I. London: J. W. Parker & Son.
Whitehead, A. N. and Russell, B. 1962. Principia Mathematica to *56. London: Cambridge University Press.
Yablo, S. 1993. ‘Is Conceivability a Guide to Possibility?’ Philosophy and Phenomenological Research 53: 1–42.


4. Accuracy and Educated Guesses
Sophie Horowitz

Belief, supposedly, “aims at the truth.” Whatever else this might mean, it’s at least clear that a belief has succeeded in this aim when it is true, and failed when it is false. That is, it’s obvious what a belief has to be like to get things right. But what about credences, or degrees of belief? Arguably, credences somehow aim at truth as well. They can be accurate or inaccurate, just like beliefs. But they can’t be true or false. So what makes credences more or less accurate? One of the central challenges to epistemologists who would like to think in degreed-belief terms is to provide an answer to this question.

A number of answers to this question have been discussed in the literature. Some argue that accuracy, for credences, is not a matter of credences’ relation to what’s true and false, but to frequencies or objective chances.1 Others are skeptical that there is any notion of accuracy that can be usefully applied to credences, and argue that we should instead assess them according to their practical efficacy.2 Yet another approach assesses accuracy using “scoring rules”—functions of the distance between credences and truth. According to this class of views, the closer your credence is to the truth (1 if the proposition is true, and 0 if it is false), the better it is, in a quite literal sense: scoring rules are understood as a special kind of utility function.3

This last approach—epistemic utility theory—has gained a significant amount of support in recent years. Part of its appeal is that it looks like a natural extension of a common-sense thought about accuracy: that it’s better for our doxastic states to be right than wrong, and that for credences, it’s better to be close to the truth than far away.4 It is also a powerful bit of machinery, which can be used to justify or vindicate quite strong formal constraints on rational credence. But the approach faces problems as well. Just saying that close is better than far does not do much to narrow down the possible ways of measuring accuracy. And when we do try to narrow things down, defending the use of one scoring rule over another, we move farther and farther from the common-sense understanding of accuracy that we started with. I won’t enter this debate in depth here. Instead, I will propose a new way to understand accuracy, which sidesteps these concerns. That is: we can evaluate credences’ accuracy by looking at the “educated guesses” that they license.

1 For discussion of these views, see Hájek (2011). (Van Fraassen and Lange are among the defenders of frequentism; Hájek prefers objective chance.)
2 See Gibbard (2008).
3 Supporters of this approach include Joyce, Greaves and Wallace, and Pettigrew, among others.
4 Joyce (2009) endorses this thought in the axiom he calls “Truth-Directedness.” Gibbard (2008) expresses the same idea in his “Condition T.”
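As a concrete point of reference for the scoring-rule approach discussed above (the illustration is mine; the chapter does not commit to any particular rule), the most familiar scoring rule in this literature is the Brier score, which measures the inaccuracy of a credence c in a proposition P as the squared distance from the truth value:

$$ \mathrm{Brier}(c, P) = \big(c - v(P)\big)^2, \qquad v(P) = \begin{cases} 1 & \text{if } P \text{ is true,} \\ 0 & \text{if } P \text{ is false.} \end{cases} $$

On this rule, a credence of 0.8 in a truth scores 0.04, while the same credence in a falsehood scores 0.64; lower scores mean greater accuracy.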


This framework is motivated by the thought that there is a straightforward way to assess credences’ accuracy according to their relation to the truth, rather than to our practical aims—and by the common-sense thought that credences are more accurate as they get closer to the truth.

Here is the plan for the rest of the chapter. In Section 1, I will introduce my proposal. In Section 2 I will argue that educated guesses can help us make sense of the phenomenon David Lewis calls “immodesty”: the sense in which a rational agent’s own doxastic states should come out looking best, by the lights of her way of evaluating truth-conduciveness or accuracy. (I will say much more about this in Section 2.) As I’ll argue, vindicating Immodesty is a minimum requirement for an account of accuracy, so it is good news for the guessing framework that it can be put to work in that way. In Section 3, I’ll turn to the question of which formal requirements can be justified through this framework. I will argue that with some plausible constraints on rational guessing, we can use this framework to argue for probabilism; I will also briefly discuss some possible further applications, and alternative options for those who think that probabilism is too strong. In Section 4 I will (very) briefly survey two other accounts of accuracy, using them to bring out some of the strengths and weaknesses of the guessing framework.

1. Educated Guesses

In gathering evidence and forming opinions about the world, we aim to get things right. If we’re very lucky, the evidence is decisive, and we can be sure of what’s true and false. If we’re not so lucky—which is most of the time—the evidence is limited, and things are not so clear. In these cases, it’s rational to adopt intermediate degrees of confidence, or credences. If we must act, we should do the best we can.

I want to look at a special kind of action: educated guessing. This is a type of action with the same correctness conditions as all-out belief. A guess is correct if it’s true, and incorrect if it’s false. Guessing is something we are often called upon to do even when we’re quite unsure what is the right answer to a question. As with any other action, if we must guess, it’s rational to give it our best shot. The way to do that is to guess on the basis of our credences. In short, guessing is a way that we can get things right or wrong, and rational guessing is done on the basis of our credences. In a relatively straightforward way, then, your credences can get things right or wrong by licensing true or false guesses. I’d like to propose that we make use of this connection to build an account of accuracy. Specifically: Your credences are more accurate insofar as they license true educated guesses. They are less accurate insofar as they license false educated guesses.

What are educated guesses? My characterization will be partially stipulative, but we won’t end up too far from the everyday notion of guessing that we are all familiar with. To get an idea of the type of guesses I’m interested in, think of multiple choice tests, assertion under time or space constraints (such as telegrams), or statements like “if I had to guess, [P] . . . but I’m not sure . . . ”


More precisely, we can think of an educated guess as a potential forced choice between two (or more) propositions, made on the basis of your credences. If you are given some options—say, P and ~P—and asked to choose between them, your educated guess should correspond to the option you take to have the best shot at being true.

Two important notes. First, the type of guesses I’m interested in are those that are licensed by your credences, and governed by rational norms. (I call them “educated” guesses to emphasize this.) Second, as I said before, guessing is an action, not a doxastic state. It is possible to rationally guess that P if you know or believe that P, or if you don’t; in some cases, it may even be rational to guess that P if you rationally believe that ~P.5 (See Question 2, below, for a possible example like this.)

What are the norms that govern educated guesses? As a start, here are three norms, which seem plausible enough to me (and which I’ll assume for the rest of the chapter):6

Simple questions: When faced with a forced choice between two propositions, your educated guess should be the proposition in which your credence is highest.

Suppositional questions: When faced with a forced choice between two propositions given some supposition, your educated guess should be the proposition in which your conditional credence (conditional on the supposition being true) is the highest.

Equal credence: With both suppositional and non-suppositional questions, if you have equal credence in both options, you are licensed to guess in favor of either one.

I’ll be interested in the guesses that are licensed by a rational agent’s credences, according to the norms above. To get a handle on how these norms are meant to work, consider a couple of sample questions. Simple, non-suppositional questions are easy enough:

Q1: Is it raining?

5 One way in which my notion of guessing is somewhat stipulative is that, on my account, guessing that P is compatible with knowing that P. However, we would not normally describe acting on our knowledge as “guessing.” Thanks to Brendan de Kenessey for pointing this out.
6 At the moment I’ll keep things simple and just look at two-option cases, but there is no reason I can see why the framework couldn’t be extended to choices among three or more options. How the framework would develop, if expanded in this way, is an interesting question—it would likely turn out that, on any plausible expansion, licensed guessing would be partition-relative. Would that be a good thing, or a bad thing? Possibly, not so bad. See Lin and Kelly (2011) for an argument that partition-relativity is good—as applied to theory acceptance, rather than guessing. Similarly, Schaffer (2004) argues that knowledge is question-relative. Thanks to Hanti Lin for helpful discussion here. These points deserve further attention, but I will set them aside for present purposes.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

88 | Sophie Horowitz In this case, if you are more confident of Rain than of ~Rain, you’re licensed to guess Rain. If you are more confident of ~Rain, you’re licensed to guess ~Rain. If you are equally confident in both options, you may guess either way. Suppositional questions are just slightly more complicated: Q2: Supposing that it’s not sunny, which is it: rain or snow? Suppose your credences in these three (disjoint7) possibilities are as follows, where Cr is your credence function: Cr(Sun) = 0.75 Cr(Rain) = 0.2 Cr(Snow) = 0.05 By your lights, then, it’s most likely sunny. But Q2 asks you to suppose that it’s not sunny. In response to this question, your credences license guessing Rain: given that it’s not sunny, you regard it as more likely to be raining than snowing. Your guesses can then be assessed straightforwardly for truth and falsity: either it’s raining, or it’s not. Suppositional guesses won’t be assessed at all in cases where the supposition is false. That’s all I’ll say for now about what guessing is, and when it’s licensed. Does the guess framework give us a plausible account of accuracy? One way to test it is to see how well it fits together with the rest of our epistemological picture. I’ll begin to explore this question in the next two sections.

2. Immodesty In this section I’ll argue that educated guesses can be used to vindicate “immodesty”: roughly, the thesis that an epistemically rational agent should regard her own credences as giving her the best shot at the truth, compared to any other (particular) credences. The argument here will rely on the three norms for licensed guesses introduced in the last section. For this section, I will also assume probabilism: the thesis that rational credences are probabilistically coherent. (I will come back to probabilism in Section 3.) What is immodesty, and why should we accept it? The term comes from David Lewis, who introduces it with the following example. Think about Consumer Reports, a magazine that ranks consumer products. Suppose that this month, Consumer Reports is ranking consumer magazines. What should it say? If Consumer Reports is to be trusted, Lewis argues, it must at least recommend itself over other magazines with different product-ranking methods. Suppose Consumer Reports was “modest,” and recommended Consumer Bulletin instead of recommending itself. Then its recommendations would be

7

To keep things simple, pretend they are disjoint. As I’m writing this, it’s sunny and raining at the same time.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 89 self-undermining, or inconsistent, in a problematic way. On p. 3, say, Consumer Reports recommends the Toasty Plus as the best toaster. On p. 7 it recommends Consumer Bulletin. Then, when you open up Consumer Bulletin to the toaster reviews, you find out that it recommends the Crispy Supreme. Which toaster should you buy? Consumer Reports is giving you incoherent advice. It can’t be trusted.8 Lewis’s example needs a few qualifications. Without saying more about the situation, it’s not clear that Consumer Reports really should rank itself best. For instance, if Consumer Bulletin reviews a wider variety of products or has a bigger budget for product testing, it might be reasonable for Consumer Reports to recommend Consumer Bulletin as the best consumer magazine. It would also surely be reasonable for Consumer Reports to admit that some possible magazine could be better—say, God’s Omniscient Product Review Monthly—especially if it does not have access to GOPRM’s testing methods or recommendations. What Consumer Reports can’t do, on pain of incoherence, is recommend a magazine that (a) ranks the same products, (b) on the basis of the same information, but (c) comes out with different results. Carried over to epistemology, the idea is that a rational agent should regard her own credences as optimal in the same sense as Consumer Reports should regard its own recommendations as optimal. Compared to other credences she might adopt—ranging over the same propositions, and on the basis of the same evidence—a rational agent should regard her own credences as giving her the best shot at the truth. To see why immodesty should be true for doxastic states, just imagine an agent who believes that it’s raining, but also believes that the belief that it’s not raining would be more accurate. This would be inconsistent and self-undermining—it would indicate that something has gone wrong, either with the agent’s beliefs or with her way of assessing accuracy. The same should be true of credences: if credences are genuine doxastic states, aiming to represent the world as it is, they must aim at accuracy in the way that belief aims at truth. So if an agent has both rational credences and an acceptable way of assessing accuracy, she will be immodest. I understand immodesty as a kind of coherence between rational credences and the right account of accuracy. Given the right account of accuracy, credences that aren’t immodest aren’t rational; given rational credences, an account of accuracy that makes those credences modest isn’t a good account.9 What I’ll be doing here is arguing that, given the assumption that rational credences are probabilistically coherent, the guessing framework delivers

8 See Lewis (1971). Lewis defines “immodesty” slightly differently—in his terms an “inductive method,” rather than the person who follows it, is immodest. (An “inductive method” can be understood as a function from evidence to doxastic states.) I’ll follow Gibbard (2008) here in calling credences, or an agent who has those credences, immodest. 9 Joyce (2009) makes a similar claim about his principle, Admissibility, which claims that rational credences will never be weakly accuracy-dominated. (p. 267)

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

90 | Sophie Horowitz immodesty. Since probabilism is a plausible and popular constraint on rational credence, I think this is a significant step in favor of the guessing framework. However, to show that guessing can do everything we want from an account of accuracy, we might also want to use it to argue for probabilism. I’ll set this possibility aside until the next section.10 We are now ready to show how the guessing framework delivers Immodesty. This involves introducing a cleaned-up principle that expresses Immodesty in terms of educated guesses, and then showing why this principle is true. First, here is the principle: Immodesty: A rational agent should take her own credences to be best, by her own current lights, for the purposes of making true educated guesses. The guessing defense of Immodesty asks us to see epistemically rational agents as analogous to students preparing to take a multiple-choice test. Even if you aren’t sure of the right answers—after all, you don’t know everything—you should take your best shot. Of course, we aren’t actually preparing for a test like this, just as we aren’t (usually) preparing to meet Dutch bookies or other potential money-pumpers. But imagining this scenario will help us show why Immodesty is true; it will help us show that insofar as you’re rational, you take your credences to license the best guesses.11 To see how Immodesty follows from the guessing picture, consider the following hypothetical scenario. You will take an exam. The exam will consist of just one question regarding a proposition (you don’t know which one, beforehand) in which you have some degree of credence. You will have to give a categorical answer—for example, “It’s raining”—as opposed to expressing some intermediate degree of confidence. You will not have the

10 A final clarification about immodesty, before proceeding: immodesty is not a requirement that rational agents hold some particular attitude—for instance, that they know or believe that their credences are the most accurate. (Given some extra assumptions, we might argue that immodest agents have propositional justification for these things—but we don’t need to get into that at the moment.) Agents can be immodest even if they have never considered questions about their own credences’ accuracy; their credences must simply fit together with their notion of accuracy. 11 My strategy here is directly based on the one employed by Gibbard (2008), discussed further in Section 4. Gibbard argues that we should assess our credences for their “guidance value,” or their ability to get us what we want, practically speaking. His argument, based on a proof by Schervish, involves imagining a hypothetical series of bets. It might be helpful to think of my general line of argument as a “depragmatized” version of Gibbard’s. Gibbard points out that of course we aren’t really preparing for any such bets, nor are we choosing our credences for that purpose—but it is “as if” we are. I want to take this stance towards my hypothetical quiz, as well. The test scenario, as the bet scenario, shouldn’t be taken literally—it is still a useful illustration even if we know we won’t encounter the relevant bets. And we needn’t require agents to have beliefs or credences about which questions they’ll encounter, or to even consider potential guessing scenarios at all. (In fact, there are reasons to refrain from doing so, both for my strategy and for Gibbard’s: if there is an infinite number of potential questions, it’s impossible for agents to have positive credence, of each question, that that’s the question they’ll encounter. Thanks to someone at FEW, who was probably Kenny Easwaran, for pointing this out.)

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 91 option of refusing to answer.12 For the purposes of this exam, you only care about answering truly. Now suppose that you are choosing a credence function to take with you into the exam. You will use this credence function, together with the norms for guessing, to give answers on the exam. Which credence function should you choose? What we are interested in is which credence function does well by your current lights. So we will be considering various different candidate credence functions and evaluating their prospective success according to your current credence function. My claim is that if you are rational, then the prospectively best credence function, by your current lights, is your own. For concreteness, let’s call your current credence function “Cr,” and the credence function you should pick for the purposes of guessing “Pr.” So more precisely, my claim is that Pr = Cr. You should pick your own credences as the best credences to use for guessing.13 To see how the argument works, we can start off by looking back at Q1 and Q2. (These will just be warmup questions; the real argument for Immodesty will come with Q3.) Suppose the exam question is Q1: Q1:

Is it raining?

Whatever credence function you choose for Pr will license guessing “yes” if Pr(Rain)  0.5, and “no” if Pr(Rain)  0.5. Suppose your credence in Rain is 0.8. Then, by your current lights, a “yes” answer has the (uniquely) best shot at being right. So you should pick a Pr such that Pr(Rain) > 0.5. Simple questions like Q1 impose some constraints on Pr. In particular, Pr needs to have the same “valences” as Cr. That is, Pr needs to assign values that, for every proposition it ranges over, are on the same side of 0.5 as the values that Cr assigns. But questions like Q1 are not enough to fully prove Immodesty: to do well on Q1 and questions like it, you don’t need to pick Pr such that Pr = Cr. In this example, Pr could assign 0.8 to Rain, like Cr does, or it could assign 0.7 or 0.9. In fact, to do well on questions like Q1, you might as well

12 It is an interesting question how the framework might be extended if we did give agents such an option. We might be able use it, for example, to express agents’ attitudes towards risk as well as truth and falsity. I will leave this discussion for future work. Thanks to Dennis Whitcomb for suggesting this extension. 13 Some might object to the thought that there is just one credence function that you should pick, given your evidence. After all, if permissivism is true, many different credence functions are rational given your evidence. However, I don’t think that the current line of argument assumes that permissivism is false, at least if permissivism is understood interpersonally. Interpersonal permissivists should still accept immodesty—and indeed, may want to appeal to it as an explanation for why agents should not switch from one rational credence function to another without new evidence. See Schoenfield (2014) for an endorsement of immodesty in this context: Schoenfield argues that a rational agent should stick to her “epistemic standards” rather than switching because she should regard her own standards as the most truthconducive.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

92 | Sophie Horowitz round all of your credences to 0 or 1, and guess based on this maximally opinionated counterpart of Cr. More complicated questions impose stricter constraints on Pr. For example: Q2: Supposing that it’s not sunny, which is it: rain or snow? Suppose again that your credences in Sun, Rain, and Snow are as follows: Cr(Sun) = 0.75 Cr(Rain) = 0.2 Cr(Snow) = 0.05 For this question, you need to be more picky about which credence function you choose for Pr. You will not do well, by your current lights, if you guess based on the maximally opinionated counterpart of Cr. That credence function assigns 1 to Sun, and 0 to both Rain and Snow. So that credence function will recommend answering Q2 by flipping a coin or guessing arbitrarily. But, by your current lights, guessing arbitrarily on Q2 does not give you the best shot at guessing truly; it’s better to guess Rain. So you need to pick Pr such that it licenses guessing Rain, and does not license guessing anything else, on Q2. To answer questions like Q2, then, you need to choose not only credences with the same valences as yours, but also credences that differentiate among unlikely possibilities in the same way that Cr does. But this still does not show that Pr = Cr. You could do well on Q2, for example, by choosing a credence function that is uniformly just a bit more or less opinionated than Cr. This credence function is not Cr, but it will do just as well as Cr on questions like Q2. Now consider another, more complicated question. For this example, suppose Cr(Rain) = 0.8. Q3: A weighted coin has “Rain” written on one side, and “~Rain” on the other. It is weighted 0.7:0.3 in favor of whichever of Rain or ~Rain is true. Now suppose: (a) the coin is flipped, out of sight; (b) you answer whether Rain; and (c) you and the coin disagree about Rain. Who is right? In this case, the best answer by the lights of Cr is that you are right. So you should choose a Pr that will also answer that you are right. I’ll first go through the example to show why this is, and then argue that questions like Q3 show that Immodesty is true. We can work out why you should guess that you are right, in Q3, as follows. Since your credence in Rain is 0.8, you can work out that you will answer

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 93 “Rain.” The only situation in which you will disagree with the coin, then, is one in which the coin lands “~Rain.” So we are comparing these two conditional credences: Cr(The coin is right | The coin says “~Rain”) and Cr(The coin is wrong | The coin says “~Rain”). First, your credence that the coin will say “~Rain” is given by the following sum: Cr(The coin says ~Rain and it’s right) + Cr(The coin says ~Rain and it’s wrong). Plugging in the numbers, using the weighting of the coin and the values that Cr assigns to Rain and ~Rain, we get: (0.7 * 0.2) + (.3 * 0.8) = 0.38. Your conditional credence that the coin is right, given that it says ~Rain, is (0.7 * 0.2)/0.38 = 0.37. Your conditional credence that the coin is wrong, given that it says ~Rain, is (0.3 * 0.8)/0.38 = 0.63. Since the second value is higher, the best answer by the lights of Cr is that, given that you disagree, you are right and the coin is wrong. Questions like Q3 could be constructed with any proposition, and any weighting of the coin. To do well on the exam, when you don’t know what question you will encounter, you need to be prepared for any question of this form. So you need to pick Pr such that it will give the best answers (by the lights of Cr) given any question like Q3—involving any proposition and any possible coin. The guesses that any credence function licenses on questions like Q3 depend on the relationship between the value that credence function assigns to the proposition (in this case, Rain) and the bias of the coin. If the credence function is more opinionated than the coin (in this case, if Pr(Rain) > 0.7), it will license guessing in favor of yourself. If the credence function is less opinionated than the coin (in this case, if Pr(Rain) < 0.7) it will license guessing in favor of the coin. This is what we need to show that Immodesty is true. Suppose you choose a Pr that is different from Cr, so it assigns a different value to at least one proposition. Then, there would be at least one question for which Pr will license the “wrong” answer, by the lights of Cr. For example, suppose that Cr(Rain) = 0.8, but Pr(Rain) = 0.6. Then Pr will license the wrong answer in Q3: it will license guessing that the coin is right and you are wrong. This is because while Cr’s value for Rain is more opinionated than the weighting of the coin, Pr’s value for Rain is less opinionated. And it’s easy to see how the point generalizes. To create an example like this for any proposition, P, to which Pr and Cr assign different values, just find a coin whose weighting falls between Cr(P) and Pr(P). Then, in a setup like Q3, Cr and Pr will recommend different answers. And by the lights of Cr, Pr’s answer will look bad; it won’t give you the best shot at getting the truth. To guarantee that Pr will license good guesses in every situation, Pr must not differ from Cr. So Immodesty is true: you should choose your

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

94 | Sophie Horowitz own credence function, Cr, for the purpose of making educated guesses. Pr = Cr.14, 15

3. Probabilism We have now seen how educated guessing works, and how it delivers Immodesty. A rational agent should take her own credences to be the best guessers. This is a necessary condition on the right account of accuracy. But we might want more from accuracy: we might want to give accuracy-based defenses of certain rational coherence requirements. Since my defense of Immodesty

14 Here is the more general form of Q3, and a more general explanation for why it delivers Immodesty:

Q3*: A weighted coin has P written on one side, and ~P on the other. It is weighted x:1-x in favor of whichever of P or ~P is true, where 0 < x < 1. Now suppose: (a) the coin is flipped, out of sight; (b) you answer whether Rain; and (c) you and the coin disagree about Rain. Who is right? Suppose Cr(P) > Cr(~P); turn the example around if the opposite is true for you. You should guess in favor of yourself if Cr(P) > x, and in favor of the coin if Cr(P) < x. The probability that the coin says ~P will be the sum CrðThe coin is right & The coin says PÞ þ CrðThe coin is wrong & The coin says PÞ If Cr(P) = y, this is equivalent to: ð1  yÞðxÞ þ ðyÞð1  xÞ The following therefore gives you your conditional credences: CrðCoin is rightjCoin says PÞ

¼ ð1  yÞðxÞÞ = ðð1  yÞðxÞ þ ðyÞð1  xÞÞ ¼ ðx  xyÞ = ðð1  yÞðxÞ þ ðyÞð1  xÞÞ

CrðCoin is wrongjCoin says PÞ ¼ ðyÞð1  xÞ = ðð1  yÞðxÞ þ ðyÞð1  xÞÞ ¼ ðy  xyÞ = ðð1  yÞðxÞ þ ðyÞð1  xÞÞ To see which of the conditional credences will be higher, just look at the numerators (the denominators are the same). It’s easy to see that if x > y, the first conditional credence will be higher than the second; if y > x, the second will be higher than the first. So you should guess that the coin is right, conditional on disagreeing, if your credence in P is greater than the weighting of the coin. You should guess that you are right, conditional on disagreeing, if your credence in P is less than the weighting of the coin. 15 An important, and perhaps controversial feature of this argument for immodesty is that it makes certain richness assumptions about the credences of the agent in question. The argument won’t work if there are some possible coin weightings about which the agent simply has no opinion. (It is also worth noting that Gibbard’s practical argument for immodesty seems to require similar assumptions, insofar as it presupposes that the agent will have certain dispositions to accept or reject bets at any possible odds.) This topic deserves further consideration, but I will not attempt to fully sort it out here. Thanks especially to Chris Meacham and Simon Goldstein for helpful discussion here; and apologies to them for leaving this issue for future work.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 95 assumed probabilism, we might hope that the guessing framework could be used to defend probabilism as well. The task is particularly pressing if we take educated guessing to be a rival of epistemic utility theory, which (usually) aims to deliver both probabilism and immodesty. I’ll argue in this section that we can use educated guesses to argue for probabilism. However, if readers find this argument contentious (as is inevitable: every existing argument for probabilism has its detractors) I hope they will still be interested in seeing what the guessing framework can do: either as a supplement to an independent argument for probabilism, or as a way to justify weaker coherence requirements such as Dempster-Schafer. Section 3.4 offers some options along these lines. Probabilism is traditionally expressed in three axioms. I’ll use the formulations listed below. Assuming that Pr is any rational credence function, T is a tautology, and Q and R are disjoint propositions, the axioms are: Non-Triviality: PrðTÞ Cr(T). This immediately leads to problems: if you were asked to guess whether T or ~T, you would be licensed to guess ~T. But T is a tautology, and therefore guaranteed to be true. So your guess is guaranteed to be false. And it is unnecessarily guaranteed to be false: if your credence in T were greater than your credence in ~T, your guess would not be guaranteed to be false. Even stronger, in fact: it would be guaranteed to be true! Therefore if Cr(~T) > Cr(T), you violate both No Self-Sabotage and No Missing Out. Second, suppose that Cr(T) = Cr(~T). If you were asked to guess whether T or ~T, you would be licensed to answer either way. This means that you would be licensed to guess ~T, which is guaranteed to be false. This guess is also unnecessarily guaranteed false: if your credence in T were greater than your credence in ~T, you would not be licensed to guess ~T in this situation, so you would not be licensed to make a guaranteed-false guess. If Cr(T) = Cr(~T), you violate No Self-Sabotage. (You do not violate No Missing Out, however, since you are licensed to make a guaranteed-true guess that T.) In both cases, violating Non-Triviality entails violating our new norms on rational guessing. The way to avoid violating these norms is to obey Non-Triviality. So given our two norms, Non-Triviality is a requirement on rational credence. 3.2. Boundedness Boundedness: PrðTÞ  PrðQÞ  PrðTÞ Boundedness says that it is irrational for you to be more confident of any proposition than you are of a necessary truth, and it is irrational for you to be less confident of any proposition than you are of the negation of a necessary falsehood. One way to read this axiom is as saying that, of all of the possible credences you could have, your credence in necessary truths must

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

98 | Sophie Horowitz be highest—nothing can be higher! And your credence in necessary falsehoods must be lowest—nothing can be lower! If we add in a plausible assumption about what this means, we can prove Boundedness within the educated-guess framework. The assumption is this: there is a maximal (highest possible) degree of credence, and a minimal (lowest possible) degree of credence. I’ll also assume a plausible consequence of this assumption in the guessing framework. First: if you have the maximal degree of credence in some proposition, A, you are always licensed to guess that A when A is one of your choices. That is, if you are asked to guess between A and A*, your credences always license guessing A. (If Cr(A) = Cr(A*), of course, you are licensed to guess either way by Equal Credence.) Second: if you have the minimal degree of credence in some proposition, B, you are never uniquely licensed to guess B. That is, if you are asked to guess between B and B*, you are only licensed to guess B if Cr(B) = Cr(B*). For simplicity, let’s assume that your credences satisfy Non-Triviality, which we have already argued for. So, Cr(~T) < Cr(T). Assuming that there is a maximal credence and a minimal credence, we can normalize any agent’s credences, assigning the value 1 to the maximal credence and the value 0 to the minimal credence. So, if Cr(T) is maximal, Cr(T) = 1. If Cr(~T) is minimal, Cr(~T) = 0. First, let’s prove that your credence in T should be maximal; that is, Pr(T) = 1. Suppose that Cr(T) < 1. Then, I will argue, you violate both No Self-Sabotage and No Missing Out. To show this, we can return to a question like Q3 from the last section. Suppose that you’re “competing” against a weighted coin, biased in favor of the truth about T. The weighting of the coin, x, is such that Cr(T) < x < 1. (That is: the coin is weighted x:1x, in favor of the truth about T, and it is more opinionated about T than you are.) Suppose that you and this coin disagree about whether T. Given that supposition, you will guess that the coin is right and you are wrong. This violates No Self-Sabotage. In guessing that the coin is right, you are making a guaranteed-false guess. (“The coin is right,” in this case, is equivalent to “~T.”) It also violates No Missing Out. You are missing out on a guaranteedtrue guess in favor of T. So you violate both additional norms. It is irrational for Cr(T) to be non-maximal. For the second part of Boundedness, we must prove that your credence in ~T should be minimal. So, Pr(~T) = 0. Again, we can use a question like Q3. Suppose that your credence in ~T is 0.2. Consider the following question: Q4: A weighted coin has some contingent proposition—you don’t know which one, but call it “R”—on one side, and ~R on the other. It is weighted 0.9:0.1 against whichever of R or ~R is true. Now suppose that the coin is flipped out of sight. Which is right? The coin (however it landed), or ~T? Here we want to show that you will guess ~T, which is guaranteed false.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 99 In Q4, the coin is weighted heavily against the truth about R. You aren’t told what R is; without any more information, your credence that the coin will be right should be 0.1. Your credence in ~T is 0.2. Although your credences in both propositions are quite low, your credence in ~T is still higher—so, you are licensed to guess ~T. But ~T is guaranteed to be false. Your non-minimal credence in ~T is causing the problem here: if your credence in ~T was minimal, you would have been licensed to guess in favor of the coin, which is not guaranteed to come up false. So you should have minimal credence in ~T.17 Violating Boundedness also entails violating our two norms, No SelfSabotage and No Missing Out. You could avoid these problems by adhering to Boundedness. So your credence in T should be maximal, and your credence in ~T should be minimal.18 3.3. Finite Additivity While Non-Triviality and Boundedness provide constraints on our credences in necessary truths and falsehoods, Additivity says that our credences in contingent propositions should fit together with one another as follows: Finite Additivity :

PðQ v RÞ ¼ PðQÞ þ PðRÞ

Contingent propositions are not themselves guaranteed to be true or false. So violating Additivity—while it may lead to some irrational guesses—will not 17 Again, here is the general recipe for creating examples like this. Suppose your credence in ~T is z, where 0 < z < 1, so z is not the minimal credence. Consider the following question:

Q4*: A weighted coin has some contingent proposition R on one side, and ~R on the other. It is weighted 1-x:x against whichever of R or ~R is true, where 0 < x < z. Now suppose that the coin is flipped out of sight. Your question is: which is right? The coin (however it landed), or ~T? If you have minimal credence in ~T, you will be licensed to guess in favor of the coin, no matter how it is weighted. You will only be licensed to guess ~T if the coin is weighted 1:0 against the truth about R—which is a necessary guaranteed-false guess, so not a mark of irrationality. 18 Note that the Boundedness principle I defend is weaker than the more general Boundedness principle that some other approaches aim to justify. The more general principle says that there should be an upper bound to your credences, rather than assuming from the outset that there is one. For instance, we can use Dutch Book Arguments to show that you should never have credence greater than 1: if you did, you would be licensed to make bets that guarantee you a loss. This stronger Boundedness principle can’t be defended on the guessing picture. However, I am not convinced that this should worry us. When we associate credences with dispositions to bet, we can make sense of what it means to have credence greater than 1; so, we need an argument showing that this is irrational. But if we associate credences with dispositions to guess, it’s not clear what it is to have credence greater than 1. You can be licensed to always guess that A, but you can’t be licensed to “more-than-always” guess that A. The guessing picture therefore leaves us free to argue that credence greater than 1 is impossible—so no further argument for its irrationality is needed. Insofar as it is irrational to bet at odds that would seem to be sanctioned by more-than-maximal credence, this is a form of practical, not epistemic, irrationality.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

100 | Sophie Horowitz necessarily lead to Self-Sabotage or Missing Out. That means that our two norms will not be enough to establish Additivity as a rational constraint. I will provide a different kind of argument for Additivity, and then address a potential objection. Suppose you have the following credences in two disjoint propositions, Q and R: Cr(Q) = 0.3 Cr(R) = 0.4 Additivity says that, if you are rational, Cr(Q v R) = 0.7. My argument will bring out the fact that, if you violate Additivity, the way you guess regarding Q and R will differ depending on how the options are presented to you. (This is in line with the interpretation of the Dutch Book argument adopted by Skyrms, who draws on Ramsey: “If anyone’s mental condition violated [the probability axioms], his choice would depend on the precise form in which the options were offered him, which would be absurd.”19) The intuitive strategy will be to create two guessing scenarios regarding Q and R, and show that you will guess one way if you consider the disjunction, and another way if you consider whether one of Q and R is true, but they are presented separately. I’ll discuss the significance of this after going through the example. As before, the argument for Additivity is broken into two cases. First, suppose that Cr(Q v R) = 0.9 (higher than the credence recommended by Additivity). Now consider the following question: Q5a: Coin A has “yes” on one side, and “no” on the other. It is weighted 0.8:0.2, in favor of “yes” if (Q v R) is true and in favor of “no” if (Q v R) is false. Now suppose: (a) the coin is flipped out of sight, and (b) you guess whether (Q v R). Say “yes” if you guess (Q v R), and “no” if you guess ~(Q v R). Interpret the coin’s “yes” or “no” as answering whether (Q v R). If you and Coin A disagree, who is right? This question is again very similar to Q3. You and the coin are both answering whether the disjunction (Q v R) is true, and your credence in (Q v R) is more opinionated than the coin’s weighting. (Intuitively: from your perspective, the probability that you’re right about (Q v R) is 0.9, but the probability that the coin is right is only 0.8. So your conditional credence that you are right, given that you disagree, should be higher than your conditional credence that the coin is right, given that you disagree.) You should guess that, if you and Coin A disagree, you are right and the coin is wrong.20 19

Skyrms (1987); citation from Ramsey (1926), p. 41. Plugging in the numbers: since your credence in (Q v R) is 0.9, you will guess “yes.” So if you disagree, that means the coin must have landed “no.” We are therefore comparing the 20

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 101 Compare Q5a to the following question, again supposing that Cr(Q) = 0.3, Cr(R) = 0.4, and Cr(Q v R) = 0.9: Q5b: Coin A has “yes” on one side, and “no” on the other. It is weighted 0.8:0.2 in favor of “yes” (Q v R) is true and in favor of “no” (Q v R) is false. Coin B has “Q” on both sides. Coin C has “R” on both sides. Now suppose: (a) all three coins are flipped out of sight, (b) you guess “yes” or “no” in response to this question: Did at least one of Coin B and Coin C land true-side-up?and (c) You and Coin A disagree: either you said “yes” and the coin said “no,” or you said “no” and the coin said “yes.” Interpret the coin’s “yes” or “no” as answering whether at least one of Coin B and Coin C landed true-side-up. Between you and Coin A, who is right? Your credence that at least one of Coin B and Coin C landed true-side-up should be 0.7: after all, your credence that Coin B landed true-side-up is 0.3, your credence that Coin C landed true-side-up is 0.4, and Q and R are disjoint. So from your perspective, the probability that you will be right is 0.7. The probability that the coin is right, however, is 0.8. So your conditional probability that you will be right, given that you disagree, is less than your conditional probability that the coin will be right, given that you disagree. You should guess that if you disagree, Coin A will be right.21 following two conditional probabilities: Cr(Coin A is right | Coin A says “no”) and Cr(Coin A is wrong | Coin 1 says “no”). Your credence that Coin A says “no” is given by this sum: CrðCoin A says “no” and it’s rightÞ þ CrðCoin A says “no” and it’s wrongÞ Plugging in the numbers, we get (0.8 * 0.1) + (0.2 * 0.9) = 0.26. Your credence that Coin A says “no” and it’s right is (0.8 * 0.1). So your conditional credence that Coin A is right, given that it says “no,” is 0.31. Your credence that Coin A says “no” and it’s wrong is (0.2 * 0.9). So your conditional credence that Coin A is wrong, given that it says “no,” is 0.69. So you should guess that, if you disagree, you are right and Coin A is wrong. 21 Plugging in the numbers again: Your credence in Q is 0.3, and your credence in R is 0.4. You know that Coin B will say “Q” and Coin C will say “R.” So your credence that at least one of Coin B and Coin C will land true-side-up should be 0.7. You should guess “yes.” If you disagree with Coin A, then, that means that Coin A must have said “no.” Your credence that Coin A says “no” is given by this sum:

CrðCoin A says “no” and it’s rightÞ þ CrðCoin A says “no” and it’s wrongÞ: Plugging in the numbers, we get ((0.8 * 0.1) + (0.2 * 0.9)) = 0.26. In this question, when you disagree with Coin A, you are each answering the question of whether at least one of Coin B and Coin C landed true-side-up. Your credence that Coin A says “no” and is right about that question is (0.8 * 0.3). So your conditional credence that Coin A is right, given that it says “no,” is 0.92. Your credence that Coin A says “no” and it’s wrong about that question (0.2 * 0.7). So your conditional credence that Coin A is wrong, given that it says “no,” is 0.53. So you should guess that, if you disagree, the coin is right and you are wrong.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

102 | Sophie Horowitz This combination of guesses illustrates the inconsistency in your credences. In Q5a, you are licensed to guess that if you disagree with Coin A, you will be right. In Q5b, you are licensed to guess that if you disagree with Coin A, the coin will be right. But the only difference between Q5a and Q5b was in how your guess about Q and R was presented: as a disjunction in Q5a, and as separate guesses on Q and R in Q5b. So if you are rational, you should not answer differently in Q5a and Q5b.22 We can create a parallel setup for the case where your credence in (Q v R) is lower than the credence recommended by Additivity. All we need is a Coin A’, whose weight is between your credence in (Q v R) and the sum of your credence in Q and your credence in R. (For example, if your credence in (Q v R) is 0.51, we could weight the coin 0.6:0.2 in favor of “yes” if (Q v R) is true, and in favor of “no” if (Q v R) is false.) Again, you will guess inconsistently: you will guess in favor of the coin when you consider (Q v R) presented as a disjunction, and you will guess in favor of yourself when you consider Q and R separately. This is irrational. You have no basis for treating Q5a and Q5b (or their counterparts, with coin A’) differently from one another. But if you violate Additivity, your credences require you to treat the two cases differently. Here is another way we could put the point. Your guesses in questions Q5a and Q5b reflect how you regard the strength of your evidence about Q and R. In Q5a, guessing in favor of yourself, over the coin, makes sense because you consider your evidence to be a stronger indicator of whether Q or R is true than the coin is. From the perspective of your evidence, trusting the coin over your own guess is a positively bad idea; it gives you a worse shot at being right. Compare this to your guess in Q6b. From the perspective of your evidence, as

22 Here is the general recipe for examples of this form. Suppose that Cr(Q) = x, Cr(R) = y, and Cr(Q v R) = z. Now, suppose z > x + y. Compare the following two questions:

Q5a*: Coin A has “yes” on one side, and “no” on the other. It is weighted v:1v, where x + y < v < z, in favor of “yes” if (Q v R) is true and in favor of “no” if (Q v R) is false. Now suppose: (a) the coin is flipped out of sight, and (b) you guess whether (Q v R). Say “yes” if you guess (Q v R), and “no” if you guess ~(Q v R). Interpret the coin’s “yes” or “no” as answering whether (Q v R). If you and the coin disagree, who is right?

Q5b*: Coin A has “yes” on one side, and “no” on the other. It is weighted v:1v, where x + y < v < z, in favor of “yes” if (Q v R) is true and in favor of “no” if (Q v R) is false. Coin B has Q on both sides. Coin C has R on both sides. Now suppose: (a) all three coins are flipped out of sight, (b) you guess “yes” or “no” in response to this question: Did at least one of Coin B and Coin C land true-side-up? and (c) You and Coin A disagree. Between you and Coin A, who is right? You will guess in favor of yourself in Q5a*, and in favor of the coin in Q5b*.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 103 characterized in Q6b, trusting your own guess over the coin is a positively bad idea. But if the relevant evidence—the evidence bearing on Q, and the evidence bearing on R—is the same, and you are judging its strength in comparison to the very same coin, it doesn’t make sense to guess differently in the two cases. Your credences should not license both guesses simultaneously. The only rational option is to obey Additivity. I’d like to close by addressing two worries you might have about this argument. First: you might think that providing a different kind of argument for Additivity from the kind we had for Boundedness and Non-Triviality is a weakness of the guessing picture. After all, popular defenses of probabilism— Dutch Book arguments and epistemic utility theory—argue for all three axioms in a unified way. The Dutch Book argument says that agents with incoherent credences will be licensed to take bets that guarantee a net loss of money (or utility, or whatever you’re betting on). Epistemic utility theorists argue that incoherent credences are accuracy-dominated by coherent credences, or else that incoherent credences fail to maximize expected epistemic utility. On the guessing picture, however, the argument for Additivity is, in a way, weaker than the arguments for the other two: it gives us an illustration of tension in your credences, rather than pointing to something positively bad that will result from that tension. Is this a problem? I’d like to propose that we think of Additivity differently from the other axioms. The argument I gave was meant to show how, if your credences violate Additivity, you will fail to make sense by your own lights. How reliable you take yourself to be regarding Q and R depends on how you are asked about Q and R—how the very same guessing situation is presented to you. This is the same sort of argument we might make to show that it is irrational to believe that John is a bachelor, but also believe that he’s married. Neither of these particular beliefs is guaranteed to be false in virtue of your holding both of them. But you will have beliefs that don’t make sense by your own lights—at least if you understand what it is to be a bachelor, and what it is to be married. We could try to make the same kind of argument in favor of other informal coherence constraints: for example, to show that it is irrational to believe both P and my evidence supports ~P. There is a kind of incoherence involved in holding both beliefs, even if doing so does not lead to a straight-out contradiction. In both cases, we might not have the security of a decisive proof on our side. But that doesn’t show that the rational requirements in question don’t hold. Of course, my argument for Additivity relied on some controversial assumptions. Most obviously, I relied on the thought that if you have evidence bearing on (Q v R) as a disjunction, that very same evidence bears on both Q and R, separately. This leads us to a second worry: does my argument for Additivity assume what it’s trying to prove? After all, claiming that Q5a and Q5b are “the same question, presented differently” might seem to beg the question against an opponent of Additivity. An advocate of Dempster-Schafer theory, for instance, might argue that it’s possible to have evidence bearing on (Q v R) as a disjunction that has no bearing on either Q or R individually. My argument

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

104 | Sophie Horowitz would do little to persuade a fan of Dempster-Schafer to be a probabilist. So you might think this shows that the guessing account can’t really provide a strong justification of probabilism. I take this to count in favor of the guessing account. It can be used to make sense of, and argue for, the axioms of probability for those who are sympathetic to certain background assumptions. But it is also flexible enough that, were we to deny these assumptions, we would still be able to make use of the general framework. (See the next subsection for some suggestions to this effect.) The guessing picture can therefore serve as a backdrop for some of the substantive debates in formal epistemology. And the particular argument I proposed for Additivity makes clear where the substantive assumptions come in to those debates. 3.4. Other Applications We’ve seen how the guessing picture can help us argue for the probability axioms as constraints on rational credence. One might wonder whether there are other constraints that it can justify: can it do more? Or, for those skeptical of Additivity, can it do less? A full exploration of these questions is beyond the scope of this chapter (indeed, one of my hopes for this chapter is to point towards these questions, rather than answering all of them). But here is a brief survey of some further questions we might use the guessing framework to answer. (A) Setting aside formal requirements for the moment, the educated guessing framework is potentially useful for contexts in which we want to draw a connection between credences and various all-out epistemic notions. A salient example is reliability, which is typically understood as the propensity to get things right and wrong in all-out terms. One place this might come in handy is in thinking about “higher-order” evidence: evidence about your own rationality, what your evidence is, or what it supports. Many epistemologists find it plausible that this kind of evidence should influence what credences are rational for you to adopt. A natural explanation for this is that impairments in rationality often go along with impairments in reliability as well.23 I have also argued elsewhere for an explicit connection between credences and educated guesses, in the interest of spelling out how higher-order evidence works.24 Guessing could be used in contexts like this one to allow us to use degreed and all-out notions at the same time. 23 The temptation to speak in all-out terms is clear in much of the literature on higher-order evidence. For example, see White (2009)’s “Calibration Rule,” which states: “If I draw the conclusion that P on the basis of any evidence E, my credence in P should equal my prior expected reliability with respect to P.” See also Elga (2007)’s (similar) formulation of the Equal Weight View: “Upon finding out that an advisor disagrees, your probability that you are right should equal your prior conditional probability that you would be right . . . ” Though both White and Elga work in a degreed-belief framework, they often slip into all-or-nothing terms to describe how higher-order evidence should work. The guessing picture could help to make this connection more precise. 24 See Horowitz and Sliwa (2015).

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 105 (B) Back to formal constraints: we could use the guessing framework to argue for Regularity by endorsing stronger versions of No Self-Sabotage and No Missing Out. A stronger version of No Self-Sabotage might say that rational credences will never license guaranteed-false guesses, unless it is unavoidable (because all of one’s options are guaranteed to be false). That would mean that it’s irrational to have minimal credence in any contingent proposition: doing so would license you to make a guaranteed-false guess when your choice is between that guaranteed-false proposition and the contingent proposition. Similarly, a stronger version of No Missing Out might say that, whenever one could be licensed to make a guaranteed-true guess, one should only be licensed to make guaranteed-true guesses. That would mean that it’s irrational to have maximal credence in any contingent proposition. (The weaker versions of these norms—which don’t entail Regularity—were strong enough for Non-Triviality and Boundedness. For now, I am only endorsing the weaker norms.)25 (C) If we adopt a dominance-avoidance principle, we can use the guessing framework to argue for the following two principles: (i) For all P and Q: if P entails Q, then Pr(P)  Pr(Q). (ii) For all P, Q, and R: if P entails Q, and Q and R are mutually exclusive, then if Pr(Q) > Pr(P), then Pr(Q v R) > Pr(P v R). The dominance-avoidance principle is as follows: it’s irrational to hold some credences, if they license strictly more false guesses than other particular credences you could have had.26 To prove (i): Suppose that P entails Q, but Cr(P) > Cr(Q). Now suppose that you are asked to guess: P or Q? You will be licensed to guess that P. In the state of the world where P is false and Q is true, your guess will be false. You could have avoided guessing falsely in that state of the world if Cr(P) had not been greater than Cr(Q). Therefore, Cr is weakly dominated by another credence function—one which is just like it in every respect, except that it obeys (i). To prove (ii): suppose that P entails Q, Cr(Q) > Cr(P), but Cr(Q v R)  Cr(P v R). Then suppose you’re asked to guess: (Q v R), or (P v R)? You will be licensed to guess (P v R). There are four possible states of the world consistent with our supposition that P entails Q: w1: P; Q; R w2: P; Q; R

25

Thanks to Jennifer Carr and Simon Goldstein for (separately) suggesting this to me. The argument in this subsection draws directly on work by Branden Fitelson and David McCarthy. (See their (2014).) They defend Dempster-Schafer axioms using a similar dominanceavoidance norm in their formal framework, which is a decision-theoretic picture using comparative confidence and all-out belief. Fitelson and McCarthy also prove that probabilism can’t be defended using those tools alone. Thanks to Branden Fitelson for extensive discussion and help on this section. 26

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

106 | Sophie Horowitz w3: P; Q; R w4: P; Q; R Your guess, (P v R), will be true in w1 and w3, and false in w2 and w4. Compare this to another credence function, Cr’, which is just like Cr except that it obeys (ii). Cr’ will guess (Q v R), which is true in w1, w2, and w3, and false in w4. Therefore Cr’ licenses strictly more true guesses than Cr—it licenses a guess that’s true in w2, where Cr’s guess is false, and licenses the same guesses everywhere else. Cr is therefore weakly dominated by Cr’. These two constraints, (i) and (ii), are especially interesting. Together with Non-Triviality, they are sufficient for Dempster-Schafer. So by adding this dominance avoidance norm, we could prove Dempster-Schafer using the guessing framework. I want to stay neutral here on whether decision-theoretic reasoning is appropriate in this context, and hence whether using rules like dominance-avoidance is the right way to think about rational guessing. So I do not want to either endorse or rule out this particular application of the framework. However, it is an interesting possibility for further exploration, and one that is available to those who are sympathetic to thinking about epistemic rationality in these terms. There may also be other ways to argue for constraints like Dempster-Schafer without thinking in decision-theoretic terms. I will leave that question open for now. (D) Finally: it is interesting to note that, with the exception of Additivity and Immodesty, none of the norms I have argued for have relied on particular numerical values for our credences. (Notice that the original three norms about when guessing is licensed are put in terms of comparative confidence.) Therefore, much of what I have argued for could be adopted by someone who is skeptical of numerical-valued credences or subjective probabilities, and instead more interested in comparative confidence or plausibility.27 Such a person could adapt my argument for Immodesty to defend a similar principle: rather than arguing that a rational agent should regard her own (numerical-valued) credence function as the best guesser, she could argue that a rational agent should regard her own plausibility ordering as the best guesser. I won’t pursue any of these applications in depth here. I mention them only to highlight the flexibility of the guessing framework, and the number of purposes that it could be used for. I take it to be a virtue of the guessing framework that it does not force our hand in a number of debates, such as whether to adopt Probabilism or Dempster-Schafer. Such debates should take place in our theory of rationality, not our theory of accuracy. But the arguments for or against various positions should be articulable in terms of accuracy. What I’ve shown here is that the guessing picture has promise as a framework in which those debates can play out.

27

Thanks to Kenny Easwaran and Branden Fitelson for helpful discussion here.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Accuracy and Educated Guesses | 107

4. Alternative Approaches Let’s take stock. So far I have introduced my new framework and shown how it might be used to account for Immodesty and probabilism. In this section I will look very quickly at two alternatives to my proposal, each of which also offers a defense of these two requirements. Epistemic utility theory evaluates credences using special utility functions, or “scoring rules.” Another strategy, which I’ll call “the practical approach,” does away with truth and looks instead at which actions are rationalized by an agent’s credences. I will discuss these two approaches only briefly, to bring out some salient features of the educated guess picture in comparison to its competitors. 4.1. Epistemic Utility Theory Epistemic utility theory (EUT) starts off with what I referred to earlier as the common-sense notion of accuracy: the thought that credences are more accurate as they get closer to the truth.28 According to EUT, epistemically rational agents should adopt the credences that maximize expected “epistemic utility,” much as decision theory understands practically rational agents as taking actions that maximize expected utility. Epistemic utility function, or “scoring rules,” are functions of credences’ closeness to the truth. Many epistemic utility theorists aim to justify probabilism. Most canonically, Joyce (1998, 2009) argues that incoherent credences are “accuracy-dominated” by coherent credences. There are two important premises needed for this argument to go through. One is the assumption that dominance-style reasoning is appropriate in this context. The other is the acceptance of certain axioms that narrow down the range of permissible measures of accuracy. To deliver immodesty within the EUT framework, we must accept similar assumptions. First, EUT sees immodesty as a matter of maximizing expected epistemic utility from one’s own perspective (that is, as assessed by one’s own credences and scoring rule). This requires us to think about rationality in a decision-theoretic way, much as the earlier dominance assumption did. Second: immodesty enters the EUT framework as a constraint that narrows down the acceptable range of scoring rules. (The acceptable scoring rules are those that allow rational agents to regard their own credences as best—that is, as maximizing expected accuracy.) But in order to narrow down acceptable scoring rules in this way, we need to either accept certain strong axioms on acceptable accuracy measures, or else just build in immodesty— understood as expected-utility maximization—as its own very strong assumption from the start.

28

See, e.g., Joyce (1998) and (2009), Greaves and Wallace (2006), and Leitgeb and Pettigrew (2010).

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

108 | Sophie Horowitz EUT is an interesting and powerful framework, and I don’t hope to argue definitively that the guessing framework is better. But I do think that the guessing framework has some important advantages, at least regarding the two assumptions that I’ve highlighted. Both assumptions have been questioned. To take a few examples: Selim Berker has argued against “consequentialist” or “teleological” reasoning in epistemology, and Jennifer Carr and (separately) Jason Konek and Ben Levinstein have argued against simple decision-theoretic interpretations of the EUT machinery.29 (Some, like Konek and Levinstein, argue for a subtle alternative interpretation of the rules in question; Carr argues for an alternative interpretation of the notion of epistemic utility. Others, like Michael Caie and Hilary Greaves, bite the bullet and accept strange results of decision-theoretic reasoning.30) The guessing approach, however, allows us to avoid this challenge, because it does not build in or require any consequentialist assumptions. We are free to supplement the guessing picture with decision-theoretic or consequentialist reasoning, but we aren’t forced to do so; we are also free to be non-consequentialists. All we need to say is that for any proposition, a rational agent should have the credence that gives her the best shot, given her evidence, at guessing truly on questions regarding that proposition. We can, as Selim Berker puts it, “respect the separateness of propositions.”31 The second assumption—in particular, the specific axioms that EUT requires to deliver its strong results—has been questioned as well. For instance, Joyce’s “Normality” axiom says roughly that if two credences are equally far from the truth they must have equal accuracy. Allan Gibbard objects to Normality, arguing that someone should count as purely concerned with the truth, or purely concerned with accuracy, even if she values closeness to truth much more highly than distance from error. Patrick Maher gives a similar objection to Joyce’s “Symmetry” axiom. He also argues that the whole EUT account of accuracy is implausible because it rules out the “Absolute Distance” measure, according to which the accuracy of one’s credence is equal to the absolute value of its distance from the truth.32 According to Maher and Gibbard, the scoring rules that EUT works with are too far from our ordinary conception of accuracy; we should be suspicious, then, that EUT’s arguments for probabilism and immodesty are really “purely alethic.” The guessing framework largely avoids this problem as well. The account of accuracy it offers, including its basic rules for when guesses are licensed, is very simple and intuitive. As we saw, these intuitive pieces were all we needed to see that the guessing framework delivers immodesty. Probabilism, of course, required some stronger norms on rational guessing, which may be challenged.

29 See Berker (2013a) and (2013b); Carr (2017); Konek and Levinstein (2017).
30 See Caie (2013) and Greaves (2013).
31 Berker (2013b).
32 See Maher (2002) and Gibbard (2008).


4.2. The Practical Approach

An alternative family of arguments tries to justify rational requirements such as probabilism and immodesty by looking at practical value. These arguments don't appeal directly to any particular understanding of accuracy, or any other way of evaluating credences directly in relation to truth. Instead, the practical approach builds on the connection between credences and rational action, as understood by decision theory.

The practical argument for probabilism is the Dutch Book argument. If you have incoherent credences, the Dutch Book argument says, you will be licensed to accept a series of bets that, together, guarantee a sure loss (of money, utility, or whatever you're betting on).33 (A toy worked example appears at the end of this subsection.) The practical argument for immodesty, given by Gibbard (2008), involves imagining a continuum of bets at various odds. Your credences will license taking some bets, and rejecting others. Gibbard argues that you should take your own credences to be best for the purposes of betting, or acting more generally—other credences will recommend taking bets that, by your current lights, look bad, or rejecting bets that look good.

Practical arguments provide an economical way of accounting for requirements of epistemic rationality. They don't require us to posit a special kind of epistemic utility; instead, they piggyback on practical utility, which has independent uses in the theory of practical rationality. But there is reason to think that we should try to do better. Most obviously, the phenomena that these practical arguments attempt to explain are, at face value, purely epistemic. Why should epistemic rationality be held hostage to practical concerns, such as how much money you're likely to make? (We don't generally think that you should adopt one belief over another because of monetary gain—so how are these arguments different?) For those who want to maintain that the practical and the epistemic are distinct normative realms, practical arguments for epistemic requirements miss the mark.

There is much more to be said here. For instance, defenders of "depragmatized" Dutch Book arguments interpret them as manifestations of an epistemic phenomenon, rather than taking on the practical aspects at face value.34 I will come back to this in the last section. (It's worth noting, though, that not all defenders of the practical approach want to adopt this sort of understanding. Gibbard explicitly abandons hope for a purely epistemic argument for immodesty.) But once again, the guessing approach allows us to avoid this challenge.

33 The Dutch Book Argument originates in Ramsey (1926). See Vineberg (2011) for an overview.
34 See, for instance, Christensen (1996) and Howson and Urbach (1993).


The guessing arguments for immodesty and probabilism don't need to be "depragmatized"—they are already non-pragmatic. For those who find the practical strategy unsatisfying, the guessing framework offers an improvement. It allows us to say that a rational agent should be coherent and immodest, not because this will make her happy or rich, but because she takes her credences to give her the best shot at representing the world as it is.
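Here is the toy Dutch Book example promised above; it is my own illustration rather than an example from the chapter or from Ramsey. An agent whose credences in P and in ~P sum to more than 1 treats overpriced bets as fair, and a bookie can exploit this for a guaranteed profit.

```python
# Toy Dutch Book against incoherent credences c(P) = 0.6 and c(~P) = 0.6.
# Assumption: the agent treats $c as a fair price for a ticket paying $1
# if the relevant proposition is true and $0 otherwise.

credence_P, credence_notP = 0.6, 0.6   # sum to 1.2: probabilistically incoherent

cost = credence_P + credence_notP      # the agent pays $1.20 for both tickets

for P_is_true in (True, False):
    payout = 1.0                       # exactly one ticket pays off either way
    print(f"P is {P_is_true}: net outcome = ${payout - cost:+.2f}")

# Both branches print -$0.20: a sure loss, whatever the truth about P.
```

If the credences instead summed to less than 1, the bookie would simply buy the two tickets from the agent at her prices, reversing the arithmetic.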

5. Some Final Comments on Guessing and Accuracy

In this last section I'll return to our original question: what makes credences more or less accurate? Does the guessing framework give us the kind of answer we were looking for? We started with the intuitive thought that credences are more accurate as they get closer to the truth—for example, it's better to have 0.9 credence in a true proposition than 0.8. It may not be immediately obvious how the guessing picture does justice to this thought. After all, if you and I are both asked to guess whether P, your true guess is as good as mine—regardless of whether one of our credences is much closer to the truth. One might object that this is the wrong result. We should be able to explain why your 0.9 credence is more accurate than my 0.8.

The objector gets one thing right: the guessing picture doesn't allow us to differentiate credences of 0.9, 0.8, and 0.50001 if we only look at one guess, the one between P and ~P. But we can do justice to the original thought that "closer is better" if we look at all of the guesses that our credences license. Someone with 0.9 credence in a true proposition P will guess correctly not just about P, but about lots of other questions as well—questions which someone with 0.50001 credence will often get wrong. Think back to the continuum of weighted coins we imagined in the argument for Immodesty. You should expect to "beat" a given coin if you're more opinionated than the coin. So as your credence gets closer to a proposition's truth-value, the space of possible coins that are better guessers than you gets smaller and smaller, and the space of possible coins that are worse guessers gets bigger and bigger. Understanding the coins as representatives for all of the possible guesses that your credences license—better than coin A, worse than coin B, etc.—we can see that in general, the space of possible questions you can expect to answer correctly gets larger and larger as your credence gets closer to the truth. Greater accuracy, on the guessing account, corresponds to getting more and more true guesses. So credences do get more accurate as they get closer to the truth, and less accurate as they get farther away. (A small numerical sketch at the end of this section illustrates the point.)

Another important aspect of our everyday conception of accuracy is that it is an alethic notion. In this respect, the guessing framework captures our notion of accuracy much better than the practical picture. It is decidedly less pragmatic than Gibbard's (explicitly pragmatic) "guidance" account. Unlike Gibbard's story, the guessing account essentially involves the connection between credences and truth.


It is also less pragmatic than the simple, straightforward interpretation of the Dutch Book account: it appeals to the desire for truth, rather than utility or money.

But is the guessing picture completely free from pragmatic concerns? Guessing is, after all, an action. And in any real exam, whether it's rational to guess one way or another is going to be subject to all kinds of practical concerns. This raises the worry that the guessing account isn't purely alethic after all. I claimed earlier that the guessing arguments, unlike Dutch Book arguments, do not need to be "depragmatized." But depragmatized Dutch Book arguments nevertheless give us useful guidance for how the guessing arguments should be interpreted. What we're interested in isn't the all-things-considered rationality of guessing, or preparing to guess—after all, guessing falsely might be all-things-considered rational in some cases, and in other cases we might know that we won't have to guess at all. Rather, we should look at the guesses sanctioned by our credences, and see them as illustrations of underlying properties of those credences.

In fact, this aspect of the depragmatization strategy seems to me to work better for the guessing account than for Dutch Book arguments. For the guessing account, we need only look at one particular kind of action (answering whether propositions are true or false), and one desire (to answer truly), to illustrate the epistemic phenomena we are interested in. This action and desire are much more directly connected to epistemic concerns than betting behavior is. As I mentioned before, guessing is already an action with familiar correctness conditions, which are the same as those of full belief. It is natural to think, therefore, that credences that license true guesses are better, epistemically speaking, than credences that license false ones. And if your credences unnecessarily license guaranteed-false guesses—or if your credences are more likely, by your own lights, to license false guesses than other credences you could have—it is irrational to hold those credences.
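Here is the numerical sketch promised above. It is my own reconstruction of the coin argument, on the simplifying assumption that for each weighted coin of bias b you must guess between "P" and "the coin lands heads," answering with whichever option your credences rate more likely; P is stipulated to be true.

```python
import numpy as np

def expected_correct(c, biases):
    # P is stipulated to be true. Facing "P, or heads on a coin of bias b?",
    # you guess P when c > b (guaranteed correct, since P is true) and
    # "heads" when c <= b (correct with probability b).
    return np.mean([1.0 if c > b else b for b in biases])

biases = np.linspace(0.001, 0.999, 999)    # the continuum of weighted coins

for c in (0.50001, 0.8, 0.9, 0.99):
    print(f"credence {c:<7} -> expected share of correct guesses: "
          f"{expected_correct(c, biases):.4f}")

# The share climbs monotonically (roughly 0.875, 0.980, 0.995, 0.9999):
# the closer your credence sits to the truth, the more coins you out-guess.
```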

6. Conclusion

We now have a new formal notion: educated guessing. I have argued that it gives us a natural and plausible way to think about accuracy for credences. The framework is simple and non-committal, and it fits together with many other things we might independently want from our theory of epistemic rationality. It vindicates Immodesty, and with a couple of plausible norms, can be used to argue for probabilism as well. In addition, I have pointed to a number of other potential applications that the framework might be used for, and highlighted some advantages that it has over other popular approaches. I hope to have established the guessing framework as a valuable new addition to our epistemic picture: one that can do much of the work that we need, and which has some significant advantages over its competitors.

Alongside these advantages, the guessing account also raises some questions. How do educated guesses fit in with a plausible philosophy of mind? Are there implications for other epistemological issues, like knowledge or full belief? Morality? Practical rationality?


Do the norms for rational guessing give us strong enough constraints on rational credence? And so on. As with any new philosophical tool, we can begin to answer these questions as we see what the tool can be used for. It is my hope that educated guesses can do quite a lot.35

35 Earlier versions of this chapter were presented at the 2015 Formal Epistemology Workshop, the 2015 Central Division meeting of the APA, the Fall 2014 Formal Epistemology Seminar at Rutgers University, the UT Austin Epistemology Group, UMass Amherst, Dartmouth College, and Western Washington University, as well as various work-in-progress venues at Rice University and MIT. I am grateful to audiences there for their helpful comments, and especially to my commentators at the APA and FEW: James Joyce, Sinan Dogramaci, and Brian Knab. For helpful discussion and comments, thanks also to Jennifer Carr, David Christensen, Nilanjan Das, Ryan Doody, Kenny Easwaran, Brian Hedden, Brendan de Kenessey, Jack Marley-Payne, Hanti Lin, Miriam Schoenfield, Paulina Sliwa, and Dennis Whitcomb, as well as others I've probably left out. Special thanks to Branden Fitelson and Bernhard Salow for lots of help on some important parts.

References

Berker, Selim. (2013a) "The Rejection of Epistemic Consequentialism." Philosophical Issues 23: 363–87.
Berker, Selim. (2013b) "Epistemic Teleology and the Separateness of Propositions." Philosophical Review 122: 337–93.
Caie, Michael. (2013) "Rational Probabilistic Incoherence." Philosophical Review 122 (4): 527–75.
Carr, Jennifer. (2017) "Accuracy or Coherence?" Philosophy and Phenomenological Research 95 (3): 511–34.
Christensen, David. (1996) "Dutch-Book Arguments Depragmatized: Epistemic Consistency for Partial Believers." The Journal of Philosophy 93: 450–79.
Elga, Adam. (2007) "Reflection and Disagreement." Noûs 41 (3): 478–502.
Fitelson, Branden and McCarthy, David. (2014) "Toward an Epistemic Foundation for Comparative Confidence," ms. Available at: http://fitelson.org/cc_handout.pdf.
Gibbard, Allan. (2008) "Rational Credence and the Value of Truth." In Tamar Szabó Gendler and John Hawthorne (eds) Oxford Studies in Epistemology, vol. 2. Oxford: Oxford University Press, pp. 143–64.
Greaves, Hilary. (2013) "Epistemic Decision Theory." Mind 122 (488): 915–52.
Greaves, Hilary and Wallace, David. (2006) "Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility." Mind 115 (459): 607–32.
Hájek, Alan. (2011) "A Puzzle About Partial Belief," ms.
Horowitz, Sophie and Sliwa, Paulina. (2015) "Respecting All the Evidence." Philosophical Studies 172 (11): 2835–58.
Howson, Colin and Urbach, Peter. (1993) Scientific Reasoning: The Bayesian Approach, second edition. La Salle, IL: Open Court.
Joyce, James. (1998) "A Nonpragmatic Vindication of Probabilism." Philosophy of Science 65 (4): 575–603.
Joyce, James. (2009) "Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief." In Franz Huber and Christoph Schmidt-Petri (eds) Degrees of Belief. Berlin: Springer.
Konek, Jason and Levinstein, Ben. (2017) "The Foundations of Epistemic Decision Theory." Mind, fzw044. Available at: https://doi.org/10.1093/mind/fzw044.
Leitgeb, Hannes and Pettigrew, Richard. (2010) "An Objective Justification of Bayesianism I: Measuring Inaccuracy." Philosophy of Science 77 (2): 201–35.
Lewis, David. (1971) "Immodest Inductive Methods." Philosophy of Science 38 (1): 54–63.
Lin, Hanti and Kelly, Kevin. (2011) "A Geo-logical Solution to the Lottery Paradox, with Applications to Conditional Logic." Synthese 186 (2): 531–75.
Maher, Patrick. (2002) "Joyce's Argument for Probabilism." Philosophy of Science 69 (1): 73–81.
Ramsey, Frank. (1926) "Truth and Probability." Reprinted (1964) in Henry E. Kyburg and Howard E. Smokler (eds) Studies in Subjective Probability. New York, NY: John Wiley & Sons.
Schaffer, Jonathan. (2004) "From Contextualism to Contrastivism." Philosophical Studies 119: 73–103.
Schoenfield, Miriam. (2014) "Permission to Believe: Why Permissivism is True and What it Tells Us about Irrelevant Influences on Belief." Noûs 48 (2): 193–218.
Skyrms, Brian. (1987) "Dynamic Coherence and Probability Kinematics." Philosophy of Science 54: 1–20.
Vineberg, Susan. (2011) "Dutch Book Arguments." In Edward N. Zalta (ed.) The Stanford Encyclopedia of Philosophy (Summer 2011 Edition). Available at: http://plato.stanford.edu/archives/sum2011/entries/dutch-book/.
White, Roger. (2009) "On Treating Oneself and Others as Thermometers." Episteme 6 (3): 233–50.


5. Who Wants to Know?
Jennifer Nado

The year is 1947. A normal, well-trained doctor examines a patient who has come to his office complaining of a mild fever and headache. After ruling out any serious illness, the doctor recommends that the patient take a tablet of aspirin three times a day to reduce the fever and control the pain, and that she check back in if symptoms continue.

Does the doctor know that aspirin is effective for the treatment of mild pains and fevers? It's not a trick question. I'm expecting you to say 'yes'. Barring some severe deficit in their education, any doctor in 1947 would most certainly know the basic properties of aspirin.

1947, however, is one year before the first published randomized controlled trial of a medication (Bhatt 2010); it is also eight years before the publication of Henry Beecher's "The Powerful Placebo," which popularized the idea of placebo-controlled studies (Beecher 1955).1 Modern experimental methods in clinical trials are in fact a surprisingly recent phenomenon; a 1951 analysis of 100 then-current trials found that 45% lacked even a basic control group, and another 18% employed inadequate controls (Ross 1951). Whatever evidence doctors had for the effectiveness of aspirin in 1947, then, would almost certainly not pass muster for (say) approval of a treatment for use in the United States by the current Food and Drug Administration.

Notice what this implies. When medical researchers investigate the properties of a new drug, the use of double-blinded, randomized, placebo-controlled trials is in many cases treated as obligatory2—quite plausibly, epistemically obligatory, though we'll return to that claim later. But our hypothetical doctor shows that it is clearly possible to know propositions of the form 'drug x is effective for treatment of symptom y' in the absence of such obligations having been met. I'd go further still—I'm quite comfortable with the idea that Hippocrates knew the medicinal properties of willow bark (the botanical source of aspirin) in the fifth century BCE, more than a millennium before the emergence of modern medicine. Again, whatever evidence Hippocrates possessed is woefully insufficient from the perspective of, say, the methodological standards demanded by modern medical journals.

If all this is correct, then it seems to me to suggest the following claim: current-day medical researchers don't aim their inquiries at the production of knowledge.

1 The existence of placebos had by then been recognized for some time, but they were primarily used for non-research purposes—e.g., to mollify hypochondriacal patients.
2 There are of course exceptions under circumstances that would prevent use of RCTs—for instance, if withholding treatment from a control group would be unethical.


Not as an ultimate goal, and not even as a means to an ultimate goal of true belief. Instead, medical researchers appear to aim for a more stringent, 'higher' epistemic state—one that reduces the chance of error far below the normal thresholds required for knowledge. More carefully, then, these researchers do not aim at the production of knowledge qua knowledge, but instead at a brand of ultra-high-quality knowledge—five-star knowledge, if you will. A bit of reflection will indicate that a similar claim could be made for many other branches of inquiry that involve proprietary, specialized methodological procedures, such as the various sciences, mathematics, law, journalism, and quite plausibly philosophy. For most academic disciplines and other inquiry-centered professional fields, knowledge simpliciter is simply not what inquirers want.

That's what I'll argue, at least. Obviously, there are other reactions one might have to the apparent ability of doctors to know medical truths prior to the mid-twentieth century. The first thought that's likely to come to mind would be to account for the apparently elevated standards of the modern medical community via one of the many available shifting-standards accounts of knowledge—some variety of contextualism or subject-sensitive invariantism, presumably. Or one could find some sort of argument to deny that the 1947 doctor and the modern medical community face different epistemic requirements, despite appearances. But I hope to convince you that such moves are much less plausible than they might initially appear. A much simpler option—and one which I see no real reason to resist—is to embrace a form of epistemic pluralism. Knowledge is not the be-all and end-all of human cognitive activity, but only one category of epistemic good among many. And for many epistemic undertakings, knowledge alone is just not good enough.

1. Our Knowledge-Centered Epistemology

I think it would be fairly uncontroversial to claim that knowledge has been, and continues to be, the most central epistemic concept within traditional, non-formal epistemology. Indeed, 'epistemology' itself is most commonly defined as the study of knowledge. The focus is plausibly rooted in Plato, among others, but is most fully evident in contemporary epistemology; as any undergraduate philosophy major knows, twentieth-century epistemology was littered with endless attempts to analyze the concept of knowledge, particularly in the years immediately following Gettier's counterexamples to the classic JTB analysis.3 Of course, there was also much attention paid to such notions as justification, but justification itself has generally been considered to inherit its importance from its role in the analysis of knowledge. In these early years of the twenty-first century, there has been increasing skepticism about the prospects for a successful analysis of knowledge; yet we've also seen the growth of 'knowledge-first' epistemology. Knowledge continues to hold center stage.

3 Here and throughout the chapter, I restrict my attention to non-formal epistemology.


1.1. The Value of Knowledge

There have been numerous attempts to elucidate just why knowledge should seem so important, and so central. Typically, these attempts focus on the question of why knowledge should be more valuable than mere true belief. The locus classicus for such accounts is in Plato's Meno, in which Plato first raises the problem by noting that true belief appears just as good as knowledge for achieving, say, the goal of finding one's way to the city of Larissa. Why, then, should we value knowledge as highly as we do? Plato's answer is that knowledge, unlike true belief, is 'tethered'—less apt to disappear when one is questioned or presented with reasons to doubt.

Contemporary responses to the 'Meno problem'—and to the more general question of why knowledge deserves special attention4—are varied. Virtue epistemologists have suggested that their views can account for the value of knowledge by casting knowledge as a sort of cognitive achievement arising from epistemic virtue; this sort of achievement, in turn, is held to be intrinsically valuable (Zagzebski 2003, Greco 2003). Reliabilists like Alvin Goldman have located the value of knowledge in the fact that the reliability that accompanies knowledge makes our future beliefs likely to be true (Goldman and Olsson 2009). Edward Craig (1991) suggests that the concept of knowledge developed in order to help us to identify good sources of testimony. Timothy Williamson (2000) claims that knowledge holds interest owing to its status as the most general factive mental state.

Another popular way to account for the centrality of the knowledge concept in epistemology is to claim that it plays a role in certain distinctively epistemic norms. Williamson (2000), for instance, provides extensive argument in support of the claim that knowledge is the norm of assertion, in the sense that one should only assert p if one knows that p. Fans of subject-sensitive invariantism have suggested that knowledge is the norm of action, in the sense that one should only act upon p if one knows that p (Hawthorne 2004, Stanley 2005, Hawthorne and Stanley 2008). Either of these suggestions, if true, would clearly show knowledge to merit philosophical attention.

Finally, a few authors have suggested that knowledge is also the "aim" of belief; we might think of this as claiming that a belief 'gets it right' if and only if it qualifies as knowledge (Peacocke 1999, Williamson 2000). More traditionally, the aim of belief has been seen as not knowledge, but mere truth (see e.g. Velleman 2000). Yet even if one accepts the latter claim, knowledge may still occupy a quite central role—Ralph Wedgwood, for instance, claims that as inquirers we aim at knowledge as a means for achieving the ultimate aim of true belief, noting that "we almost never aim to have true belief without at the same time aiming to know" (Wedgwood 2002, 289).

4 Pritchard (2007) helpfully distinguishes between the ‘primary’ value problem posed in the Meno and a ‘secondary’ value problem—why should knowledge have more value than any proper subset of its parts, such as justified true belief?


Nonetheless, a few contrary souls deny that knowledge has any special value that justifies its central role in epistemology. Jonathan Kvanvig (2003), for instance, argues that the value of knowledge does not exceed that of its parts. He argues that epistemological theorizing would do better to focus on the epistemic state of understanding—noting that this is not a mere species of knowledge, since one may understand without knowing and vice versa. Understanding, Kvanvig argues, does have unique value, and is thus more worthy of philosophical attention.

Though I won't argue the case here, I'm at least a little inclined to think that knowledge is a reasonably useful concept for epistemology, whether owing to its intrinsic value or for some other reason. Many of the arguments discussed above seem to me to be quite plausible—knowledge does seem to play some sort of role in regulating proper assertion and practical decision-making, for instance, and the concept of knowledge also quite plausibly plays a crucial role in helping us to identify reliable sources of testimony. Or at least, all this seems to me to be true in normal everyday contexts, which are the contexts that epistemologists have standardly focused on (when they are not considering bizarre skeptical scenarios, that is). Whether or not the considerations above show that knowledge has some distinctive theoretical value, what's important for my purposes is this: none of them rules out the possibility of other types of epistemic state which also have epistemic interest or value—possibly even more interest or value than knowledge. Understanding may be one of these; but it's certainly not the only possible non-knowledge state of theoretical interest. I'll be aiming to convince you that there may well be many.

1.2. Epistemic States Higher than Knowing

No matter what one's view on the value of knowing, it seems clear enough that knowledge is not the highest possible epistemic standing one might achieve (nor the lowest, for that matter). Though the traditional view of knowledge held it to require complete certainty or infallibility (for instance according to Plato and Descartes), contemporary epistemologists typically concede that such requirements lead to skepticism. We possess vanishingly few beliefs, if any, that meet such stringent criteria. Most epistemologists seem to take the following as a constraint on a successful account of knowledge: one's account must not be so stringent that it implies that none of us possess any knowledge at all. So if a theory of knowledge leads to skepticism, this will be considered by many to be fatal to the theory. But would it be any better if a theory of knowledge implied that the vast majority of humans almost never achieve knowledge? Most of us would not be satisfied with such an account. We want our theory of knowledge to do justice to the Moorean sentiment that most of us know quite a lot—we know our own names, we know where we left our keys (usually), we know that water is wet, we know that the sky is blue.


So the real constraint seems to be this: a theory of knowledge must not be so stringent as to rule out a significant proportion of those everyday true beliefs which we commonly class as known. Knowledge, then, will presumably need to be compatible with epistemic standings that fall quite a bit short of certainty.

There's a further reason to think that the bar for knowing must be fairly low. If knowledge does play a role in norms governing assertion and action, then a high-standards view of knowledge will imply that most of our assertions and actions are unwarranted (Hawthorne 2004). This is implausible, to put it mildly. We're inclined to think that large proportions of our practical reasoning and of our acts of assertion are perfectly acceptable. Note too that given the time-sensitive nature of much of our practical reasoning, it must be possible in many cases to get at knowledge rather quickly and easily. We are not generally at leisure to gather all possible evidence that might be relevant to determining the facts—we quite frequently must act on less than total information.

All this suggests that knowledge is a less demanding state than total certainty. But those two types of state only scratch the surface of the possible carvings of epistemic space. Understanding, of course, has already been mentioned. But why stop there? Suppose, as many epistemologists do, that knowing does not require knowing that one knows. Then knowing that one knows that P is presumably a more demanding (and presumably a more valuable) state than merely knowing that P; yet it's arguable that one can know that one knows P without total certainty that P. Or, consider the plausible claim that some level of reliability in the belief-generating process is necessary for knowing (sufficiency being a separate question). So long as the required reliability is less than 100%, we can speak of categories of epistemic state which involve a higher level of reliability than is strictly required for knowing, while falling short of utter certainty.5 The possibilities are nigh-endless—depending on one's views on closure, safety, sensitivity, and all the rest, one can define up all sorts of epistemic-state types that exceed the requirements for knowing by demanding whatever features knowledge fails to necessitate.6

So let's put aside the question of whether and why knowledge is more valuable than mere true belief; I'm more interested in asking the question why knowledge should be so uniquely of interest given all the potential epistemic standings that exceed merely truly believing. My claim will be that knowledge is simply not the only such state of interest. Knowledge may be the concept which regulates appropriate epistemic behavior in most 'ordinary' circumstances, but it does not reflect the elevated epistemic standards that we have implemented in specialized, professional fields of inquiry. Within these fields, inquirers appear to aim at epistemic states that exceed knowing. In the next section, I'll begin to explore this idea via the framework of epistemic normativity.

5 Note that if you object to reliability's being even a necessary condition for knowing, then this is even more obviously true.
6 Indeed, epistemic logic recognizes a variety of possible axiom systems, corresponding to a scale of epistemic states of different levels of demandingness.



2. Obligations in Ethics and in Epistemology

Discussions of the value of knowledge naturally tend to invoke a notion of epistemic normativity—a notion of 'oughtness' which is parallel to, but at least prima facie distinct from, the notion of normativity familiar from moral theory. A notion of epistemic normativity is of course directly implied by those who claim knowledge to be the norm of assertion or action, or the aim of belief. But quite generally, philosophers and laypersons alike are comfortable making normative evaluations regarding matters epistemological. We will say that an agent ought to believe such-and-so, or to infer in such-and-so ways. We will say that someone made a mistake in reasoning. We quite regularly evaluate others' epistemic performance, and distribute praise and blame on the basis of said evaluation.

2.1. Characterizing Epistemic Normativity

I will be making use of various normative terms in what follows, in particular the notions of epistemic obligation, epistemic permission, and the oft-neglected epistemic supererogation. I'll be claiming, in short, that professional inquirers are under heightened epistemic obligations. Before I do so, however, a number of clarifications and caveats regarding epistemic normativity are in order.

First: I don't intend to commit myself, here, to any particular position on the metaphysical status of norms. As a card-carrying naturalist, I'd prefer to avoid any characterization of norms that would leave them irreducible to more familiar ontic categories—fortunately, there are many naturalistically acceptable interpretations of normative talk available. One possibility would be to cast the norms I'll be discussing as merely expressing claims about the epistemic expectations individuals in our society have of one another. Another would be to express them as mere facts about means–ends relationships—as in 'if an agent desires to achieve x, this desire will be most successfully achieved by doing y'. Even the staunchest fan of naturalized epistemology should be comfortable, I would hope, with 'normative' claims of those sorts. Ultimately, however, I don't think that these issues need to be settled for current purposes; nothing that is to come will hinge on any particular assumptions on the nature of the normative.

I also don't intend my use of terms such as 'epistemic obligation' to imply any particular kind of view on knowledge or justification. I don't mean to imply that the correct theory of knowledge is 'deontological', for instance. As William Alston (1988) has noted, we must take some care in characterizing epistemic normativity. If we claim that a subject is obligated/permitted/forbidden to believe such-and-so, then if we are doxastic involuntarists these obligations will appear to run afoul of the principle that 'ought implies can'. This seems to be a general issue when one speaks of epistemic obligations as placing restrictions on the sorts of beliefs we may hold.


To avoid this, rather than talking of obligations and permissions with regard to belief, we can instead talk of obligations and permissions with regard to things we do clearly have control over—our gathering of evidence, our use of inference procedures, our reflective consideration of our belief set, and so forth (see also Kim 1994, Feldman 2000). Call these 'epistemic actions'. When I speak of epistemic obligation, then, I have in mind not the obligation to believe such-and-so but the obligation to perform such-and-so epistemic action.

None of this requires us to take any particular position on the nature of justification or knowledge. A die-hard reliabilist, for instance, might accept that the limited reliability of human cognition frequently leaves agents with unjustified beliefs despite their having fulfilled all their epistemic obligations—in other words, despite having performed all the epistemic actions that could reasonably be expected of them. Nonetheless, it is plausible that there is some link. If knowledge requires (say) beliefs formed by a process that is at least 80% reliable, then it might be obligatory for me to perform any epistemic action within my power until reaching that level of reliability (or something along these lines). Or perhaps my obligations are linked to my own assessment of my epistemic state—if I take myself to not yet know whether p, then I am obligated to investigate or reason or reflect further.7 A great many ways of cashing out the link between epistemic action and knowledge will leave open the possibility that I might fulfill my obligations and achieve true belief, yet fail to know; or vice versa. For the former proposal, I might perform all epistemic actions available to me, yet achieve my true belief via a merely 79% reliable process; or for the latter proposal, I might mistakenly take myself to know and hence decide that further epistemic actions are unnecessary. Again, nothing that follows will hinge on any particular take on the correct relationship between epistemic obligations and knowledge.

In any case, it seems to be epistemic actions (and not beliefs per se) that serve as the ultimate locus of epistemological praise and blame in ordinary discourse. When I criticize a friend for falsely believing that a sham medical treatment would cure her chronic illness, I would explain my disapproval by noting that she ought to have done more research before wasting her money. When I criticize another's belief, what I really criticize is normally the fallacious reasoning that caused them to hold it, or their lack of reflection on a belief their parents instilled in them, or their cherry-picking of evidential sources, or what have you. Ultimately there must be some sense to be made of the familiar, ordinary talk regarding what one epistemically 'ought' to do, regardless of one's philosophical positions on the various issues surrounding epistemic normativity. I don't have any particular horse in those races; my terminology is ultimately born from convenience and expository ease.

7 Assuming, that is, that the truth or falsity of p is of interest to me.


My use of various normative terms should be interpreted in whichever way one prefers to interpret the normative language used by non-philosophers in ordinary attributions of epistemic evaluation.

2.2. Epistemic Supererogation and Special Epistemic Obligations

With these preliminary caveats in place, let's consider what moral normativity might suggest to us about epistemic normativity. Ethicists standardly recognize several deontological statuses an action may have: it may be obligatory or forbidden; it may be permissible; it may be supererogatory.8 It is obligatory for me to refrain from murdering for pleasure; it is permissible for me to buy myself a cup of coffee; it is supererogatory for me to give up coffee and donate the money I save to charity. As we've noted, parallel categorizations feel quite natural within the realm of epistemology. It is obligatory for me to gather a certain amount of information before endorsing a presidential candidate; it is permissible for me to trust my senses; and so forth. Strangely enough, however, the category of supererogation is rarely discussed within epistemology. Tidman (1996) makes brief use of the idea; Hedberg (2014) is a rare instance of a paper-length treatment of the notion. But epistemic supererogation nonetheless remains largely unexplored territory.

We've already noted that there are many possible epistemic states that exceed the requirements for knowing. It seems equally clear that there will be epistemic actions that exceed our epistemic obligations—especially if our epistemic obligations are in some way tied to the requirements of knowing. Imagine again that knowledge requires one's belief to be formed by a process that is 80% reliable. If I select and employ a method which is 98% reliable, then prima facie this looks to be a case of epistemic supererogation—I have exceeded my epistemic duties. For a more concrete example, suppose that I check ten separate sources before resting content with my belief that Mogadishu is the capital of Somalia. It seems obvious that I did not need to do that; surely one or two would have been enough (indeed, in actual fact I only checked one while writing this paragraph). But it also seems obvious that I have improved my epistemic position at least a little by the additional checking—though, all things considered, my time might have been better spent in other ways. The epistemic action of decuple-checking was epistemically supererogatory.

Within ethics, philosophers often discuss circumstances under which one incurs special moral obligations for one reason or another. It's intuitive, for instance, that I have greater obligations to my immediate family than to strangers. Another quite plausible idea is that members of certain professions are obligated to perform actions that would ordinarily be merely supererogatory. A professional firefighter, for instance, is obligated to enter a burning building (at potential personal risk) to save a trapped inhabitant. Yet for the civilian passerby, such an act is merely supererogatory.

8 And of course, actions can belong in more than one of these categories.


Similarly, doctors are under moral obligations that do not apply to ordinary folks, such as the obligation to treat, to maintain confidentiality, and to adhere to the particular procedures surrounding informed consent. Professors, as well, are under certain special moral obligations—e.g. to their students. Plausibly, the members of these professions voluntarily take on these extra moral obligations when they join their field. Perhaps somewhat more speculatively, the very existence of these professions serves to increase instances of morally supererogatory acts within society; we pay firefighters so that we do not need to rely on the fickle benevolence of strangers.

Might there be a similar phenomenon within the realm of the epistemic? I claim that there is: professionals in many academic and inquiry-centered fields take on extra epistemic obligations in virtue of their membership in their profession. In other words, what would ordinarily be epistemically supererogatory becomes, for the professional, epistemically obligatory. Recall, for instance, the aforementioned stringent testing procedures that regulate the adoption of new clinical treatments; I would argue that this indicates that medical researchers are under extra epistemic obligations with regard to propositions involving the efficacy of various drugs. They are obligated to perform more epistemic actions with regard to those propositions than the rest of us are, and they incur this special obligation as a result of their professional role.

Medical testing is far from the only example. There exists a vast array of discipline-specific methodological requirements that prima facie suggest the existence of extra, non-standard obligations to perform epistemic actions. The most obvious examples come from the sciences. Consider for instance the measurement of significance employed in analyzing experimental data—the p-value. A scientist cannot infer directly from the raw experimental data to the support of her hypothesis; her experimental results are evidence for the hypothesis only if it can be shown that the likelihood of such results manifesting given the 'null hypothesis' is sufficiently low. Interestingly, the required threshold differs from field to field. In the social sciences, data are generally required to meet a significance threshold of 0.05, reflecting a 95% confidence level. Yet within the field of particle physics, researchers were unwilling to announce the discovery of the Higgs boson until reaching a five-sigma confidence level, corresponding to a p-value of 0.0000003.

Now, it is obvious that we do not expect anything like this analysis of significance from ordinary cognizers in everyday circumstances, even for information to which it could be applied. For one, the vast majority of cognizers have no idea how to perform such analyses. But even for those that possess the relevant skill, it would be extraordinarily strange to apply it to, say, determine whether one ought to believe that a San Antonio Spurs loss tends to cause one's husband to be in a foul mood. It's simply not worth the time or effort to achieve the increase in epistemic standing that would result from an organized, methodical gathering and analysis of post-game mood data. Casual observation is sufficient.
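For readers who want to check the two thresholds just mentioned, here is a small sketch of my own that converts sigma levels into one-tailed p-values using the standard normal distribution:

```python
from math import erfc, sqrt

def one_tailed_p(sigma):
    # Probability that a standard-normal variable exceeds `sigma`
    # standard deviations above the mean (the upper-tail p-value).
    return 0.5 * erfc(sigma / sqrt(2))

print(f"1.645 sigma -> p = {one_tailed_p(1.645):.3f}")  # ~0.05, the social-science bar
print(f"5 sigma     -> p = {one_tailed_p(5):.7f}")      # ~0.0000003, the five-sigma bar
```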


Further examples of extra-stringent requirements in science are easy to generate. The standards of experimental design, for instance, incorporate many procedures and restrictions that serve to lessen the impact of various cognitive errors and biases. Double blinding is a familiar example; it serves to lessen the biasing effect of the experimenters' and subjects' expectations. The practices of randomization and replication, the use of control conditions, and so on also serve to reduce the possibilities of error. Scientists also take greater care than laypersons to minimize the fallibility of perception, as evidenced by the use of precise measuring apparatus and repeated measurements, as well as e.g. the use of video recording, multiple independent coders for data, and so forth. And even after all this methodological care, scientists frequently perform meta-analyses to reduce the risk of error yet further. It goes without saying that these practices are not standard during everyday cognitive activity, even in circumstances where they could be applied—and even for those few who are trained in their use.

The sciences are the clearest example of elevated methodological requirements, but they are far from the only case of apparent special epistemic obligations. Consider the epistemic practices enshrined in our legal systems. The US federal courts, for example, make use of a complex system of rules of evidence which regulate admissible evidence in court proceedings. 'Hearsay', in the technical sense of a statement based on testimony received outside the courtroom, is prohibited as admissible evidence in court; it is, of course, a perfectly normal source of evidence in daily belief-formation. Similarly, evidence regarding a defendant's character is admissible only in certain well-defined circumstances. Many readers will be familiar with the goal of demonstrating guilt 'beyond a reasonable doubt'; in fact, this is but one standard of evidence among many in use across various US legal contexts.9 Most of us, of course, tend to form the vast majority of our beliefs without meeting this rigorous standard.

I won't belabor the point, other than to note briefly a few other fields that strike me as employing elevated standards. Journalists are plausibly under special obligations regarding fairness, careful sourcing and verification of evidence, fact-checking and the like. Mathematicians may well be under an obligation to seek proof of mathematical propositions in any case where this is possible. Note, for instance, the continuing quest to prove Goldbach's conjecture, despite the fact that computers have provided inductive evidence that the conjecture holds by verifying it for all even numbers up to 4 × 10^18.10

9 Interestingly, civil cases frequently require only a 'preponderance of evidence', which consists in the proposition in question's being more likely true than false. This is quite plausibly a lower standard than is required for knowing. To my eyes, this simply indicates an even greater variety of possible epistemic standings of interest.
10 There's a thorny question regarding whether inductive evidence can ever grant even ordinary knowledge of mathematical claims. I'm inclined to say that we are justified in believing Goldbach's conjecture to be true; readers who disagree may simply ignore this example.
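As a toy illustration of what such computational verification involves (my own sketch; the actual searches use vastly more efficient machinery to reach 4 × 10^18), one can confirm the conjecture by brute force over a small range:

```python
def is_prime(n):
    # Trial division; perfectly adequate for a toy range.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    # Return primes (p, q) with p + q == n, or None if no pair exists.
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Check every even number from 4 up to 10,000.
assert all(goldbach_pair(n) for n in range(4, 10001, 2))
print("Goldbach's conjecture holds for every even number up to 10,000")
```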


Academics in all fields, including philosophers, are under all sorts of obligations regarding clarity and rigor of argumentation, pursuit of relevant literature, and so forth. Consider for a moment the vastly differing standards to which you hold an undergraduate and a colleague—the very same paper that would receive an A in an introductory class would provoke scathing criticism, even reproach, when presented by a senior professor. Professor McX really ought to be ashamed; he should have been more familiar with the work of so-and-so; he should have considered such-and-such obvious objection.

2.3. Explaining the Phenomenon

So let's suppose that I have convinced you that there is at least a prima facie difference between the epistemic activities that ordinary knowers standardly perform, and the stringent methodological requirements that professional inquirers hold themselves to. There are of course a number of ways in which we might explain this difference in expectations; I've mentioned shifting-standards accounts of knowledge, but another obvious route would be to claim that the extra obligations faced by the scientist are moral or practical, rather than genuinely epistemic. Finally, one might claim that everyday cognizers are, despite appearances, obligated to e.g. reach the level of statistical significance demanded in the sciences. We'll deal with all these possibilities shortly. For the moment, let's focus on a question that arises if we take the phenomenon at face value—why should this difference exist?

To my eyes, it is for essentially the same reason as in the moral case. In ethics, a common objection to naïve utilitarianism is that it is obscenely overdemanding; it would require each of us to give up on all of life's small luxuries, forgoing our daily lattes so that we might donate every scrap of non-essential income to the alleviation of world poverty. Even utilitarianism's patron saint Peter Singer falls massively short of such a demand. We are mere mortals, and the moral obligations that hold of us must respect our weaknesses. Nonetheless, in certain professional contexts we demand at least a somewhat higher ethical standard; we pay firefighters, doctors, and the like to perform actions that are not morally required of the layperson.

If this is true in the moral case, then surely the same may be true in the epistemic case. Consider again the p-value threshold required of data in the sciences. Even meeting the relatively lax 0.05 value required in the social sciences often requires dozens of subjects; the ordinary man on the street simply does not have the luxurious quantities of free time that would be needed to gather that much data. If doing so were obligatory before forming beliefs about causal dependencies, most working folk would be in flagrant dereliction of duty. If the link between epistemic obligation and knowledge is reasonably tight, this would further imply that most of us know very little. Similar issues arise with the knowledge norms of assertion and action. If we refused to assert causal dependencies when bereft of scientific-grade evidential wealth, communication would be extraordinarily tricky.


And if we refused to act on causal beliefs when so bereft, we'd soon go extinct. We are creatures of limited time and means. We have to make do with moderate levels of epistemic risk.

Yet the limitations that we face as individuals are much less relevant when the scientific community as a whole is considered. Ordinarily, we can't expect the average man on the street to implement the stringent methodology of modern science; but we can if we pay him to spend 40-plus hours a week doing so. For certain tricky propositions it may take years, decades, or even longer to reach scientific-grade knowledge; but since science is not the task of a single individual, time constraints largely fall by the wayside. Similar points could be made about other resources that limit inquiry, such as cognitive energy, access to information and education, funds to purchase needed equipment, and so forth. We can reasonably expect quite a bit more from a group of well-trained professionals who devote much of their waking hours to inquiry than we can from a solitary individual who must balance inquiry with the other demands of daily living. By introducing professions whose charge is to conduct high-quality inquiry, we make the unreasonable reasonable—and the supererogatory obligatory.

3. Special Epistemic Obligations and the Aims of Professional Inquiry

As noted earlier, I think it is plausible that the epistemic activities that we consider obligatory in ordinary cognizing are linked, in some way, to the goal of achieving knowledge. Perhaps knowledge is our ultimate goal, or perhaps this goal is merely instrumental to the ultimate goal of achieving truth. But we might say that, in ordinary inquiry, it is at least plausible that the proximal aim is to know, and we expect cognizers to take appropriate epistemic actions toward achieving that goal. The epistemic obligations present in inquiry-centered professions, however, appear to be different from those active in everyday contexts. This suggests that the goal may be different, too. I'd like to suggest that professional inquirers don't aim at mere knowledge, any more than a philanthropist aims at (say) merely avoiding causing harms.

Given the plausible importance of knowledge, there is an obvious temptation to assume that all of our activities of inquiry aim for it (if only instrumentally, as a means to get at truth). Those who take knowledge to be the aim of ordinary epistemic activity are thus likely to hold that knowledge is also the aim of science, journalism, law, philosophy and so forth; they will likely be less than happy with the proposal I have just made. Consider for instance Alexander Bird's statement about scientific inquiry: "as Aristotle tells us in the first line of the Metaphysics, 'All men by nature desire to know'. Science is the institutional embodiment of this desire" (Bird 2008, 281). As Moti Mizrahi (2013) notes, this knowledge-based conception of scientific progress seems to reflect the way scientists themselves talk about their work; and it was plausibly the default view among philosophers until Kuhn's (1996) Structure of Scientific Revolutions.


Bird's arguments for the knowledge view, however, are aimed not at the antirealist who denies scientific progress, but at the realist who conceptualizes it solely in terms of accumulation of truths. Bird (2007) notes that we would not consider science to have 'progressed' if the scientific community began to believe a truth for irrational reasons; he takes this to imply that scientific progress consists in accumulation of knowledge. Assuming that the aim of science is to achieve that which constitutes scientific progress, then, if Bird is right, the aim of science is knowledge rather than true belief. One could easily imagine similar arguments for the knowledge aim in other professional fields.11

There is, however, a certain failure to consider the full range of possibilities here. Why should the realist's options be limited to truth or knowledge? Suppose we accept Bird's argument against truth as the standard of progress; why think that scientists should be satisfied with mere knowing? Why shouldn't their goal be (say) certainty, or knowing that one knows? Perhaps all men do desire to know; but don't some of us also want more? Can't we, like the philanthropist, aim higher than the average man on the street? Bird takes science to be the institutional embodiment of the desire for knowledge; I'd suggest it is an institution whose existence serves to promote epistemically supererogatory practices of inquiry.

Nonetheless, there are multiple ways one could explain the epistemic practices of professionals without departing from the idea that knowledge is the aim of science (and journalism, and philosophy, and so on). There seem to be two broad options. First, one might deny that the epistemic obligations of professionals genuinely differ from those of laypersons. Second, one might accept that the obligations differ but hold that these obligations can be reconciled with the view that even professional inquiry aims at knowing. Let's examine each of these in turn.

3.1. Objection: The Obligations are not Epistemic

Taking the first of these alternate explanations, one might wonder whether the apparent special obligations described in the last section are genuinely epistemic. Perhaps they are merely pragmatic obligations, or even moral obligations. And indeed in the field of medicine the moral option has prima facie plausibility; there is presumably a moral obligation to prevent the release of harmful pharmaceuticals to the public, for instance. But most other fields of inquiry aren't like this. There are no plausible moral obligations surrounding the study of, say, genetic inheritance patterns in insect populations in Namibia. Are the oughts that drive the professional inquirer's epistemic actions merely prudential or pragmatic?

11 See for instance Kelp (2014) for somewhat similar arguments in favor of the view that knowledge is the aim of inquiry generally.


Again, much of the work done by professional inquirers has no immediate practical benefit; philosophers know this better than anyone. Of course, there are special pragmatic reasons for an individual researcher to follow the then-current expectations regarding blinding and the like—if she does not, she will not succeed in publishing, which will impede her chances for funding and so forth. But researchers do not just follow current methodological expectations. They also alter their methodological expectations in response to new information. The introduction of placebo-controlled trials is a case in point; this alteration of methodology would make little sense if researchers were concerned only with 'following the pack'. We should also recall that everyday inquiry frequently has quite a bit of pragmatic utility. If the obligations that surround ordinary epistemic activity are epistemic rather than pragmatic, then surely those surrounding professional inquiry are as well.

3.2. Objection: The Obligations are Universal

A second way to deny the existence of special epistemic obligations for professionals is to claim that all inquirers, even laypersons, are obligated to meet the standards professional inquirers hold themselves to. This need not involve claiming that all inquirers are obligated to run double-blinded experimental studies; instead, it might simply involve the claim that all inquirers are obligated to ensure that their beliefs are free from cognitive bias and other reasoning errors. We do, after all, criticize people who form beliefs in biased ways. A biased belief, so the story would presumably go, is unjustified and thus patently not knowledge. Given the plausible link between knowledge and epistemic obligation (in ordinary contexts), it would then be a short step to the claim that ordinary inquirers are generally obligated to act so as to avoid bias.

Of course, it is true that cognitive biases and other errors are frequently incompatible with knowledge. If the gambler's fallacy leads me to believe that I will win the next round of blackjack, I do not know that I will win the next round of blackjack—even if it turns out that my belief happened to be true. But a bit of reflection will show that at least some cognitive flaws must clearly be compatible with knowledge, if our ordinary ascriptions of knowledge are largely on the right track. Take for instance base-rate neglect. Textbook examples of base-rate neglect typically involve medical tests which have a certain rate of false positives; in order to calculate the probability that one has disease x given a positive test result, one must make use of information about the base rate of disease x in the population. Even doctors, however, are prone to ignore this information, thereby putting too much weight on positive test results. Now the average patient has presumably never given a single thought to how base rates affect the evidential value of a clinical test; yet we display absolutely no hesitation in attributing knowledge to those who have formed a (true) belief that they have such-and-so disease on the basis of such tests.
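For concreteness, here is the textbook calculation, with illustrative numbers of my own choosing: a disease with a 1% base rate, and a test with 90% sensitivity and a 5% false-positive rate.

```python
base_rate   = 0.01   # P(disease): 1% of the population has the disease
sensitivity = 0.90   # P(positive | disease)
false_pos   = 0.05   # P(positive | no disease)

# Bayes' theorem: P(disease | positive)
#   = P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * base_rate + false_pos * (1 - base_rate)
p_disease_given_positive = sensitivity * base_rate / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.2f}")
# Prints ~0.15. Reading the answer straight off the 90% sensitivity,
# as base-rate neglect does, overstates the evidence by a factor of six.
```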

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

128 | Jennifer Nado cases. What’s more, some of these ‘errors’ are even arguably adaptive, enabling faster decision-making with less expenditure of cognitive effort. Indeed, given the ubiquity of such cognitive behaviors, withholding knowledge attributions from any belief that had been ‘tainted’ by said behaviors would leave most cognizers with a rather impoverished knowledge base. Finally, consider the following: obviously there will be many cases where a person has formed a belief while influenced by bias, and where that belief is therefore not justified. But for any such case we can imagine a parallel case where the bias is not present, but where the agent has performed no particular epistemic actions to prevent the operation of bias, either. For instance, imagine a case where an agent’s judgment of her colleague’s competence is not in fact influenced by the colleague’s ethnic background, but where the agent has not made any particular effort to shield herself from possible tacit racial bias. I’d argue that cases of this sort are compatible with justified belief. Most people are generally in this latter position—very few people are even aware of the existence of cognitive biases, much less actively working to prevent their influence. Yet note that in fields of professional inquiry, it is not just obligatory to be free of bias; it’s obligatory to make an active effort to ensure one is free of bias. This obligation simply does not hold (at least to the same degree) in ordinary cognitive activity. 3.3. Objection: The Obligations Reflect an Attempt to Maximize Knowledge This might lead to another thought: perhaps scientists and other such inquirers are aiming at knowledge, but are simply going to extra lengths to make absolutely certain that they have in fact met their aim? This might be a version of the second strategy mentioned above—we might accept the existence of elevated obligations but maintain that the aim of all inquiry is still knowing. But notice that, if this hypothesis is correct, professional inquirers would be aiming at a higher state than knowing—they would be aiming at ‘being certain that one knows’.12 More plausibly, perhaps professional inquirers are simply organizing their epistemic activities such that they maximize the amount of knowledge they produce. Surely, by reducing the chances of error professional inquirers improve their chances of achieving knowledge. But it is not at all clear to me that the stringent error-reducing procedures in place in these fields actually succeed in maximizing knowledge. Consider two hypothetical inquirers, Alfred and Betty. Both desire to maximize their knowledge. Alfred proceeds by devoting extraordinary amounts of time and energy to eliminating every possible risk of error for each of the propositions he considers. Since this takes so much time, Alfred investigates only three propositions over the course of a year. Betty is also quite scrupulous about avoiding error, but not nearly so much so as 12

And presumably, being certain that one knows that p entails being certain that p. Which, as we have agreed, is a much more demanding state than mere knowledge that p.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Who Wants to Know? j 129 Alfred. She permits a reasonable risk of error for each proposition that she investigates, and as a consequence she investigates hundreds of propositions over the course of the same year. Chances are quite good that Betty will end up with more knowledge than Alfred at the end of the year. (Alfred, by contrast, will end up with a lower quantity of higher-grade epistemic states). Given our limited resources, there is a tradeoff between quality and quantity of epistemic goods over any given period of time. Scientists and other professional inquirers do of course allow for some degree of error, but it is in no way obvious that they are Bettys rather than Alfreds. Alternatively, but relatedly, it might be suggested that professional inquirers employ more stringent methods solely because they are inquiring into trickier matters. Certain propositions are, after all, harder to come to know than others—it is more difficult to know that asbestos causes cancer than it is to know that the cat is on the mat (leaving cases of testimony aside, that is). Indeed, a great many propositions may be unknowable without use of the particular methods of science—from such obvious cases as ‘electrons have negative charge’ to less-obvious cases involving, say, small effect sizes in experimental contexts. Thus, an objector might claim that professional inquirers always aim at (mere) knowledge, but that the difficulty of their subject matter necessitates the use of the stringent bias and error reduction procedures we have discussed. In fact, I do think that the existence of differing levels of ‘epistemic demandingness’—that is, the level of difficulty inherent in coming to know a proposition—partially explains the existence of special obligations within contexts of professional inquiry. Nonetheless, this cannot be the whole story, for there are many propositions which are both knowable by everyday means and subject to the special elevated standards we’ve been discussing. A substantive proportion of the propositions studied by medicine and by psychology fall into this category; particularly those bits of ‘common sense’ that are later verified scientifically, such as e.g. the claim that fatty foods contribute to obesity or the claim that financial hardship causes psychological stress. It goes without saying that philosophy, as well, vigorously and carefully investigates a large number of claims that fall within the domain of common-sense knowledge. So why do professionals bother investigating these sorts of common-sense claims with the full rigor of their fields’ methods, if they are already known? In part, because of the very large number of ‘common-sense’ claims that are later shown to be false. Examples here from the sciences include the claim that cold weather causes colds, or that ‘letting it out’ is a beneficial way to cope with feelings of anger. These bits of common sense are false, but that should not lead us to claim that all common sense fails to be knowledge; again, that would be too skeptical. A better conclusion is simply that we do not always know which bits of common sense are known—in line with the quite plausible claim that knowing does not entail knowing that one knows. In order to separate the wheat from the chaff, it is worth the professional’s time to double-check even propositions that might seem obvious. On this perspective,

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

130 | Jennifer Nado professional inquirers may well be aiming at knowledge of knowledge rather than ‘mere’ knowledge. But note that we need not be forced into shoehorning all goals of professional inquiry into iterations of knowledge. Perhaps certain professions aim to be certain that a proposition is known, or to be 99% confident that it is known, or even simply to be more justified than would be required in ordinary contexts. For whatever reason, propositions that are plausibly known prior to professional inquiry are frequently investigated in a more rigorous manner by professionals. This suggests that the special obligations of professional inquirers are not directly determined by the epistemic demandingness of the investigated propositions. 3.4. Objection: The Obligations Reflect Shifting Standards for Knowing Let’s turn to the second strategy mentioned above—that of admitting obligations while maintaining that the aim of all inquiry is knowledge. The most obvious option here is to invoke a shifting-standards account of knowledge. Let’s begin with contextualism. Contextualist accounts of knowledge claim that knowledge attributions—that is, statements of the form “S knows that P”—express different propositions depending upon the context in which they are uttered. In this way, terms such as ‘knowledge’ and ‘knows’ are said to resemble other familiar context-sensitive terms such as ‘here’, ‘big’, and so on. What determines the meaning of ‘knows’ in a given utterance is the features of the attributor’s conversational context. If a skeptical scenario has recently been mentioned, for instance, then ‘knowledge’ comes to express a more demanding epistemic property than it would in a more everyday context. One might think that the elevated standards operative in the fields we’ve been discussing could be accounted for quite easily by a contextualist theory of knowledge. The story would go something like this: during conversations in these fields, the possibility for error is typically made salient, thus resulting in a shift to a more demanding standard, which then generates special epistemic obligations. There are a few difficulties in taking this route, however. First, there are certain sentences which should turn out infelicitous on this sort of account which nonetheless strike me as fairly natural. Suppose, for instance, a scientist discussing possible future research projects with a colleague says something like the following: “We’ve known for a long time that long work hours cause mental fatigue in employees, but we don’t yet have scientific evidence demonstrating this effect”. On a contextualist account, the scientific context of this conversation should raise the standard for knowing such that the scientist’s knowledge claim is inapt. Yet it seems perfectly normal to claim, in a single breath, that a piece of ‘everyday’ knowledge has not yet been subjected to rigorous investigation that meets the standards of scientific inquiry. Second, it’s not clear that there is a single standard in effect in a given scientific context in the first place. Within, say, a psychology laboratory, the investigation of certain propositions will be subject to extra-stringent

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Who Wants to Know? j 131 methodological requirements; but other propositions will be held to essentially the same standards that are operative in everyday contexts. The psychologist will not feel obligated to perform any extra epistemic actions before, say, making a claim about the number of subjects in her control group. This complexity is difficult to accommodate on a contextualist approach. A further difficulty seems to arise from the fact that, on a contextualist account, it is the conversational context of an attributor that determines the standards. Thus, suppose researcher Smith has performed a fairly shoddy experiment to test the effectiveness of drug X; she has neglected to employ a placebo, and she did not properly randomize the assignment of subjects to experimental and control groups. Nonetheless, let’s suppose that her research provides better evidence than mere casual observation of the drug’s effectiveness during clinical practice. If we try to extend the contextualist account to cover obligations to perform epistemic actions, it seems that there should be conversational contexts in which it would be true to claim that Smith has done nothing epistemically blameworthy (conversations between disinterested laypersons, perhaps). But it seems quite clear that any such claims would be false, no matter what the conversational context. She does deserve blame; she has neglected her duties as a researcher. Ultimately, the contextualist solution is problematic because it is at heart a semantic thesis rather than a thesis about epistemic norms. Contextualism tells us about the meaning of the word ‘know’; it does not obviously tell us how we ought to go about forming beliefs. On a contextualist account, any change in standards is an artifact of conversation; but not so for the scientific standards we’ve been discussing. The epistemic obligations of the scientist are what they are. One can’t change them through conversational maneuvering.13 A more promising approach would be to appeal to features of the inquirers themselves, rather than any particular knowledge-attributor, to account for the shifting standards. The main competitor with contextualism within multiple-standards accounts of knowledge, which is standardly known as either Subject-Sensitive Invariantism or Interest-Relative Invariantism, therefore suggests itself as a plausible explanation for the phenomena at hand. According to SSI accounts (as I’ll call them), the meaning of ‘knows’ does not change in different contexts. Instead, whether an agent knows is simply sensitive to non-epistemic features of the agent—typically, what is at stake for the agent. If it is terrifically important that the agent have a true belief regarding proposition p, then then level of justification the agent must have before knowing p is elevated.

13

I have here largely assumed a version of contextualism upon which the standards are determined by conversational salience. This is not the only possible contextualist approach, of course—however, insofar as contextualist theories hold that the truth conditions of knowledge attributions are in part determined by the attributor, this last argument stands. The norms of inquiry simply do not appear to be attributor-sensitive.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

132 | Jennifer Nado Oddly, proponents of SSI accounts have tended to argue for them on the basis of semantic considerations, just as proponents of contextualism often do. From my perspective, however, whether or not SSI accounts properly capture our usage of the English word ‘knows’ is of very little interest. Of much greater interest is the overwhelmingly plausible observation that one’s epistemic obligations are affected by what is at stake. If your life depends on depositing a check on Saturday, you ought to double check the bank times. An agent who fails to do so in such a case deserves quite a bit of blame; an agent who rests content with her memory of previous bank visits while under no particular practical pressures seems, by contrast, blameless. Regardless of whether we agree that ‘knows’ picks out a property whose presence is sensitive to the practical stakes of a subject, everyone should admit that what epistemic actions one should perform is sensitive in such a way.14 Supposing that there is at least a moderately close link between epistemic obligation and the nature of knowledge, the sensitivity of epistemic obligations to stakes naturally suggests the sensitivity of knowledge to stakes. But what of the unconventional epistemic obligations of professional inquirers? Here, stakes-based SSI accounts don’t seem to be quite general enough to capture the phenomena. One obvious difficulty is that scientific curiosity and high practical stakes don’t correlate very closely. Medical research obviously involves high stakes owing to the risk of harming a patient population; but the vast majority of scientific research is not accompanied by such immediate and obvious practical consequences. If physicists are wrong about the Higgs-boson, very few people will be harmed. There may, of course, be very high stakes for an individual researcher with regard to a given scientific proposition; her professional career might rest on getting things right with regard to that proposition, particularly if she is not yet tenured. But SSI gets things wrong here, too—the methodological standards of scientific inquiry aren’t higher for the fresh assistant professor than they are for the jaded emeritus researcher. As Jonathan Schaffer has noted, “One cannot gain a competitive advantage in scientific inquiry . . . by not caring about the result” (Schaffer 2006, 96). Nonetheless, perhaps one could imagine some form of shifting-standards account of knowledge that would mesh comfortably with the apparently elevated epistemic obligations of professional inquirers. Perhaps even some version of a deontological view of knowledge, to the effect that one knows p when and only when one has a true belief that p and in addition one has fulfilled all one’s epistemic obligations with regard to p. On such an account, 14 One could, however, argue that the ‘should’ here has merely practical and not specifically epistemic force. Nonetheless, it seems to me uncontroversial that one is obligated, for one reason or another, to perform certain extra epistemic actions under circumstances of increased practical stakes. In any case, if the ‘should’ in high-stakes cases is merely practical, then presumably it is merely practical in low-stakes cases as well (else, we face the thorny question of exactly how high the stakes must be before our obligations become practical rather than epistemic); this, then, would suggest that all epistemic obligations are merely practical.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Who Wants to Know? j 133 the standards for knowing would be elevated when one has special epistemic obligations. However, even leaving aside the issues surrounding doxastic voluntarism mentioned earlier, this way of accounting for the phenomena has a particularly undesirable consequence. It implies that there are a number of propositions known by laypeople but not by specialists, despite the specialists having done a much more thorough job of investigating such propositions. A layman with anecdotal evidence that such-and-so herb is effective against colds might possess knowledge, while a scientist mid-way through a meta-analysis of the herb’s effectiveness might lack said knowledge, despite possessing a belief based on much greater evidence. Indeed, it seems that every shifting-standard account of knowledge will be forced to accept this sort of conclusion—even versions of contextualism and SSI that might avoid the other objections raised above. But suppose we were to bite that particular bullet. Ultimately, I am arguing for a ‘higher-than-knowledge’ aim because I believe that there are epistemic categories beyond knowledge (and the other ‘usual suspects’ like true belief) which are worthy of philosophical attention. This, I think, is correct regardless of debates over aim. Even if one manages to defend the claim that professional inquirers aim at (mere) knowledge, it will remain true that knowledge is not the most theoretically interesting epistemic category when it comes to epistemological questions surrounding these professions. Thus, it will remain true that epistemologists should concern themselves with epistemic categories beyond those that are currently standard within traditional epistemology. Here’s why. Note first that even if a shifting-standards account of knowledge is maintained, we can still speak of ‘the epistemic state one has when one meets the standards for knowing active in scientific contexts’ (e.g.). Call this ‘knowledgeS’. ‘KnowledgeS’ is a type of knowledge, just as red is a type of color. And just as there might be circumstances where one is interested in the nature of red rather than in the nature of color generally, there might easily be circumstances where it is ‘knowledgeS’, rather than knowledge generally, that is of interest. Similarly for the subtypes of knowledge that correspond to other areas of professional inquiry. As an example dear to my heart: suppose one is concerned about the use of intuitions in philosophy. One might ask whether intuitions are reliable enough to produce knowledge. But it’s entirely possible that intuitions are sufficiently reliable to generate knowledge in everyday contexts, while being insufficiently reliable to meet the elevated standards active in philosophical contexts. What one should really be interested in is whether intuitions are sufficiently reliable to generate the sub-category of knowledge that philosophical contexts demand—‘knowledgeP’, say. Defending the use of intuitions in philosophy by defending their status as a source of knowledge is rather like arguing against the necessity of telescopes in astronomy by defending naked-eye perception’s status as a source of knowledge. Sure, unaided perception is a sufficiently highquality epistemic source to generate knowledge—but in certain contexts of inquiry, it is just not good enough.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

134 | Jennifer Nado

4. Conclusion I’ll end with a few final clarifications and elaborations on the view I’ve outlined. Broadly, the view I’ve defended is that professional inquirers face more stringent epistemic obligations than layfolk; I’ve argued, further, that this strongly suggests that they aim at a higher epistemic status than mere knowing. I don’t, however, have much of a position on whether or not this higher aim is merely in service of an ultimate goal of achieving true belief. It’s possible that the higher epistemic standard is only instrumental. I’m satisfied if I convince you that the proximal goal of scientific activity exceeds the requirements for knowing. In fact, as just mentioned, I’m satisfied if I haven’t even done that—so long I’ve convinced you that, for some epistemological purposes, epistemic states that exceed knowing may be the appropriate target of philosophical investigation, and that contemporary epistemology’s focus on knowing is therefore problematic. Some readers might wonder whether the goal of professional inquiry might be something like understanding. I have no problem with the idea that understanding is a valuable epistemic state; however, I don’t think that the arguments I’ve given in this chapter support the idea that understanding is the aim of these vocations. The methodological obligations I’ve discussed aim at reducing the chance of error; but presumably one could have even total immunity to error while lacking understanding. What’s more, on the current picture talk of ‘the’ goal of professional inquiry is rather misleading. As we’ve seen, there are different epistemic expectations corresponding to different fields and even to different questions or projects within a field. This implies that the epistemic states that satisfy e.g. physicists are different from those that satisfy e.g. journalists, and that the epistemic goal of a physicist may vary depending upon the particular project upon which she is embarked or the particular proposition she is considering. The picture all this suggests is much more pluralistic—vastly more pluralistic than is usual in ‘traditional’ epistemology. There are exceptions—Stephen Hetherington (2001), for instance, has advocated viewing knowledge as a spectrum, thereby recognizing that one might know one proposition ‘better’ than one knows another—in other words, recognizing a variety of possible epistemic standings. But largely, traditional epistemologists have had a puzzling tendency to neglect the vast majority of possible epistemic states in favor of a few supposedly crucial distinctions. Contrast traditional epistemology’s ‘on-or-off ’ approach to belief and knowledge with e.g. a Bayesian framework’s spectrum of credences; the latter type of approach fits much more comfortably with the complex, diverse array of norms and obligations that appear to hold in fields of professional inquiry. In fact formal epistemology, quite generally, seems much more amenable to the sort of pluralistic picture I have in mind; my complaint in this chapter is restricted to the over-emphasis on knowing present in traditional epistemological projects.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

Who Wants to Know? j 135 A few final comments in a somewhat more speculative vein. If the picture I’ve defended is roughly correct, one consequence is that the correct characterization of knowledge simply cannot be got at a priori. ‘Knowledge’, I’ve urged, must pick out an epistemic threshold that we mere mortals can fairly regularly attain in our everyday cognitive lives. Else, the epistemic norms that are associated with knowledge would be uselessly overdemanding. But this threshold is obviously dependent on contingent, empirical facts about our psychological capacities as a species, about the environment we find ourselves in, and so forth. We cannot know what knowledge requires without knowing a great deal about, for instance, the various shortcomings of our perceptual capacities. What’s more, depending on one’s semantic views, the fact that knowledge is of any interest at all to us may turn out to be utterly contingent. It seems plausible that if our cognitive abilities had been different, we would have been interested in a different epistemic standing. For instance, if we had possessed god-like intelligence and phenomenally long lifespans, perhaps we might have ‘hit upon’ a state closer to certainty as our core epistemic concept, rather than the more modest sort of state that ‘knowledge’ picks out in the actual world. By contrast, if a single act of seeing took hours rather than a mere instant, we’d be forced to form beliefs on less evidence; in such a scenario our core epistemic concept would presumably pick out a state less demanding than knowledge. Thus, if ‘knowledge’ rigidly designates, then it seems knowledge is only contingently of interest.15 This is not to say that knowledge is of no interest. What one epistemically ought to do is, I claim, constrained by contingent facts about one’s abilities and environment. What most of us epistemically ought to do, in most contexts, may well be to aim to know—but in certain contexts, such as that of professional inquiry, some of us want, and plausibly ought to strive for, more.16

References Alston, W. (1988). The deontological conception of epistemic justification. Philosophical Perspectives, 2, 257–99. Beecher, H. K. (1955). The powerful placebo. Journal of the American Medical Association, 159(17), 1602–6. 15 A reviewer notes that this claim at least prima facie sits poorly with views like Williamson’s, according to which knowledge is of special interest owing to its status as the most general factive mental state; it is the most ‘general’ in the sense that the possession of any factive mental state entails the possession of knowledge. A full response is beyond the scope of this chapter, but a rough suggestion for a reply is this: perhaps knowledge is only the most general factive mental state for creatures like us. Creatures with vastly less reliable perceptual abilities might not be able to ‘see that p’—the closest factive mental state that they might achieve might be one for which we have no concept, and which might not entail knowledge, but instead a factive epistemic state weaker than knowing (in the sense of involving a lower level of justification). 16 The work described in this chapter was fully supported by a Grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. LU 359613).

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

136 | Jennifer Nado Bhatt, A. (2010). Evolution of clinical research: a history before and beyond James Lind. Perspectives in clinical research, 1(1), 6. Bird, A. (2007). What is scientific progress? Noûs, 41(1), 64–89. Bird, A. (2008). Scientific progress as accumulation of knowledge: a reply to Rowbottom. Studies in History and Philosophy of Science Part A, 39(2), 279–81. Craig, E. (1991). Knowledge and the State of Nature: An Essay in Conceptual Synthesis. Oxford: Clarendon Press. Feldman, R. (2000). The ethics of belief. Philosophy and Phenomenological Research, 60(3), 667–95. Goldman, A. and Olsson, E. (2009). Reliabilism and the Value of Knowledge. In A. Haddock, A. Millar, and D. Pritchard (eds), Epistemic Value (pp. 19–41). Oxford: Oxford University Press. Greco, J. (2003). Knowledge as Credit for True Belief. In M. DePaul and L. Zagzebski (eds), Intellectual Virtue: Perspectives from Ethics and Epistemology (pp. 111–34). Oxford: Oxford University Press. Hawthorne, J. (2004). Knowledge and Lotteries. Oxford: Oxford University Press. Hawthorne, J. and Stanley, J. (2008). Knowledge and Action. Journal of Philosophy, 105, 571–90. Hedberg, T. (2014). Epistemic supererogation and its implications. Synthese, 191(15), 3621–37. Hetherington, S. (2001). Good Knowledge, Bad Knowledge: On Two Dogmas of Epistemology. Oxford: Clarendon Press. Kelp, C. (2014). Two for the knowledge goal of inquiry. American Philosophical Quarterly, 51(3), 227–32. Kim, K. (1994). The deontological conception of epistemic justification and doxastic voluntarism. Analysis, 54(4), 282–4. Kuhn, T. (1996). The Structure of Scientific Revolutions (3 ed.). Chicago: The University of Chicago Press. Kvanvig, J. L. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press. Mizrahi, M. (2013). What is scientific progress? Lessons from scientific practice. Journal for General Philosophy of Science, 44(2), 375–90. Peacocke, C. (1999). Being Known. Oxford: Clarendon Press. Pritchard, D. (2007). Recent work on epistemic value. American Philosophical Quarterly, 44(2), 85–110. Ross, O. B. (1951). Use of controls in medical research. Journal of the American Medical Association, 145(2), 72–5. Schaffer, J. (2006). The irrelevance of the subject: Against subject-sensitive invariantism. Philosophical Studies, 127(1), 87–107. Stanley, J. (2005). Knowledge and Practical Interests. Oxford: Oxford University Press. Tidman, P. (1996). Critical reflection: An alleged epistemic duty. Analysis, 56(4), 268–76. Velleman, J. D. (2000). On the aim of belief. In D. Velleman (ed.), The Possibility of Practical Reason (pp. 244–81). New York, NY: Oxford University Press. Wedgewood, R. (2002). The Aim of Belief. Philosophical Perspectives, 16, 267–97. Williamson, T. (2000). Knowledge and its Limits. Oxford: Oxford University Press. Zagzebski, L. (2003). The Search for the Source of Epistemic Good. Metaphilosophy, 34, 12–28.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

6. On the Accuracy of Group Credences Richard Pettigrew We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising increased over the last 200 years? How likely does the Monetary Policy Committee of the Bank of England think it is that there will be a recession if the country leaves the European Union? How confident is the scholarly community that William Shakespeare wrote Hamlet? Suppose you ask me one of these questions, and I respond by listing, for each member of the group in question, the opinion that they hold on that topic. I list each scientist in the scientific community, for instance, and I give their credence that sea-level rise has accelerated in the past two centuries. By doing this, I may well give you enough information for you to be able to calculate the answer to the question that you asked; but what I give you does not amount to that answer. What you were asking for was not a set of credences, one for each member of the group; you were asking for a single credence assigned collectively by the group as a whole. What is this group credence? And how does it relate to the individual credences assigned by the members of the group in question? In this chapter, I’d like to explore a novel argument for a familiar partial answer to the latter question. In particular, given a group of individuals, I’d like to say that any account of how we aggregate the credences of those individuals to give the credences of the group must have a particular property—the group credences should be a weighted average of the individual credences. Now, a weighted average is sometimes called a linear pool, and this constraint on the aggregation of credences is usually called linear pooling. I will not have much to say about how we should set the weightings when we first take our linear pool of the individual credences. But I will have something to say about how those weightings should evolve as new evidence arrives. I will also have something to say about the two standard objections to linear pooling as a constraint on the aggregation of credences.

1. Group Opinions and Group Knowledge Before I present my argument, I’d like to say a little more about the first question from above. This was the more ontological of the two. It asked: What is a group credence? Indeed, it turns out that there are at least two different notions that might go by that name. I will be concerned with the notion of a group credence function at a time as a summary of the credal

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

138 | Richard Pettigrew opinions of the individuals in the group at that time. But we might also think of the notion of a group credence function at a time as a summary of the potential knowledge that is distributed throughout the credal opinions of the individuals in the group at that time. Let’s see these different notions in action. Suppose two historians, Jonathan and Josie, are researching the same question, but in two different archives. Both know that there may be a pair of documents, one in each archive, whose joint existence would establish a controversial theory beyond doubt. Jonathan finds the relevant document in his archive, but doesn’t know whether Josie has found hers; and Josie finds the relevant document in her archive, but doesn’t know whether Jonathan has found his. Indeed, each assigns a very low credence to the other’s finding their document; as a result, both have a very low credence in the controversial theory. According to the first notion of a group credence function as a summary of the credal opinions of the group’s members, the group credence in the controversial theory should remain low. After all, both members of the group assign it a low credence. However, according to the second notion of a group credence function as a summary of the knowledge that is distributed throughout the credal opinions of the individuals in the group, the group credence in the controversial theory should be high. After all, between them, they know that both documents exist, and they both agree that this would give extremely strong evidence for the controversial hypothesis. These two notions of a group credence function are not independent. It seems natural to think that we arrive at the second sort of group credence function—the sort that summarizes the knowledge distributed throughout the group—as follows: we take the credences of each of the members of the group; we let them become common knowledge within the group; we allow the members of the group to update their individual credences on this common knowledge; and then, having done this, we take the first notion of a group credence function—that sort that summarizes the credal opinions of the individuals in the group—with these updated individual credence functions as its input. That is, group credences as summaries of distributed potential knowledge are simply summaries of the opinions of the individuals in the group once those individuals are brought up to speed with the opinions of the other individuals.1 Thus, once Jonathan learns that Josie has very high credence that the relevant document from her archive exists, and once Josie learns that

1 Of course, much thought has been given to how you should respond when you learn of someone else’s credences; this is the focus of the peer disagreement literature. For accuracybased analyses of this problem, see (Moss, 2011; Staffel, 2015; Levinstein, 2015; Heesen and van der Kolk, 2016). Also, if the members of the group share the same prior probabilities, Robert Aumann’s famous Agreement Theorem shows that there is just one rational way for members of the group to respond—they must all end up with the same credence function once they have updated themselves on the common knowledge of one another’s posterior probabilities (Aumann 1976). Thus, in such a case, there will be no disagreement to resolve. But of course there are many cases in which things are not so simple, because the members of the group do not share the same priors.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

On the Accuracy of Group Credences | 139 Jonathan has very high credence that the relevant document from his archive exists, they will both update to a high credence that both documents exist, and from there to a high credence that the controversial theory is true. Taking a summary of those updated credences gives us a high group credence that the controversial theory is true—and of course that is the correct summary of the potential knowledge distributed between Jonathan and Josie. Throughout, I’ll be interested in the first sort of group credence function. That is, I’ll be concerned with defending a norm for group credences understood as summaries of the opinions of the individuals in the group.

2. Linear Pooling Having pinned down the notion of group credence function that will concern us here, let me introduce an example to illustrate the account of group credence functions that I will defend here, namely, linear pooling. Adila, Benicio, and Cleo are climate scientists. They all have opinions about many different propositions; but all three have opinions about the following proposition and its negation: The sea level rise between 2016 and 2030 will exceed 20cm. We’ll call this proposition H;  and we’ll write H for its negation. The following table gives the credences that Adila ðcrA Þ, Benicio ðcrB Þ, and Cleo ðcrC Þ assign to the two propositions: crA crB crC H 0.2 0.4 0.5 H¯ 0.8 0.6 0.5 But what credences does the group, Adila-Benicio-Cleo, assign to these two propositions? There are two standard proposals in the literature: linear pooling and geometrical pooling. On both, each member of the group is assigned a real number as a weighting—thus, we might let αA be Adila’s weighting, αB for Benicio, and αC for Cleo. We assume that these weightings are non-negative real numbers; and we assume that they sum to 1. According to linear pooling, the group credence in a particular proposition is given by the relevant weighted arithmetic average of the individual credences in that proposition. For instance, CrLP ðHÞ ¼ αA crA ðHÞ þ αB crB ðHÞ þ αC crC ðHÞ ¼ 0:2αA þ 0:4αB þ 0:5αC So, for example, if each weighting is the same, so that αA ¼ αB ¼ αC ¼ 13, then       1 1 1 þ 0:4  þ 0:5   0:37 CrLP ðHÞ ¼ 0:2  3 3 3 and

 Þ ¼ CrLP ðH

More generally:

     1 1 1 þ 0:6  þ 0:5   0:63 0:8  3 3 3

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

140 | Richard Pettigrew Linear Pooling Suppose: • • • • •

G is a group of individuals; the credence functions of its members are cr1 ; . . . ; crn ; cri is defined on the algebra of propositions ℱ i ; the credence function of the group is CrG ; CrG is defined on the algebra of propositions ℱ ¼ ∩ni¼1 ℱ i . P Then there should be real numbers α1 ; . . . ; αn  0 with ni¼1 αi ¼ 1 such that CrG ðÞ ¼

n X

αi cri ðÞ

i¼1

If this holds, we say that CrG is a weighted (arithmetic) average or mixture or linear pool of cr1 ; . . . ; crn . Notice that, if the individuals’ credence functions are probability functions, then so is any linear pool of them. We will assume throughout that the individuals’ credence functions are indeed probability functions. And we will assume that each of the sets of propositions ℱ i on which cri is defined is finite— thus, ℱ ¼ ∩ni¼1 ℱ i is also finite. What’s so bad about a group credence function that is not a mixture of the individual credence functions held by the members of the group? That is, what is so bad about violating linear pooling? And what’s so good about CrLP ? Roughly, our answer will be based on the following pair of claims: 1. If CrG is not a mixture of cr1 ; . . . ; crn , then there is an alternative group credence function Cr*G such that, by the lights of each member of the group, the expected epistemic value of Cr*G is strictly greater than the expected epistemic value of CrG 2. If CrG is a mixture of cr1 ; . . . ; crn , then there is not even an alternative group credence function Cr*G whose expected epistemic value is at least that of CrG by the lights of each member of the group. Consider, for instance, the candidate group credence function CrABC that  assigns 0.1 to H and 0:9 to H. It is not a linear pool of crA , crB , and crC . What is wrong with it? Well, according to this argument, the following is true: there is some alternative group credence function Cr* that Adila, Benicio, and Cleo all expect to do better than Cr. And Cr* is itself a linear pool of crA , crB , and crC . And there is no alternative group credence function Crʹ that Adila, Benicio, and Cleo all expect to do better than Cr* . In the next few sections, we make this more precise. We start, in section 3, by making precise how we measure epistemic value. Then, in section 4, we make precise why the claim just stated would, if true, establish linear pooling as a rational requirement; and we observe that a precise version of that claim is true. Finally, in sections 5 and 6, we address two common objections to linear pooling—it does not preserve probabilistic independences; and it does not commute with Bayesian conditionalization.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

On the Accuracy of Group Credences | 141

3. Epistemic Value and Accuracy The account of epistemic value that I favour is monistic. That is, I think there is just one fundamental source of value for credences. Following James M. Joyce (1998) and Alvin Goldman (2002), I take the epistemic value of a credence to be its accuracy. A credence in a true proposition is more accurate the higher it is; a credence in a false proposition is more accurate the lower it is. Put another way: the ultimate goal of having credences is to have maximal credence (i.e. credence 1) in truths and minimal credence (i.e. credence 0) in falsehoods; the epistemic value of a credence is given by its proximity to that ultimate goal. This is a credal version of veritism. How should we measure the accuracy of a credence? In fact, in keeping with other writings in this area, we will talk about measuring the inaccuracy of a credence rather than its accuracy; but I take the accuracy of a credence to be simply the negative of its inaccuracy, so there is nothing substantial about this choice. Now, the inaccuracy of a credence in a proposition should depend only on the credence itself, the proposition to which it is assigned, and the truth value of the proposition at the world where its inaccuracy is being assessed. Thus, we can measure the inaccuracy of a credence using what has come to be called a scoring rule. A scoring rule s is a function that takes a proposition X, the truth value i of X (represented numerically, so that 0 represents falsity and 1 represents truth), and the credence x in X that we will assess; and it returns a number sX ði; xÞ. So sX ð1; xÞ is the inaccuracy of credence x in proposition X when X is true; and sX ð0; xÞ is the inaccuracy of credence x in proposition X when X is false. What features should we require of a scoring rule s if it is to be a legitimate measure of inaccuracy? We will consider two. First, we will require that the inaccuracy of a credence varies continuously with the credence. That is: Continuity sX ð1; xÞ and sX ð0; xÞ are continuous functions of x: I take this to be a natural assumption. Secondly, we will require that the scoring rule s should be strictly proper. Take a particular credence p in a proposition X—so p is a real number at least 0 and at most 1. We can use p to calculate the expected inaccuracy of any credence x, including p itself. The expected inaccuracy of x from the point of view of p is simply this: psX ð1; xÞ þ ð1  pÞsX ð0; xÞ That is, it takes the inaccuracy of x when X is true—namely, sX ð1; xÞ—and weights it by p, since p is a credence in X; and it takes the inaccuracy of x when X is false—namely, sX ð0; xÞ—and weights it by ð1  pÞ, which is of course the corresponding probabilistic credence in the negation of X. A scoring rule is strictly proper if, for any such credence p in a proposition X, it expects itself to

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

142 | Richard Pettigrew have lowest inaccuracy out of all the possible credences in X. That gives us the following principle: Strict Propriety For any proposition X and any 0 ≤ p ≤ 1, psX ð1; xÞ þ ð1  pÞsX ð0; xÞ is minimized uniquely, as a function of x, at p ¼ x. This demand has been justified in various ways: see (Gibbard, 2008), (Joyce, 2009), or (Pettigrew, 2016, Chapters 3 and 4). Let me briefly sketch Joyce’s argument for Strict Propriety. Suppose you are a veritist about credences: that is, you hold that the sole fundamental source of epistemic value for credences is their accuracy; and you hold that facts about the rationality of a credal state are determined by facts about its epistemic value. Then reason as follows: let X be a proposition and p a credence. Then, intuitively, there is evidence you might receive to which the unique rational response is to set your credence in X to p. For instance, you might learn with certainty that the objective chance of X is p. Or your most revered epistemic guru might whisper in your ear that she has credence p in X. Suppose you do receive this evidence; and suppose that you do set your credence in X to p, in accordance with your evidence. Now, suppose that, contrary to Strict Propriety, there is some other credence x 6¼ p such that p expects x to be at most as inaccurate as p expects p to be: that is, psð1; xÞ þ ð1  pÞsð0; xÞ ≤ psð1; pÞ þ ð1  pÞsð0; pÞ. Then, from the point of view of veritism, there would be nothing epistemically reprehensible were you to switch from your current credence p in X to a credence of x in X without obtaining any new evidence in the meantime. After all, we typically think that it is rationally permissible to adopt an option that you currently expect to be at most as bad as your current situation. And, by hypothesis, p expects credence x in X to be at most as inaccurate—and thus, by veritism, at most as bad, epistemically speaking—as it expects credence p in X to be. But of course making such a switch would be epistemically reprehensible. It is not rationally permissible for you to shift your credence in X from p to x without obtaining any new evidence in the meantime; for, by hypothesis, having credence p in X is the unique rational response to your evidence.2 Therefore, there can be no such credence x in X. Thus, in general, for any credence p in any proposition X, there can be no alternative credence x 6¼ p in X that p expects to be at most as inaccurate as it expects itself to be. And that is just what it means to say that the measure of inaccuracy must be strictly proper.

2 Note that it is here that we appeal to the assumption that a credence of p in X is the unique rational response to your evidence. If your evidence merely made that credence one of a number of rational responses, we could not establish Strict Propriety. That would leave open the possibility that another credence q 6¼ p in X is also a rational response to your evidence. And that would mean that we could not rule out the possibility that psð1; qÞ þ ð1  pÞsð0; qÞ ≤ psð1; pÞ þ ð1  pÞsð0; pÞ.

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

On the Accuracy of Group Credences | 143 What do these strictly proper and continuous scoring rules look like? Here is an example. It is called the quadratic scoring rule: qX ð1; xÞ: ¼ ð1  xÞ2 qX ð0; xÞ: ¼ x2 In other words: qX ði; xÞ ¼ji  xj2 , where i ¼ 0 or 1. So the quadratic scoring rule takes the difference between the credence you have and the credence you would ideally have—credence 1 if the proposition is true and credence 0 if it’s false—and squares that difference to give your inaccuracy. A little calculus shows that q is strictly proper and continuous. Here is an example of a scoring rule that is continuous, but not strictly proper. It is called the absolute value measure: aX ð1; xÞ: ¼ 1  x aX ð0; xÞ: ¼ x In other words: aX ði; xÞ ¼ j i  x j. So the absolute value measure takes the difference between the credence you have and the credence you would ideally have and takes that to give your inaccuracy. If p ¼ 12, every credence has the same expected inaccuracy from the vantage point of p; if p< 12, credence 0 minimizes expected inaccuracy from the vantage point of P; and if p> 12, credence 1 does so. So, while a is continuous, it is not strictly proper.3 A scoring rule measures the inaccuracy of a single credence. But the groups of individuals whose individual and group credences we will be assessing will typically have more than one credence; they will have credences in a range of different propositions. Adila, Benicio, and Cleo all have credences in H and in  H and no doubt also in many more propositions besides these. How are we to assess the accuracy, and thus epistemic value, of a credal state composed of more than one credence? We will adopt the natural answer. We will take the inaccuracy of a credence function—which of course represents a credal state consisting of credences in a number of propositions—to be the sum of the inaccuracies of the individual credences it assigns. That is, given a scoring rule s, we can define an inaccuracy measure ℑ as follows: ℑ takes a credence function cr and a possible state of the world w, and it returns the inaccuracy of that whole credence function at w; it takes the inaccuracy of cr to be the sum of the inaccuracies of the individual credences that cr assigns. In symbols:  X  sX wðXÞ; crðXÞ ℑðcr; wÞ ¼ X∈ℱ

Here, ℱ is the set of propositions on which cr is defined; and wðXÞ gives the truth value of proposition X in state of the world w—so wðXÞ ¼ 0 if X is false at w, and wðXÞ ¼ 1 if X is true at w. Thus, for instance, if we take the quadratic scoring rule q to be our scoring rule, it would generate this inaccuracy measure:

3

There are also strictly proper scoring rules that are not continuous (Schervish, et al., 2009).

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

144 | Richard Pettigrew ℑq ðcr; wÞ ¼

X

  X qX wðXÞ; crðXÞ ¼ j crðXÞ  wðXÞj2

X∈ℱ

X∈ℱ

This, then, gives us our measure of epistemic disvalue. Pick a continuous and strictly proper scoring rule s; generate from it an inaccuracy measure ℑ in the way outlined above. ℑðcr; wÞ is then the inaccuracy—and thus epistemic disvalue—of the credence function cr when the world is w. Given this, we can define the expected epistemic disvalue of one credence function from the point of view of another. Suppose cr is one credence function, and crʹ is another. Then we can define the expected inaccuracy of crʹ from the vantage point of cr as follows: X crðwÞℑðcrʹ ; wÞ Expℑ ðcrʹ j crÞ ¼ w∈W ʹ

When this value is low, cr judges cr to be doing well; when this value is high, cr judges crʹ to be doing badly. Since s is strictly proper, and since ℑ is generated from s in the way laid out above, it follows that there is an analogous sense in which ℑ is also strictly proper: every credence function cr expects itself to be doing best. That is, for any two credence functions cr 6¼ crʹ , Expℑ ðcr j crÞ 0 3. crðCch1 Þ ¼ 12 ¼ crðCch2 Þ

First, note that: crðH j EÞ ¼

crðHEÞ 12 ch1 ðHEÞ þ 12 ch2 ðHEÞ chi ðHEÞ ¼ 1 ¼ ¼ chi ðH j EÞ 1 crðEÞ chi ðEÞ 2 ch1 ðEÞ þ 2 ch2 ðEÞ

Next, if we let b ¼ ch1 ðH j EÞ  ch1 ðHÞ ¼ ch2 ðHÞ  ch2 ðH j EÞ, then crðHÞ

1 1 ch1 ðHÞ þ ch2 ðHÞ 2 2   1  1 ¼ ch1 ðH j EÞ  b þ ch2 ðH j EÞ þ b 2 2 ¼

¼

chi ðH j EÞ ¼ crðH j EÞ

OUP CORRECTED PROOF – FINAL, 27/12/2018, SPi

152 | Richard Pettigrew judgement that those two propositions are stochastically independent: I can know that H and E are stochastically independent without my credence function rendering them probabilistically independent; and I can know that H and E are stochastically dependent while my credence function renders them probabilistically independent. Perhaps, then, there is some other sort of independence that we judge to hold of H and E whenever our credence function renders those two propositions probabilistically independent? Perhaps, for instance, such a fact about our credence function encodes our judgement that H and E are evidentially independent or evidentially irrelevant? I think not. If you think that there are facts of the matter about evidential relevance, then these are presumably facts about which an individual may be uncertain. But then we are in the same position as we are with stochastic independence. We might have an individual who is uncertain which of two probability functions encodes the facts about evidential relevance. Each of them might make E epistemically relevant to H; but it might be that, because of that individual’s credences in the two possibilities, her credence function renders H and E probabilistically independent. If, on the other hand, you do not think there are facts of the matter about evidential relevance, it isn’t clear how facts about my credence function could encode judgements about evidential relevance; nor, if they could, why we should care to preserve those judgements, even when they are made unanimously. Remember: we learned in section 4 that there will always be some features shared by all members of a group that cannot be shared with the group credence function. Elkin and Wheeler (2016) try to dramatize the objection we are considering by presenting a Dutch Book argument against groups whose group credences fail to preserve independences shared by all members of the group. Their idea is this: Suppose that, relative to the credence function of each member of a group, propositions H and E are probabilistically independent. And suppose that, relative to their group credence function Cr, H and E are not probabilistically independent—that is, CrðHEÞ 6¼ CrðHÞCrðEÞ Then, according to Elkin and Wheeler, there are two ways in which we can calculate the price at which the group will be prepared to buy or sell a £1 bet on the proposition HE—that is, a bet that pays £1 if HE turns out to be true, and which pays £0 if HE is false. First, the group will be prepared to buy or sell a £1 bet on HE at £CrðHEÞ, since that is the group credence in HE. Second, Elkin and Wheeler claim that the group should also be prepared to buy or sell a £1 bet on HE at £CrðHÞCrðEÞ, since CrðHÞ is the group credence in H, CrðEÞ is the group credence in E, and the group judges H and E to be independent. But, by hypothesis, £CrðHEÞ 6¼ £CrðHÞCrðEÞ, and if an agent has two different prices at which they are prepared to buy or sell bets on a given proposition, it is possible to Dutch Book them. Suppose that CrðHÞCrðEÞ