Reasoning
New Essays on Theoretical and Practical Thinking
Edited by Magdalena Balcerak Jackson and Brendan Balcerak Jackson
Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

© the several contributors 2019

The moral rights of the authors have been asserted

First Edition published in 2019
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number: 2018966754

ISBN 978–0–19–879147–8

Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.
Contents

List of Contributors

1. Questions about the Nature and Value of Reasoning
   Magdalena Balcerak Jackson and Brendan Balcerak Jackson

Part I. The Nature of Reasoning

Reasoning as a Mental Process

2. Inference without Reckoning
   Susanna Siegel

3. A Linking Belief is Not Essential for Reasoning
   John Broome

4. Attitudes in Active Reasoning
   Julia Staffel

Reasoning and Agency

5. The Question of Practical Reason
   Nicholas Southwood

6. Is Reasoning a Form of Agency?
   Mark Richard

7. Inference, Agency, and Responsibility
   Paul Boghossian

Part II. The Value of Reasoning

Rules for Reasoning

8. Isolating Correct Reasoning
   Alex Worsnip

9. Small Steps and Great Leaps in Thought: The Epistemology of Basic Deductive Rules
   Joshua Schechter

10. With Power Comes Responsibility: Cognitive Capacities and Rational Requirements
    Magdalena Balcerak Jackson and Brendan Balcerak Jackson

Reasoning and Reasons

11. When Rational Reasoners Reason Differently
    Michael G. Titelbaum and Matthew Kopec

12. The Epistemic Innocence of Optimistically Biased Beliefs
    Lisa Bortolotti, Magdalena Antrobus, and Ema Sullivan-Bissett

13. Sovereign Agency
    Matthew Noah Smith

Index
List of Contributors

Magdalena Antrobus, independent scholar
Brendan Balcerak Jackson, University of Miami
Magdalena Balcerak Jackson, University of Miami
Paul Boghossian, New York University
Lisa Bortolotti, University of Birmingham
John Broome, Oxford University
Matthew Kopec, Australian National University
Mark Richard, Harvard University
Joshua Schechter, Brown University
Susanna Siegel, Harvard University
Matthew Noah Smith, Northeastern University
Nicholas Southwood, Australian National University
Julia Staffel, University of Colorado, Boulder
Ema Sullivan-Bissett, University of Birmingham
Michael G. Titelbaum, University of Wisconsin, Madison
Alex Worsnip, University of North Carolina at Chapel Hill
1
Questions about the Nature and Value of Reasoning
Magdalena Balcerak Jackson and Brendan Balcerak Jackson

We would like to thank Peter Momtchiloff at Oxford University Press for his invaluable support in making this volume happen; David DiDomenico for his excellent work preparing the index; and the authors for generously contributing their time and talent to the volume.
Few interesting and significant pieces of knowledge are easily acquired. Sure, we can know some things just by taking our perceptual experience at face value: that the walls are beige, that the dining table we are sitting at is dirty, or that the laptop keys click. But taking experience (whether perceptual or otherwise) at face value will be insufficient to answer many of the questions that are of real importance to us: Will Mueller's investigation into Trump's role in the Russian election interference lead to impeachment? How many degrees will the temperature around Miami rise in the next ten years? Is reading fiction a way of becoming a better person? What should we get our daughter for her sixth birthday? What can we make for the upcoming dinner party with both vegan and meat-loving guests? In order to answer questions like these we need to engage in cognitive labor, and especially cognitive labor in the form of reasoning.

Reasoning is important. Reasoning is ubiquitous. Reasoning is diverse. But reasoning as a distinctive cognitive capacity has not yet received the systematic attention it deserves in philosophy. Perhaps one reason is that reasoning is like baking: something that we all do (at least from time to time), and something that is so familiar that we seldom stop to notice how little we understand about how it really works. But expert bakers, maîtres patissiers, actually know quite a bit about the science of baking: about the nature and structure of different baking processes, and about the conditions under which they successfully yield delicious results. They know, for example, that pretty much all cakes are based on a balanced combination of four main ingredients: flour, eggs, fat, and sugar. They also know about the processes that turn these ingredients into delicious cakes, and they know why these processes have the results that they do. For example, they know to vigorously whip the fat and the sugar, because this creates air bubbles encased in a layer of fat, and they know that carefully adding beaten egg adds a layer of protein that ensures the air bubbles do not collapse when the batter is heated.
A lot can go wrong when you bake, even as a maître patissier. But you know that when you competently select the ingredients and you competently apply the relevant baking techniques, you are very likely to get a good result; and you understand why this is so.

Philosophers certainly do not understand reasoning as well as patissiers understand baking. Philosophers agree on a couple of platitudes: reasoning consists in transitions between mental states, from "premise attitudes" to a "conclusion attitude"; and under the right conditions, reasoning done well is a way of coming to know what to believe or what to do. But hardly anything else is clearly agreed upon. What are the main ingredients that go into reasoning? Are they only beliefs, or beliefs and intentions? What about credences, degrees of belief? Can reasoning involve other kinds of mental states with propositional content, such as perceptual experiences, imaginings, and emotional states? And how are we to distinguish the processes that are genuine processes of reasoning from other transitions in thought that are not reasoning, such as spontaneous reactions, processes of association, or what John Broome calls "mental jogging"? Is it essential to reasoning, in contrast with these other processes, that it is something we actively do rather than something we passively endure?

Philosophers also do not agree on the techniques we ought to employ in reasoning. Are we supposed to deliberately construct chains of reasoning that directly correspond to patterns of logical entailment? Or are we supposed to let our cognitive systems simply go to work according to their own rules and laws, with our job being merely to oversee and approve or reject the results? And even where philosophers agree about the process, they do not agree about why—or in virtue of what—it yields the results it aims for, namely rationally appropriate beliefs and decisions.

Disagreement among philosophers is of course hardly surprising. More surprising, perhaps, is that—unlike in some other areas of philosophy—we have yet to find a unified way to ask the relevant questions. There are historical reasons that help explain why reasoning has not been explored as extensively as, say, perception. For a very long time, the philosophical investigation of reasoning tended to focus on reasoning of a very specific kind: explicit basic deductive reasoning that is governed by formal rules, the kind of reasoning exercised by a logician or mathematician explicitly spelling out a proof. There is a certain prejudice amongst philosophers to view this kind of reasoning as the ideal towards which all forms of reflection or deliberation aspire. And even reasoning in this pure and rarified form has proven a challenge for philosophers to understand.

One of the most famous problems philosophers have grappled with when trying to understand the epistemology of deductive reasoning arises from a seemingly very simple question: how can we be justified in using the most basic and obvious rules of inference, such as modus ponens? As Lewis Carroll (1895) taught us, we certainly cannot say that our justification for reasoning from the belief that p and the belief that p implies q to the belief that q requires an explicit background belief that our premise attitudes support our conclusion attitude; if we do, a regress ensues.
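Carroll's point can be put schematically (the notation here is only a gloss on his dialogue, not Carroll's own): if the step

$$p,\quad p \to q \;\therefore\; q$$

is held to require the further premise $(p \land (p \to q)) \to q$, then the expanded step

$$p,\quad p \to q,\quad (p \land (p \to q)) \to q \;\therefore\; q$$

requires a further premise of the same form, and so on without end; the inference is never completed.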
There are solutions to the Carrollian regress worry, although each of them comes with costs and commitments that some are unwilling to accept. But whatever one's thoughts about this, the important observation for present purposes is that they leave the majority of our everyday reasoning, and even of reasoning by experts, entirely untouched. This is because most of it is not explicit, conscious reasoning in accordance with basic deductive rules, but is rather some more or less complicated form of reasoning that is not fully explicit, and whose relationship to deductive patterns of inference is not at all clear. We know which students will come late to class today, because we have a visual track record of them entering class late stored in our memory, and this forms a basis from which we infer what will happen tomorrow. We recognize that Paul Ryan's comment that people in inner cities have a poor work ethic is a racial dog-whistle because this is the best interpretation of his comment given the relevant background information we have. We decide to go on a family trip to Vietnam because, after considering the alternatives, we conclude that this will be the best experience for everyone in the family. Comprehensive answers to questions about what reasoning is, and about what makes it good or successful, should be able to capture these messier but much more common sorts of cases. And this means that the enterprise is more difficult—and much richer—than historical philosophical treatments of basic deductive reasoning would suggest.

To help give a sense of the sorts of complexities that a comprehensive account of reasoning needs to deal with, here are three examples. The first is what Susanna Siegel calls "reasoning in the basement." We know that many aspects of our conscious experience and thinking are influenced by information processing that goes on below conscious awareness and outside of our direct control. How should such sub-personal processes be incorporated into our account of reasoning? And how can their role be reconciled with the intuitive idea that reasoning is something that we do, and as such, something that we can be held responsible for?

The second has to do with how we are to evaluate reasoning that does not fit the historical ideal of explicitly following deductive rules. How can we make sense of reasoning like this in light of the traditional—and intuitively powerful—idea that reasoning is a rule-governed activity? How can we characterize the difference between good reasoning and bad, or between correct and incorrect, when it departs from the deductive paradigm?

The third concerns rules or requirements for reasoning more generally. What are the rules for correct reasoning? Do they go beyond minimal coherence constraints that one must arguably meet to qualify as a rational thinker, such as the constraint against believing contradictions? Do the same rules apply to all of us? Are there rules that govern how to respond to perceptual experience? And whatever the correct rules may turn out to be, what justifies them? In what sense, if any, is it true that we ought to reason according to them?

We mention these examples not only to indicate how little is clearly understood about reasoning, but also to give some sense of why more and more philosophers are becoming fascinated by it.
This collection was born at a conference that we organized in 2014 at the University of Konstanz in Germany.¹ We too were becoming more and more fascinated by reasoning, and so we decided to assemble a group of philosophers who were doing exciting new work on the topic. We were guided selfishly by our own interests: we wanted to talk about everyday reasoning, whether formal or informal, deductive or non-deductive, conscious or non-conscious; and we were interested in a mixture of descriptive and normative questions that cut across the professional divide between theoretical and practical reasoning, which in our view is artificial and often unhelpful. To our enormous gratitude, the group that came together in Konstanz was incredible: sharp and constructive in discussions, and collegial and collaborative throughout the conference. The event as a whole made us excited about the prospects for real philosophical progress on reasoning. The present volume is intended to contribute towards realizing those prospects. The volume includes contributions from several of the original conference participants, as well as several additional contributions that further expand on and complement the themes explored there, and that help connect strands of current philosophical thinking about reasoning that we believe ought to be connected. Our hope is that some of the energy of that original conference in Konstanz is captured in this volume.

The volume is divided into two parts: The Nature of Reasoning (Part I), and The Value of Reasoning (Part II). Part I, in turn, is divided into two subsections, each with three essays. The first subsection, Reasoning as a Mental Process, focuses on questions about the structure and components of reasoning. The subsection begins with Susanna Siegel's contribution, "Inference without Reckoning." Siegel identifies what she calls the canonical reckoning model of inference (which for her is a paradigmatic type of reasoning), according to which the thinker registers some piece of information, reckons that the information supports some conclusion, and reaches the conclusion because she reckons that it is supported by her information. Siegel argues that inference does not always work according to the canonical reckoning model, because there are cases of inference where the thinker is not aware of the factors that lead her to reach her conclusion. (The cases thus also violate the self-awareness condition on inference, advocated by Boghossian (2014) and others, that motivates the canonical reckoning model.) In place of the canonical reckoning model, Siegel offers a view according to which inferring is a distinctive way of responding to information by forming a conclusion attitude, a way that is not reductively analyzable, but that can be evaluated as rational or irrational even in the absence of a state of reckoning that one's conclusion is supported by the information to which one is responding.

¹ The conference was generously funded by the Emmy Noether Programme of the Deutsche Forschungsgemeinschaft (German Research Council).
The question of whether reasoning involves something like a reckoning state is also taken up in John Broome's contribution, "A Linking Belief is Not Essential for Reasoning." Earlier Broome (2013, chapter 13) defended an affirmative answer to this question, arguing that reasoning necessarily involves a linking belief that represents the thinker's premise attitudes as an appropriate basis for his conclusion attitude. Broome now rejects this view, however. He argues that linking beliefs are not necessary for reasoning in general, because a linking belief is not always required for reasoning that concludes in an attitude other than belief, such as an intention to act. Moreover, Broome argues, even if reasoning that concludes in a belief (rather than an intention or some other attitude) does necessarily entail the presence of a linking belief, the linking belief is not essential—it is not part of what makes the process in question qualify as a case of reasoning. Rather, in such cases a linking belief is necessarily present because it is, in effect, constituted by the very dispositions that the thinker manifests in going through the reasoning process itself. The linking belief is posterior to the reasoning in the order of explanation.

In "Attitudes in Active Reasoning," Julia Staffel takes up the question of what kinds of attitudes reasoning operates on. She focuses on theoretical reasoning in particular, and on whether it always involves only outright beliefs, or whether it sometimes or always involves degrees of belief instead. Staffel begins from a maximally permissive position that allows that reasoning operates on both outright beliefs and degrees of belief. She considers several strategies for arguing for more restrictive positions that rule out one or the other, but finds none convincing. In particular, she focuses on four features that are often ascribed to paradigmatic instances of person-level reasoning—that the thinker is consciously aware of it, that it involves language, that its operations are sensitive to the contents of attitudes, and that it utilizes working memory—and argues that all of these features are compatible with the maximally permissive position. One reason this result matters, Staffel argues, is that it can help to defuse the dispute between those who think degrees of belief should be understood as graded attitudes towards non-probabilistic propositional contents and those who think they should be understood as outright attitudes towards probabilistic contents.

The second subsection of Part I, Reasoning and Agency, includes three essays that explore issues having to do with the idea that reasoning is an expression of the thinker's agency. In "The Question of Practical Reason," Nicholas Southwood focuses on our capacity for practical reasoning, which he sees as a capacity to answer the question of what one is to do or of how one is to act. Southwood argues that an adequate conception of this capacity must capture two aspects of it that might at first appear to be in tension. First, it must capture the correct responsiveness aspect, the fact that part of the aim of practical reasoning is to discover the correct answer to the question of what to do. But second, it must also capture the authoritative aspect, the fact that our capacity includes the power to settle the question—to make it the case that the answer one arrives at is in fact the thing to do in the situation.
The dominant theoretical conception of practical reasoning sees it as aiming to settle theoretical questions about what the thinker will do, or should do, or has reason to do. But Southwood argues that this conception is unable to do justice to both aspects of practical reasoning. Rather, Southwood argues, we need to recognize that practical reasoning is concerned with questions that are irreducibly practical, in the sense that they cannot be settled by settling theoretical questions about what one will do, or should do, or has reason to do. Indeed, they cannot be settled by forming a belief at all, but only by forming an intention to act in a certain way. Southwood argues that this practical conception is able to capture both the correct responsiveness and the authoritative aspects of practical reasoning. Since the authoritative aspect is intimately bound up with the thinker's agency, Southwood's account of the nature of practical reasoning gives a central place to agency.

Mark Richard's contribution, "Is Reasoning a Form of Agency?" focuses on theoretical reasoning, and directly targets the idea that agency should have any place in our account of its nature. For Richard, reasoning is not necessarily—or even typically—something that the thinker does. Actual processes of reasoning, for the most part, happen outside the thinker's conscious awareness, and are carried out in whole or in part by sub-personal processes over which the thinker has little (if any) control. This might seem to conflict with our deeply held conviction that a thinker is responsible for her reasoning in a way that makes her an appropriate target of epistemic evaluation. But Richard argues otherwise: a factory manager can be held responsible for the production of widgets in her factory even though she does not personally operate the machines or control the workers, and in the same way a thinker can be responsible for beliefs of hers that are produced by processes that are outside her direct awareness or control. Richard also takes up the suggestion that reasoning is agential because it must satisfy what Boghossian (2014 and in this volume) calls a taking condition—that is, that something counts as reasoning only if it is a process whereby one takes one's conclusion to be supported by one's premises, and arrives at the conclusion (in part) because one takes this to be so.² Richard argues that it is not plausible that reasoning in general does meet the taking condition—at least, not in any sense of that condition that plausibly supports the conclusion that reasoning is a form of agency.

Paul Boghossian's contribution, "Inference, Agency, and Responsibility," also draws a connection between agency and the taking condition, but in the other direction. According to Boghossian, there are at least two related features of reasoning that reflect its agential nature. One feature is the one already noted above, that it is appropriate to hold thinkers responsible for their reasoning. The other feature is that reasoning is distinct from mere associative thinking; in cases of genuine reasoning, but not associative thinking, the thinker deliberately establishes one belief as the epistemic basis of another.

² One way for a thinker to satisfy the taking condition is by being in one of Siegel's reckoning states, and another is by having one of Broome's linking beliefs, as long as the reckoning state or linking belief plays the right role in bringing about the conclusion attitude.
Boghossian argues that both of these aspects of agency show that an adequate account of the nature of reasoning must include a taking condition. Boghossian's discussion includes responses to the challenges posed by Richard in his contribution—both his challenge to the inference from responsibility to agency, and his challenge to the taking condition itself.³ Boghossian also addresses doubts about the taking condition that are raised by what he calls "reasoning 1.5," conscious, person-level reasoning in which the thinker is not consciously aware of taking her premises to support her conclusion; the category of reasoning 1.5 plausibly includes at least some of the cases raised by Richard and by Siegel as problematic for the taking condition.

Part II, The Value of Reasoning, is also divided into two subsections of three essays each. The chapters in the first subsection, Rules for Reasoning, all focus in one way or another on rules or rational norms for reasoning—what rules or norms are there, and what accounts for their status as such? The subsection begins with Alex Worsnip's "Isolating Correct Reasoning." Worsnip focuses on the intuitive notion of rules for correct reasoning. For example, intuitively one reasons correctly when one reasons according to modus ponens, but not when one reasons according to a rule of affirming the consequent. What makes a given rule a genuine rule for correct reasoning? Worsnip first considers an answer in terms of justification: perhaps a rule for correct reasoning is one that preserves justification, in the sense that a thinker who reasons according to the rule never moves from justified to unjustified beliefs (or intentions, in the practical case). Against this proposal, however, Worsnip describes cases where, he argues, reasoning according to correct rules fails to preserve justification. He then turns to answers in terms of structural requirements on rationality rather than justification. For example, if one believes that p and believes that if p then q, but also believes that not-q, then one is "not entirely as one ought to be" rationally speaking (to borrow a phrase from MacFarlane 2004). Perhaps the corresponding modus ponens rule is correct because reasoning in the way it prescribes brings the thinker into conformity with this structural requirement on rationality. But Worsnip argues that this account ultimately fails as well, as does one that is formulated in terms of rational permission rather than rational requirement. Worsnip concludes by indicating how one might go about developing a positive conception of correct reasoning as a sui generis notion not reducible to justification or structural rationality.

Joshua Schechter's discussion in "Small Steps and Great Leaps in Thought: The Epistemology of Basic Deductive Rules" asks not about the rationality of reasoning according to certain rules, but about one's epistemic justification for doing so—and in particular, about our justification for taking some deductive rules, but not others, as basic.

³ The essays by Boghossian and Richard have a related genesis: Boghossian presented an earlier version of the current essay at the 2014 reasoning conference in Konstanz, and Richard presented an earlier version of his essay as a reply to Boghossian.
For example, we are plausibly within our epistemic rights to reason according to the modus ponens rule above, and to do so without having derived it from some other, more basic rules. But some deductive rules correspond to "great leaps" in reasoning that we are intuitively not justified in treating as basic. One of Schechter's main examples here is a rule corresponding to Fermat's theorem: From the belief that x, y, z, and n are positive integers with n > 2, infer that xⁿ + yⁿ ≠ zⁿ. What explains the fact that we are justified in using the modus ponens rule, but not the Fermat rule? Schechter considers and dismisses several ways of trying to answer this question. In particular, he argues that we cannot give a satisfactory explanation in terms of the idea that certain rules are (in some sense) built into the concepts that figure into the attitudes to which the rule applies. Rather, Schechter argues, the most promising way to account for the difference is in terms of pragmatic notions such as usefulness and indispensability.

Our contribution to the volume, "With Power Comes Responsibility: Cognitive Capacities and Rational Requirements," is concerned with the question of what sorts of rules for reasoning there are—and in particular, with the question of whether the same rules apply to all rational thinkers. We argue that they do not, because certain specialized cognitive capacities bring with them certain distinctive rules for reasoning. For a thinker who has the capacity to understand German, for example, it is rationally appropriate to reason about German utterances in ways that would not be rationally appropriate for a non-speaker; likewise, a medical specialist who is experienced in interpreting ultrasound images should draw different conclusions from her observations of an ultrasound image than a novice should from hers. In our view, these differences in what kinds of reasoning are appropriate are symptomatic of the fact that rationality imposes different requirements on thinkers who have a certain cognitive capacity than on those who lack it. We argue that this picture is more plausible than cognitivist accounts that try to capture the rational differences entirely in terms of subject-invariant structural coherence requirements on rationality (of the sort that Worsnip considers in his contribution; such requirements plausibly correspond to the kinds of "small steps" in reasoning that Schechter sees as basic in his contribution). We argue that our picture is also more plausible than perceptualist accounts that try to capture the rational differences entirely in terms of a subject-invariant rule of taking at face value how things consciously seem to one to be.
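Schechter's contrast between "small steps" and "great leaps" can be displayed in rule form (a schematic rendering of the two rules just mentioned, not notation drawn from his chapter). Both rules are truth-preserving, since Fermat's Last Theorem is true, but intuitively only the first may be treated as basic:

$$\frac{p \qquad p \to q}{q}\;\text{(modus ponens)} \qquad\qquad \frac{x, y, z, n \in \mathbb{Z}^{+} \qquad n > 2}{x^{n} + y^{n} \neq z^{n}}\;\text{(the Fermat rule)}$$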
The final subsection of the volume, Reasoning and Reasons, contains chapters that are concerned most directly with the connection between reasoning and the rational appraisal of individual attitudes such as beliefs and intentions. In "When Rational Reasoners Reason Differently," Michael G. Titelbaum and Matthew Kopec defend a version of permissivism, which denies that for any given evidence E and proposition p, E always uniquely justifies either belief in p, disbelief in p, or suspension of judgment. They thus reject the uniqueness thesis. (In fact, Titelbaum and Kopec identify several distinct versions of the uniqueness thesis, all of which they reject.) Titelbaum and Kopec's permissivism follows from two commitments: first, that a given body of evidence justifies an attitude towards a proposition only relative to a method of reasoning, a way of responding to the evidence; and second, that there are distinct, non-equivalent but fully rational methods of reasoning that a thinker could adopt. Thus, evidence E might make one thinker justified in believing that p relative to her method of reasoning, while E makes another thinker justified in disbelieving that p (or in suspending judgment) relative to his distinct method of reasoning. Titelbaum and Kopec address several arguments against permissivism and in support of the uniqueness thesis. Chief among these are concerns that permissivism will make it impossible for thinkers who disagree to rationally arrive at a consensus, and concerns that permissivism implies that what we believe in light of the evidence is arbitrary. In their response to both sorts of worries, Titelbaum and Kopec utilize the thought experiment of "the reasoning room" to illustrate how a group of permissivist thinkers can rationally arrive at a consensus and avoid charges of arbitrariness in their beliefs.

The second contribution in this subsection is "The Epistemic Innocence of Optimistically Biased Beliefs," by Lisa Bortolotti, Magdalena Antrobus, and Ema Sullivan-Bissett. The core notion of their essay is that of epistemic innocence, modeled after the legal concept of an innocence defense. Just as an agent can be held non-liable for an otherwise wrongful act because it helped prevent some serious harm—as when an agent causes physical harm in defense of herself or others—an irrational belief can be epistemically innocent when its epistemic benefits outweigh its irrationality. More specifically, an irrational belief is epistemically innocent when it confers some epistemic benefit on the thinker that could not be achieved by adopting an epistemically preferable belief instead. With this notion in hand, the authors turn to the phenomenon of optimistically biased beliefs—for example, cases in which the thinker overestimates her ability to control events, overrates herself or her performance relative to others, or makes unrealistic predictions about the likelihood of positive events in the future. Such beliefs are (or can be) irrational, and they reflect the influence of motivational factors on the reasoning processes whereby the beliefs are formed and maintained. Nevertheless, Bortolotti, Antrobus, and Sullivan-Bissett argue, optimistically biased beliefs are often epistemically innocent. One reason is that they contribute to the thinker's ability to socialize with others, which leads to better access to new information and feedback. Another reason is that they help sustain the thinker's motivation in pursuit of her epistemic (and other) goals. In short, optimistically biased beliefs are epistemically valuable, according to the authors, because they contribute to the acquisition and use of information, and to the exercise of epistemic virtues, in ways that could not easily be done by beliefs that are more realistic and more rational.

The final essay of the volume, Matthew Noah Smith's "Sovereign Agency," returns to the topic of practical reasoning. Smith aims to defend the claim that when one
arrives via correct reasoning—what he calls sound deliberation—at an intention to perform an act Φ, one’s reasoning itself constitutes a reason not to reopen deliberation on the question of whether to Φ.⁴ More surprisingly, Smith also argues that when one arrives by sound deliberation at an intention to Φ one thereby has a reason to Φ. This sort of “bootstrapping”—where one seems to get a reason to Φ just by deciding to Φ—has been taken by many to be impossible.⁵ Smith’s strategy in support of it is to argue, first, that deliberation has the function of authorizing the agent’s action, as well as the intention to act that is the result of deliberation and the proximal cause of the action; here, authorizing an action is roughly a matter of playing the functional role of being the agent’s reason for acting. So far this is a descriptive claim about the functional role of deliberation in the agent’s mental life. But Smith argues, second, that deliberation ought to play this role—that it is rationally appropriate for it to do so. From this Smith concludes that the agent’s deliberation, when it is sound, constitutes a normative reason for her to (intend to) act in the way her deliberation recommends. Smith goes on to address the worry that deliberation, on his view, can sometimes give an agent a reason to act wrongly. He argues that this worry can be defused in the same way as analogous worries that arise concerning other sources of reasons for action, such as promises and the law. Smith’s discussion here is illustrative of one of the broader strategies of the essay, which is to explore parallels between the authority of one’s own deliberations and the authority of the law.
References
Broome, John. 2013. Rationality through Reasoning. Oxford: Wiley Blackwell.
Carroll, Lewis. 1895. "What the tortoise said to Achilles." Mind 4 (14): 278–80.
MacFarlane, John. 2004. "In what sense (if any) is logic normative for thought?" Unpublished manuscript.
⁴ If this claim is correct, it could perhaps help capture the authoritative aspect of practical reasoning that Southwood observes in his contribution.
⁵ The analogous claim in the epistemic domain would probably strike many as absurd: that merely having reasonably arrived at the conclusion that p gives one a reason to believe that p.
PART I
The Nature of Reasoning
Reasoning as a Mental Process
2
Inference without Reckoning
Susanna Siegel

For helpful discussion, thanks to Paul Boghossian, Alex Byrne, Lauren Davidson, Hannah Ginsborg, Eric Hiddleston, Brendan Balcerak-Jackson, Zoe Jenkin, Rob Long, Janet Levin, Eric Mandelbaum, Antonia Peacocke, Jim Pryor, Jake Quilty-Dunne, Mark Richard, and audiences at Jean Nicod Institute, the 2017 NYU Mind and Language seminar, the 2016 UCSD winter perception workshop, Wayne State University, and Universidad Alberto Hurtado in Santiago, Chile. This chapter further develops the idea in chapter 5 of Siegel (2017).
Inference is a paradigm of person-level reasoning that redounds well or badly on the subject. Inferences can be epistemically better or worse, depending on the epistemic status of the premises and the relationship between the premises and the conclusions. For example, if you infer from poorly justified beliefs, or from experiences or intuitions that fail to provide any justification (e.g., you know they are false, or have reason not to endorse them), your conclusion will be poorly justified. The hallmark of inference is that the conclusions drawn by inferrers epistemically depend on the premises from which they are drawn.

We can be more exact about the inputs and upshots of inference than is allowed by the terms "premises" and "conclusions." It can be natural to use these words to label either psychological entities or propositions that are their contents. Both uses can be useful. But the inputs and upshots of inference are psychological entities, and these are the things that stand in relations of epistemic dependence of the sort characteristic of inference. If you infer a state with content Q (a Q-state) from a state with content P (a P-state), then your Q-state epistemically depends on your P-state. But if the inference is poor, then the proposition Q may not depend logically, semantically, or in any other way on the proposition P.¹ Other relations of epistemic dependence could be defined for propositions, but those dependence relations are not necessarily the kind that is established by inference.²

¹ I'm talking here and elsewhere as if psychological entities involved in inference are psychological states, rather than being either states or events. If judgments are events rather than states, this usage would suggest misleadingly that judgments are never relata of inferences. I'm omitting mention of events merely for brevity. Conclusion-states include beliefs, but other psychological states can be conclusion-states as well. Some inferences begin from suppositions, and issue in conclusions in which one merely accepts a proposition, without believing it. Here one uses inference to explore the consequences of a supposition, when it is combined with other things you accept or believe. In other cases, one might accept a proposition for practical purposes. That kind of acceptance can be either a premise-state or a conclusion-state.
Some phenomena aptly labeled "inference" don't redound on the subject's rational standing at all. For instance, inferences in which the premise-states are states of early vision with no epistemic power to justify beliefs fall into this category. Here I set those phenomena aside.

What makes a mental transition redound on the subject's rational standing in the specifically inferential way? According to a natural and forceful answer to this question, inference constitutively involves a kind of self-awareness. For instance, Paul Boghossian holds that inference is a form of person-level reasoning, which he says meets the following condition:

Self-awareness condition. "Person-level reasoning [is] mental action that a person performs, in which he is either aware, or can become aware, of why he is moving from some beliefs to others." (Boghossian 2014, p. 16)

If inference meets the self-awareness condition, then inferrers are never ignorant of the fact that they are responding to some of their psychological states, or why they are so responding.

What is inference like when it satisfies the self-awareness condition? Consider the proposal that one draws an inference by registering some information (where information can include misinformation) and reckoning that it supports the conclusion, with the result that one reaches the conclusion. On this model, the inferential route to drawing a conclusion has three components: the premise-states from which one infers, a reckoning state in which one reckons that the premise-states support the conclusion, and a "becausal" condition according to which one reaches a conclusion from the premise-states because one reckons that they support it. If this picture of inference had a slogan, it might be that in inference, one concludes because one reckons. The reckoning model is arguably found in Frege and discussed by many thinkers after him.³

The reckoning model of inference can specify the structure and components of inference that ensure that thinkers meet the self-awareness condition. Thanks to the reckoning state, reasoners do not infer in ignorance of what they are responding to. And thanks to the reckoning state's role in producing the inference's conclusion, the things to which reasoners respond are also reasons for which they draw their conclusions.
² For instance, Laplace (1814) and Chalmers (2012) explore relationships of knowability, in order to probe what else you could know a priori, if you knew all the propositions in a carefully defined minimal subclass.
³ Frege (1979) writes: "To make a judgment because we are cognisant of other truths as providing a justification for it is known as inferring." As Boghossian (2014) and others point out, Frege's formulation would restrict inference only to cases in which the judgment is justified by truths and one knows this.
In principle, the reckoning model could be divorced from the self-awareness condition on person-level reasoning. But since the reckoning model is motivated by the self-awareness condition, and since it serves the self-awareness construal of person-level reasoning so well, I'll say the reckoning model is canonical when it meets the self-awareness condition.

Via the self-awareness condition, the canonical reckoning model entails that inferrers can become aware of why they are moving from some beliefs to others, if they aren't so aware already. The canonical becausal condition is therefore a first-person rationalization of why the conclusion is drawn—not a merely causal condition. For example, if you infer that the streets are wet from the fact (as you see it) that it rained last night, then on the canonical reckoning model, you're aware that you believe that it rained last night, and that you take that fact to support the conclusion that the streets are wet. If asked why you believe the streets are wet, you could answer, correctly, that you believe this because it rained last night. The premise-states from which you draw your conclusion are accessible to reflection.

In the rest of this chapter, I argue that subjects can draw inferences in ignorance of the exact factors they are responding to. Inference can fail to satisfy the self-awareness condition, and therefore the canonical reckoning model is not true of all person-level reasoning. My argumentative strategy is to present putative cases of inference in which subjects fail to meet the self-awareness condition. If these situations are cases of inference, the canonical reckoning model cannot recognize them as such.

If these are cases of inference that the canonical reckoning model cannot account for, a natural next question is whether the fault is with the reckoning model per se, or with the self-awareness condition that makes the model canonical. To address this question, I consider non-canonical reckoning models that keep reckoning but divorce it from the self-awareness condition. I will argue that non-canonical reckoning models are either poorly motivated or else they face internal instabilities. The best way to account for the broad range of cases exemplified by my examples may then be by analyzing inference without appeal to reckoning. To bring such an alternative into focus, I present an approach to inference that leaves reckoning behind, and identify the family of rational responses to which it belongs.
1. Inference without Self-awareness: Examples

Sometimes when one categorizes what one perceives, one is not aware of which features lead one to categorize as one does. Consider the following example of categorizing a behavioral disposition.

Kindness. The person ahead of you in line at the Post Office is finding out from the clerk about the costs of sending a package. Their exchange of information is interspersed with comments about recent changes in the postal service and the most popular stamps. As you listen you are struck with the thought that the clerk is kind. You could not identify what it is about the clerk that leads you to this thought. Nor could you identify any generalizations that link these cues to kindness. Though you don't know it, you are responding to a combination of what she says to the customer, her forthright and friendly manner, her facial expressions, her tone of voice, and the way she handles the packages.
By hypothesis, there are some features of the clerk (facial expressions, manner, etc.) such that you reach the judgment that she is kind by responding to those features. And let's assume that kindness is not represented in your perceptual experience of the clerk. If it were, the judgment would be a case of endorsing an experience, rather than an inference made in response to one's perceptual experience.⁴

In other cases, one forms a judgment in response to a set of diverse factors, without being aware of everything one is responding to. Consider this example:

Pepperoni. Usually you eat three slices of pizza when it comes with pepperoni. But tonight, after eating one slice, you suddenly don't want any more. Struck by your own uncharacteristic aversion, you form the belief that the pizza is yucky. Though you don't know it, you're responding to the facts that (i) the pepperoni tastes very salty to you, (ii) it looks greasy, (iii) it reminds you of someone you don't like, who you recently learned loves pepperoni, and (iv) you have suddenly felt the force of moral arguments against eating meat. If the next bites of pepperoni were less salty, the greasy appearance turned out to be glare from the lights, you learned that your nemesis now avoids pepperoni, and the moral arguments didn't move you, the conclusion of your inference would weaken, and so would your aversion. You haven't classified what you see and taste as: too greasy, too salty, reminiscent of your nemesis, or the sad product of immoral practices. Nor are you consciously thinking right now about any of these things.

By hypothesis, there are features of the pizza (greasy, salty) and of your mind (you're reminded of nemesis, you feel the force of moral argument) that you're responding to, when you conclude that the pizza is yucky. On the canonical reckoning model, the kindness and pepperoni cases are therefore not cases of inference. This result seems implausible. Both cases meet the main diagnostic of inference: epistemic dependence. You could have better or worse reasons for the conclusion in each case, and that would make the conclusion better or worse. For instance, the fact that your nemesis likes it is a poor reason to take the pepperoni to be undesirable. It is generally irrational to avoid pepperoni because your nemesis likes it, but people often respond irrationally in just this way.⁵
⁴ Arguably believing P on the basis of an experience with content P can be an inference, since one is drawing on information one has already in a rationally evaluable way, and that’s a hallmark of inference. But there isn’t any need to pursue this question here.
Perhaps the grease, salt, and moral considerations are better. Epistemic dependence is also plainly evident in the kindness case. More description would be needed to determine how rational or irrational the response is, but it clearly has some status along this dimension. The features responded to in the kindness case might be poor grounds for concluding that the clerk is kind (who knows what she is like in other circumstances? Maybe she just moves carefully by habit unrelated to considering the value the package has for the sender or recipient). Alternatively, you might have good reason to take those features to indicate kindness.

If the kindness and pepperoni cases are inferences, neither of them are conscious inferences, even though they result in conscious judgments as their conclusion-state. But this feature of them does not preclude their being inferences, because in general the process of inferring doesn't have to feel like anything. You don't necessarily have to think anything to yourself, in inner speech or otherwise. You don't have to rehearse the reasoning that brought you to the conclusion. For example, while walking along a rainy street, you might come to a puddle and think that it is too big to hop across, so you will have to go around it. You need not think to yourself that you have to walk around the puddle if you want to keep your feet dry. A child playing hide and seek might not look for her opponent on the swing set, because the swings provide no place to hide, and hiding in plain sight is an option she doesn't consider.⁶

These inferences do not involve any more cognitive sophistication than what's needed to play hide and seek, or to keep one's feet dry. Yet it is clear that the thinkers in these cases end up drawing their conclusions by responding to information they have, and that their conclusion-states epistemically depend on the information they respond to. If you underestimate your own puddle-hopping abilities because you are excessively fear-ridden, your conclusion that you have to go around is ill-founded, and it is ill-founded because it is based on an ill-founded assumption that you can't jump such long distances. This case differs psychologically from cases of inference in which one rehearses the premises or conclusion to oneself or someone else. But it issues in the same relationship of epistemic dependence of a conclusion-state on other psychological states.

The only way for the reckoning model to respect the verdict that the kindness and pepperoni transitions are inferences is to adjust the reckoning state and its role in inference so that neither of them (alone or together) entails the self-awareness condition. Can the reckoning model be reinterpreted to account for them?
2. How to Lack Self-awareness in Inferring Q from P

To see what non-canonical reckoning might look like, let us zero in more closely on the kinds of self-ignorance it would have to respect. To analyze these kinds of self-ignorance, it is useful to have labels for the features to which the inferrer responds.
⁵ Tamir and Mitchell (2012).
⁶ A similar example is given by Boghossian (2014).
So let's unpack the premise-states in each case further, starting with the kindness case. Let's say that Q = the proposition that the clerk is kind, and F is the cluster of features F₁, F₂, F₃, that you respond to in concluding Q. Registering F and attributing it to the clerk amount to believing the premise the clerk has F. Being aware that you registered F and attributed it to the clerk is therefore a form of premise-state awareness. For the sake of argument, let's assume that when you register features F, F = a cluster of features the clerk actually has. (In a more complex example, your representation of the features could be falsidical, rather than veridical.)

We can then distinguish two main ways to fail to meet the self-awareness condition: premise-state unawareness and response-unawareness. The subject in the kindness case is response-unaware just in case she is unaware (and unable to become aware by reflection) that she concluded Q because she responded to F. And she is premise-state unaware just in case she is unaware (and unable to become aware by reflection) that she registers F and attributes it to the relevant thing(s). As I've defined these two forms of self-ignorance, premise-state unawareness entails response-unawareness. If one is unaware that one is in premise-state X, then one is unaware that one has responded to X.⁷

The kindness and pepperoni cases underdescribe the exact configurations of unawareness. There are several such configurations, but I'll focus mainly on two of them. In the first configuration, premise-state awareness combines with response-unawareness. For example, you are aware that you register the clerk's kind manner, but unaware that you are concluding that she's kind because of her manner. Schematically: you are aware that you registered F and attributed those features to the clerk, but unaware that you responded to F in concluding Q. This configuration also characterizes a natural version of the pepperoni case in which one is aware that one has noticed that conditions (i) and (ii) hold, but unaware that one is responding to the features described in those conditions (saltiness and greasiness).

In the second configuration, premise-state unawareness combines with response-unawareness if one is both unaware that one responded to F, and unaware that one registered F at all. In the pepperoni case, an inferrer may be unaware of conditions (iii) or (iv), by being unaware that pepperoni puts her in mind of the fact that her nemesis likes pepperoni. Similarly, the inferrer might be unaware that she has felt the force of arguments against eating meat. Assuming that the pepperoni inferrer is unaware that she's registering (iii) and (iv), she is also unaware that she's registering the conjunction of features (i)–(iv). Those features have no internal unity. They're a mere aggregate.

⁷ Premise-state unawareness can also occur alongside a different form of response-awareness, in which you are aware that you have responded to something in drawing your conclusion, but unaware that it is premise-state X.
Analogously, if the inferrer in the kindness case registers each of the features in F taken individually, it's a further claim that she attributes the conjunction of features to the clerk. What's needed for premise-state awareness is awareness that she attributes the conjunction F to the clerk. If she doesn't attribute the conjunction to the clerk, then she can't be aware that she does. And in a natural version of the case, if she does attribute it, she's unaware that she does.

Besides premise-state unawareness and response-unawareness, there is also an intermediate form of premise-unawareness, which combines conceptual premise-unawareness with non-conceptual premise-awareness. Here, the pepperoni-refuser may register (i) and (ii)—the pizza's being greasy and salty—without registering them as greasy and salty. This distinction lets us describe more exact versions of the kindness and pepperoni case. But those other versions aren't necessary for making the case against the canonical reckoning model, so I leave them aside.⁸
3. Unawareness in the Hands of the Reckoning Model

To account for cases of inference without self-awareness, the reckoning model has to adjust two of its main components: the reckoning state, in which the inferrer reckons that P supports Q; and the becausal condition, in which the inferrer concludes that Q because she reckons that P supports Q. The reckoning state in non-canonical reckoning must allow for premise-state unawareness, and the becausal state must allow for response-unawareness.

What does the becausal condition look like when the self-awareness condition is dropped? The canonical "becausal" condition entails at a minimum that the premise-states (or their contents) figure in a correct first-person rationalization of the conclusion that the inferrer could provide. You can explain that you concluded Q because: P. And you can explain that, because you reckon that: P supports Q. If response-unawareness precludes any such first-person rationalization, then a different interpretation of the "becausal" condition is needed. The natural proposal is that the becausal condition is merely causal.

Merely causal "becausal." The inferrer concludes that Q because she is in a reckoning state. The fact that she reckons that P supports Q causes her to conclude Q in response to P. She has available no correct first-person rationalization of why she concludes Q.

An internal instability arises when the merely causal becausal condition is combined with non-canonical reckoning states that I'll call reckoning de dicto, as opposed to reckoning de re.

⁸ I will also set aside another kind of self-ignorance potentially present in the cases, in which you are aware that you registered F, but unaware that you attributed F to the clerk.
4. Reckoning De Dicto
It’s consistent with the kindness and pepperoni cases that the inferrer might correctly sense that there are some features to which she has responded, while being unable to identify what features those are. For instance, if asked why one judged that the clerk is kind, one might say something like, “I can’t quite put my finger on it, but she just seemed to act kindly.” Given our assumption for the sake of argument that kindness (the property) is not presented in the experience, this type of report wouldn’t be a report of perceptual experience. In the pepperoni case, one might invent some reason for which one judges that pepperoni is yucky (“It doesn’t taste right”), even if it tastes the same way it always does. (Here too, we can assume that the contents of experience are unaffected by the conclusion.) The reckoning model can analyze these mental states by invoking a reckoning state in which the inferrer existentially quantifies over the features she responds to, and the reckoning has wide scope over this quantifier.
Reckoning de dicto. S reckons that (for some G: having G supports Q).
In reckoning de dicto, you believe that there are some features of the person such that she has those features, and the fact that she has those features supports the conclusion that she is kind, while having no beliefs (or other forms of opinion, such as intuition or suspicion) about which features play this role. As an analysis of what kind of reckoning might occur in the cases, this seems to respect the basic forms of response-unawareness and premise-unawareness, while still preserving a recognizable kind of reckoning. So invoking a de dicto reckoning state is a way for non-canonical reckoning to occur in inference without self-awareness. But de dicto reckoning fits poorly with the becausal condition in non-canonical reckoning. In the kindness and pepperoni cases, what causes you to draw the conclusion is (by hypothesis) that you respond to the particular features—F in the kindness case. You therefore do not conclude because you reckon de dicto, in a way that fails to specify the features. The non-canonical reckoning model predicts that if your reckoning state is de dicto, then you draw the conclusion because you reckon de dicto that some features or other support the conclusion. That prediction goes against a central feature of the cases, which is that there are specific features you’re responding to in drawing the conclusion. You are in the de dicto reckoning state because you are responding to the specific features that by hypothesis move you to the conclusion. Your reaching the conclusion is explained by that response, not by the de dicto reckoning state. When combined with a de dicto reckoning state, then, the becausal condition posits the de dicto reckoning state as the putative cause of drawing the conclusion. Here lies its mistake. The de dicto reckoning state is not proportional to the causal upshot of drawing the conclusion, and therefore lacks explanatory force. The explanatory weight is carried by the fact that you respond to specific features.
In a non-canonical reckoning model, then, the only admissible reckoning states seem to be states of reckoning de re.
Reckoning de re.
For some F (S reckons that: having F supports Q).
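The difference between the two schemas is purely a matter of quantifier scope. Rendered in quantifier notation (the regimentation is mine, not the text’s own):

\[
\textit{De dicto:}\quad \mathrm{Reckons}_S\big[\exists G\,(\text{having } G \text{ supports } Q)\big]
\qquad
\textit{De re:}\quad \exists F\big[\mathrm{Reckons}_S\,(\text{having } F \text{ supports } Q)\big]
\]

In the de dicto schema the existential quantifier falls within the scope of the reckoning attitude, so the reckoning concerns no feature in particular; in the de re schema the quantifier binds the feature from outside, so the reckoning concerns the particular features themselves.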
Like reckoning de dicto, reckoning de re can in principle respect the two kinds of selfignorance we’ve focused on: premise-state unawareness and response-unawareness. For instance, reckoning de re could be entirely inaccessible: Inaccessible reckoning. You are unaware and can’t become aware by reflection that you reckon that: P supports Q. On the reckoning model, reckoning must be inaccessible, when there is premise-state unawareness or response-unawareness. Unlike reckoning de dicto, inaccessible reckoning de re fits perfectly well with the merely causal becausal condition. Whereas reckoning de dicto would (ceteris paribus) make available a first-person rationalization, inaccessible reckoning de re does not make available that kind of becausal condition. Inaccessible reckoning de re is a way to preserve the reckoning model while accounting for inference without self-awareness. When the self-awareness condition is met, reckoning de re adds a lot. It precludes the sense that the inferrer proceeds in ignorance of what she is responding to. It also opens up a potential problem made vivid by Lewis Carroll: the threat that the reckoning state would be forced into the role of a premise, leading to a regress. That threat arises for any kind of reckoning state, accessible or otherwise. Proponents of the reckoning model have proposed various answers to the threat.⁹ De re reckoning, like reckoning in general, needs a solution to the regress problem. What, if anything, does inaccessible reckoning add to the fact that you respond to features F or (i)–(iv)? It is hard to say. Given that you respond to particular features, is it necessary to posit any further structure to have an illuminating account of inference? In the rest of this chapter, I give some reasons to think that the nature of inference may be illuminated even without positing any structure beyond what’s posited by the hypothesis that inferring is a distinctive kind of response to an informational state, or to a combination of such states, that produces a conclusion. I’ll call this hypothesis the response hypothesis. The distinctively inferential kind of response to information is formed when one reaches a conclusion. The reckoning model entails the response hypothesis, but the response hypothesis does not entail the reckoning model. We can understand quite a bit about what inference is by contrasting it with neighboring mental phenomena and reflecting on what underlies these contrasts. The remaining discussion is an exercise in illumination without analysis.
⁹ Recent examples include Chudnoff (2014) and Pavese (ms).
5. What Kind of Response is Inferring?
It is useful to begin by looking more closely at what kinds of response inferential responses could be. In English, “response” can denote a mental state that one comes to be in by a certain dialectical process. For instance, a response to a question can be an answer. X’s response to Y’s claim can be to deny it. A response to a line of reasoning can be a belief. For instance, suppose you rehearse for me your reasoning that the tree’s apricots are ripe, because apricots ripen when they’re pale orange, and the apricots on the tree are pale orange. In response to the part of your reasoning that follows “because,” I too, like you, might form the belief that the apricots on the tree are ripe. It would be natural to say that one of my responses to this part of your reasoning is the same as your response: it’s to believe that the apricots are ripe. These observations about English suggest that when we examine inference as closely as we must in order to understand its nature, we will find several different things in the same vicinity, all of which are natural to call “responses.” First, there is the route by which one came to the belief, which in the apricot example is: inferring. Attempts to analyze inference need a label for this route. In contrast, there is the conclusion-state at the end of this route, which in the apricot example is a belief. Finally, there is the conjunction of these two things, and this conjunction is arguably what’s denoted by the most natural uses of “response” in English. When we say “Y’s response to X’s claim that P was to deny it,” we are not denoting only Y’s claim that P is false, independently of what prompted it. We are also saying that Y claimed that P is false, in response to X. We are identifying a mental state in part by the type of route by which it was formed. In the apricot example, this kind of response is the belief that the apricots are ripe, together with the route by which that belief is formed. The response hypothesis is that inferring is a distinctive way of responding to an informational state that produces a conclusion. If the response hypothesis is true, then the distinctively inferential response is a locus of epistemic appraisal. An adequate theory of that type of response should identify the dimensions along which inferences can be epistemically better or worse. When the conclusion of inference is a belief, these will be dimensions of justification. In explicating the notion of a response, I’ll initially talk as if inputs are evidence. But the status of inputs as evidence is not essential to the notion of response. What’s important is that the inputs are informational states of the subject. What is it to respond to evidence that one has? Consider ordinary updating of beliefs. If you see someone in the room walk through an exit, normally you’ll believe they are not in the room anymore. This is an automatic adjustment of belief in response to changing perceptions. Responses to evidence are often less automatic when it takes some effort to recall the relevant facts (how far are you from your destination? How many miles per gallon does the car get?) and to think the matter through. In both cases, responses to evidence involve some ordinary sense in which you appreciate the force of the evidence you are responding to, even if the
“appreciation” takes the form of registering support rather than a representational state, such as belief or an intuition, that the evidence rationally supports the proposition you come to believe.¹⁰ It seems doubtful that the mental activity involved in responding to evidence can be explained in terms of any other psychological notion. The notion of a response can be brought further into focus by contrasting it with a range of different relations that a subject could stand in to psychological states, distinct from inferentially responding to them. These relations group into three kinds: failures to respond to informational states; responses to something other than informational states; and non-inferential responses to informational states.
5.1. Failures to respond: mental jogging and bypass
The first relation is that of failing to respond in any way at all to an informational state. First, suppose that after looking in three rooms for your passport, you form the belief that it isn’t anywhere else in the house. The mere sequence of searching and then forming the belief does not settle what kind of response the belief is to the information you got from searching, if it is any response at all. You could form the belief spontaneously, without its being any sort of response to the information you got from looking—not even an epistemically poor response in which you jump to the conclusion that your passport is lost. Two subjects could move from the same mental states to the same conclusions, where only one of them is inferring the conclusion from the initial mental states. The other one’s mind is simply moving from one set of states to another. Adapting an irresistible term from John Broome, we could call a transition from informational state A to informational state B “mental jogging” when state B is not any kind of response to state A.¹¹ What’s the difference between mental jogging and inferring? A natural suggestion is that whereas there is no response in mental jogging, there is in inference. If you drew an inference from the information you got while looking for your passport, perhaps together with some background assumptions, you were responding to the information and assumptions. In the case where information is evidence that a subject has, she could bypass that evidence, instead of drawing inferences from it. You could have some evidence that the café is closed on Mondays (for instance, by knowing that it is closed on
¹⁰ For discussions of other forms that appreciation might in principle take, see Fumerton (1976), Audi (1986, 2001), Tucker (2012), and Boghossian (2014). Since “appreciation” is factive, these examples must be construed as ones in which the evidence does in fact support what you come to believe. ¹¹ Broome (2013) uses “mental jogging” to denote a more limited phenomenon, which is a foil for reasoning as he construes it. Broome writes: “Active reasoning is a particular sort of process by which conscious premise-attitudes cause you to acquire a conclusion-attitude. The process is that you operate on the contents of your premise-attitudes following a rule, to construct the conclusion, which is the content of a new attitude of yours that you acquire in the process. Briefly: reasoning is a rule-governed operation on the contents of your conscious attitudes” (p. 234). By contrast, for Broome, mental jogging is an inference-like transition in which you reach the conclusion from your premise-attitudes without following a rule.
Mondays), and yet nonetheless plan to have lunch at that café on Monday, failing to take into account your knowledge that the café will be closed then. You are not discounting that evidence, because you are not even responding to it at all in believing that you will have lunch at that café on Monday. Another example of bypassing evidence comes from a kind of change-blindness in which you fixate on an object that changes size, yet you fail to adjust your beliefs in response to the information about the size change that we may presume you have taken in, given your fixation on the object. This phenomenon is illustrated by an experiment that uses a virtual reality paradigm.¹² In the experiment, your task is to select the tall yellow blocks from a series of blocks that come down a belt, and move them off to one side. Short yellow blocks and blocks of other colors should stay on the belt. In the experiment, after you have picked up a tall yellow block but before you have put it in its place, the block shrinks (hence the virtual reality set-up). But many subjects keep on with their routine of putting the shortened block where it doesn’t belong—in the place designated for tall yellow blocks. They are fixating on the block, and for the sake of illustrating bypass, we can presume they are experiencing the block as short. But they are not discounting this information when they maintain their belief that the block belongs with the other tall yellow ones. They are not even responding to this information. Their belief that the block is (still) tall and yellow bypasses evidence that it is short. Bypass is a special case of mental jogging, as these relations have been defined. The concept of bypassing evidence is useful, since it highlights a form of mental jogging that is epistemically detrimental. So far, I’ve contrasted inferring with mental jogging from one informational state to the next, and in particular with bypassing the information in an informational state. The difference between inferring and these relationships is well captured by the idea that the subject is responding to information in inference, but is not responding to it in any way in the other cases. The next two relations highlight the differences between what one responds to in inference, and what one responds to in other cases: processes fueled by rhythm and rhyme, and association between concepts.
¹² Triesch et al. (2003).
5.2. Responses to non-informational states: rhythm and rhyme, and association
The next two relations are non-inferential responses to non-informational states. Suppose you say to yourself silently that sixteen people fit in the room. If you went on to hear yourself think that there are sixteen days till the next full moon, you might end up making this transition because these sentences (half-)rhyme and follow a rhythm (“Sixteen people fit in this room. Sixteen days till the next full moon”). In the
guise of inner speech, the second thought would be a response to the rhythm and sound of the first innerly spoken thought. By contrast, inferring is not a response to rhythm and rhyme. It is indifferent to rhythm and rhyme. Responding to information differs from responding to concepts. In associative transitions, one responds to the concepts in the informational state, rather than to any truth-evaluable portion of the state’s content. For instance, suppose that observing at dusk that the sky is growing dark, you recall that you need to buy lightbulbs. This transition from observation to memory is fueled, let’s suppose, by the fact that you associate the concepts “darkness” and “light.” Here, truth-evaluable states of observation and memory are linked merely by association. But we can distinguish these relata of the associative movement from the things to which one is responding. One is responding to the concept of darkness, not to the truth-evaluable observation of darkness in which it figures. Abstracting from the example, an associative transition from thoughts involving a concept X (X-thoughts) to thoughts involving a concept Y (Y-thoughts) puts no constraints on which thoughts these are. Whenever one thinks a thought involving the concept “salt”—such as that the chips are salty, or that the soup needs more salt, or that salt on the roads prevents skidding—one is disposed to think a thought—any thought—involving the concept “pepper.” Associations leave entirely open what standing attitudes the subject has toward the things denoted by the concepts, such as salt and pepper. A subject may have zero further opinions about salt and pepper. The concepts may be no more related in their mind than the words “tic,” “tac,” and “toe.” Which thoughts are triggered is constrained only by the linked concepts, not by any attributions a subject makes using the concept, such as attributing saltiness to the soup. In contrast, in inference, one responds to information that admits of predicative structuring.
5.3. Non-inferential responses: narrative and attention
The third pair of relations consists of non-inferential responses to informational states. For instance, thinking that it is dark outside might make you imagine that you could turn on the sky by switching on a giant lightbulb. The image of tugging a chain to turn on the sky, in turn, makes you remember turning on your lamp, and finding that the bulb was burned out. You then recall that you need to buy lightbulbs. The transition in your mind from the dark-outside thought to the need-lightbulbs thought exploits what one knows about lightbulbs, darkness, and light.¹³ Rather than being an inferential response, it is a response to narrative possibilities generated by the states that one is in.
¹³ Boghossian’s depressive (2014), who is supposed to illustrate a transition that isn’t inference, has a wandering mind that creates a narrative depicting himself as isolated from those people in the world who are having fun, and resonant with suffering people. Upon thinking that he is having fun, the depressive goes on to think that there is much suffering in the world. The case is not described fully enough to identify what kinds of transitions the depressive is making, but on many natural elaborations, these transitions would include inferences made in response to aspects of his outlook, such as concluding that there is much suffering in the world from something like “the fact that I’m having fun is an anomaly.” He may already believe the conclusion but arrive at it freshly from this thought.
The example makes evident that you need not be drawing a poor inference from “It’s dark outside” to “I need to buy lightbulbs,” in order to respond to the information that it’s dark outside. The norms for generating narratives differ from the norms for responding inferentially, even though one could respond in either way to the same informational state, such as a thought that it is dark outside. A single transition could be a decent development of a narrative by the standards of vivid fiction, but poor by the standards of inference. A different relation to informational states is for them to direct one’s attention. For instance, suppose your belief that there are pelicans nearby heightens your awareness of potential pelicans. It puts you on the lookout for pelicans. You tend to notice pelicans when they’re there. When you notice them, your belief that pelicans are nearby does not affect how you interpret what you see. It simply directs your attention to places where pelicans are likely to be, without otherwise influencing which experiences you have when you attend to those places. In this kind of case, your belief that pelicans are nearby helps explain why you form beliefs that you’d express by saying “I am now seeing a pelican” or “There is another pelican.” This explanation, however, is mediated by your perception of pelicans. And those perceptions would normally give rise to the same beliefs, whether or not your attention had originally been directed to the pelicans by your prior belief that pelicans are nearby, and whether or not you had the prior belief that pelicans are nearby. In contrast, for you to infer that you’re seeing a pelican (or that X, which you can see, is a pelican) from the belief that pelicans are nearby, you’d have to respond to the information (or perhaps misinformation) that pelicans are nearby in a special way. This special way is neither necessary nor sufficient for the belief to direct your attention toward pelicans. Schematically, the contrasts drawn so far are between inferring Q from P, and these other transitions from a P-state to a Q-state:
• mentally jogging from the P-state to the Q-state, for instance by bypassing the information in the P-state in forming the Q-state;
• rhythm and rhyme: moving from the P-state to the Q-state because words used to express P and Q rhyme or follow a rhythmic groove;
• association: moving from the P-state to the Q-state by associating a concept occurring in the P-state with a concept occurring in the Q-state;
• narrative: moving from the P-state to the Q-state by constructing a narrative that links them;
• attention: moving to the Q-state because the P-state directs your attention to a property that the Q-state is about.
On the face of it, what’s lacking from these cases is a distinctive way of responding to the P-state that produces the Q-state. These transitions fail to be inferences, because they lack this kind of response.
5.4. Epistemic differences between poor inference and non-inference
A useful test for whether the contrasts I’ve drawn help illuminate the distinctively inferential response is to consider whether they shed any light on the difference between poor inferences and the various other non-inferential relations. Recall the example of bypass involving change-blindness. Suppose you do not respond to the change in the size of the block. You persist in believing that the block is tall, when in fact the (virtual) block has shrunk, and you have taken in this information, but have not adjusted your belief or actions. Assuming that you have evidence that it is short, your belief that the block is tall is maintained in a way that fails to take account of some highly relevant evidence. This epistemic situation involves bypassing the information that the block is short. Contrast bypassing that information with drawing a poor inference from it. You start out believing the block is tall, and then, after the block shrinks, you freshly infer that the block is (still) tall, irrationally discounting its short appearance. Here, too, a belief is formed in a way that fails to give some highly relevant evidence its proper weight. There’s a level of abstraction at which the epistemic flaw in both cases is the same: one fails to take proper account of highly relevant evidence. The belief that P in both cases lacks propositional justification for P. Going along with that shared flaw, in both cases, the information that the block is short defeats the belief that the block is tall. And at the same high level of abstraction, in both cases, the subject’s ultimate belief that the block is tall (after the block shrinks) is ill-founded: it is formed (in the inference case) or maintained (in the bypass case) epistemically badly. Alongside these similarities, there is also a major epistemic difference between the bypass and inference cases. According to the response hypothesis, the response to P is the locus of epistemic appraisability in inference. It’s the response to P that’s epistemically bad-making. The epistemic badness is found along a further dimension that is missing in the bypass case. Its badness is not just the negative feature of failing to be based on adequate propositional justification, or failing to take relevant evidence into account. Nor is it the generic feature of being badly based, simpliciter. Instead, the badness of the inference is located in the response. If one inferred from a short-block experience that the block is tall, without any assumptions that explain the disconnect between size and apparent size, that would be a poor inference. More generally, according to the response hypothesis, the epistemically relevant features of inference reside in the distinctively inferential responses.
6. Intelligence without Reckoning
Perhaps the most principled challenge to the response hypothesis is a dilemma. Either, in inference, one appreciates or registers the rational relationship between inferential inputs and conclusions (or purports to do so) in the form of a reckoning state, or else one’s mind is merely caused to move from one state to another. If there is no such reckoning state, then the informational state can make a causal impact on the thinker, but cannot make a rational impact. The picture of inference without reckoning allows a third option. It is possible to respond rationally to an informational state without a reckoning state that represents what makes that response rational. One’s acknowledgment of rational support consists in the response, rather than taking the form of a state that represents the support relation. In allowing self-ignorant inferences, inference without reckoning places them in a family of rational responses in which one cannot identify what one is responding to. This family arguably includes a range of emotional and aesthetic responses. For instance, arguably, anger or indignation can be fitting or unfitting, even when one cannot identify with any confidence what features of the situation are making one angry. One might walk away from an interaction indignant and confident that the situation merits that response, yet unable to articulate what about the situation has led one to feel that way. Similarly, the unbridled joy many people feel upon the birth of their children can intelligibly leave them wondering exactly what it is about the new configuration of life that makes them full of joy. In the domain of aesthetic responses, on some plausible analyses, finding jokes funny has the same feature. One might never be able to pinpoint what makes something funny when it is, yet for all that, the joke might merit amusement or not. Judgments of beauty as Kant construed them have something like this feature as well, in that even when judgments of beauty are fitting, they do not result from applying determinate concepts to the thing judged beautiful, or from following a rule that takes certain types of features of those things as inputs and delivers as the output a classification of it as beautiful. “There can be no rule according to which someone should be obliged to recognize something as beautiful.”¹⁴ And on one model of literary criticism, the task of criticism is in part precisely to articulate the features of a work that are responsible for the impact it has on its readers—both to develop those responses further and to explore which initial responses are vindicated.¹⁵ These kinds of emotional and aesthetic responses are arguably intelligent yet partly self-ignorant responses. In this respect, they are directly analogous to self-ignorant inference without reckoning. Whereas the canonical reckoning model might be seen as identifying the pinnacle of intelligent responses with self-aware
inferences, inference without reckoning allows that inference can tolerate the kinds of self-ignorance described here. Whatever epistemic improvements might result from being able to pinpoint what one is responding to and why, in aesthetic, emotional, or rational domains, the initial responses one makes prior to any such attempt can still reflect the intelligence of the responder.
¹⁴ Kant, Critique of Judgment, Book 1, section 8. ¹⁵ For instance, Richards (1924). For discussion, see North (2017).
References
Audi, R. (1986). “Belief, Reason, and Inference.” Philosophical Topics 14 (1): 27–65.
Audi, R. (2001). The Architecture of Reason: The Structure and Substance of Rationality. Oxford University Press.
Boghossian, P. (2014). “What is Inference?” Philosophical Studies 169 (1): 1–18.
Broome, J. (2013). Rationality Through Reasoning. Wiley Blackwell.
Chalmers, D. (2012). Constructing the World. Oxford University Press.
Chudnoff, E. (2014). “The Rational Roles of Intuition.” In Booth, A. and Rowbottom, D., eds., Intuitions, 9–35. Oxford University Press.
Frege, G. (1979). “Logic.” In Hermes, H., Kambartel, F., and Kaulbach, F., eds., Long, P., and White, R., trans., Posthumous Writings. University of Chicago Press.
Fumerton, R. (1976). “Inferential Justification and Empiricism.” Journal of Philosophy 74 (17): 557–69.
Laplace, P. S. [1814] (1995). Philosophical Essay on Probabilities, trans. Dale, A. I. Springer.
North, J. (2017). Literary Criticism: A Concise History. Harvard University Press.
Pavese, C. (ms.). Reasoning and Presupposition.
Richards, I. A. (1924). Principles of Literary Criticism. Routledge.
Siegel, S. (2017). The Rationality of Perception. Oxford University Press.
Tamir, D. and Mitchell, J. (2012). “Anchoring and Adjustment During Social Inferences.” Journal of Experimental Psychology: General. Advance online publication.
Triesch, J., Ballard, D., Hayhoe, M., and Sullivan, B. (2003). “What You See is What You Need.” Journal of Vision 3 (1): 86–94.
Tucker, C. (2012). “Movin’ On Up: Higher-level Requirements and Inferential Justification.” Philosophical Studies 157 (3): 323–40.
3
A Linking Belief is Not Essential for Reasoning
John Broome
This chapter results from long and useful discussions with Herlinde Pauer-Studer. I received other valuable comments from audiences at Bayreuth, Tübingen, and Canberra. Research for the chapter was supported by ARC Discovery Grant DP140102468.
1. Introduction
Reasoning is a mental process through which you acquire a new attitude—the ‘conclusion attitude’—on the basis of attitudes you already have—the ‘premise attitudes’. It is very natural to think that, if a process is to be genuinely reasoning, you must believe that the conclusion attitude is linked to the premise attitudes in some way that makes it appropriate to have the conclusion attitude on the basis of the premise attitudes. I adopted this natural view in my book Rationality Through Reasoning; I assumed you must have a ‘linking belief’, as I call it. I now withdraw this view. In this chapter I shall argue it is not true for reasoning in general, though it may be true for the particular case of reasoning whose conclusion attitude is a belief. Moreover, even in cases where a linking belief is a necessary condition for a process to be reasoning, it is not an essential condition. It is not part of what makes a process reasoning.
2. A First-order Linking Belief: The Taking Condition
You wake up in the morning and hear rain, so you believe it is raining. You have a standing belief that, if it is raining, the snow will melt. Bringing these two beliefs together, you conclude that the snow will melt. This latter process (not the perception but the drawing of the conclusion) is a piece of reasoning. What makes it so? What in general makes a process reasoning? Here are some essential features of a reasoning process. It is a mental process: it takes place in the mind, and it starts and ends with mental states. These mental states
are, more specifically, propositional attitudes: they are relations you stand in to particular propositions. These propositions are commonly called the ‘contents’ of the attitudes. As convenient terminology, I shall refer to the ‘premise attitudes’ and ‘conclusion attitude’ of the process, and more specifically to a ‘premise belief’, ‘premise intention’, ‘conclusion belief’, or ‘conclusion intention’. The process of reasoning starts from some premise attitudes that you already have, and ends with a conclusion attitude that you acquire in the process. A further essential feature of a reasoning process is that it is in some way causal: through reasoning your initial attitudes cause or give rise to your conclusion attitude. Not every process that satisfies the description I have so far given is reasoning. When some attitudes of yours give rise to another through some mental process, the process is not necessarily reasoning. For example, when you come to believe there is a spider in the room, this belief may cause you to intend to leave the room as quickly as possible. This may be just the causal result of your arachnophobia, and not a process of reasoning at all. So we need to know what distinguishes processes that are reasoning from other mental processes involving propositional attitudes. I am particularly concerned with active reasoning, which is to say reasoning that is an act—something you do. There may also be a kind of reasoning that is not an act. This would be a process that happens in you or to you, and has features that qualify it as reasoning, but is not something you do. If there is such a thing, I call it ‘passive reasoning’. It would be like digesting your food, whereas active reasoning is like eating your food. I am interested in active reasoning only, and in this chapter ‘reasoning’ refers only to active reasoning. The question I am asking is what makes a process active reasoning, when it is. Until Section 5 of this chapter, I shall concentrate on ‘belief reasoning’. By this I mean reasoning in which the premise and conclusion attitudes are beliefs. The snow reasoning is an example. In the case of belief reasoning, we may call the contents of the premise beliefs ‘premises’ and the content of the conclusion belief ‘the conclusion’. I shall come to other kinds of reasoning later. In my book Rationality Through Reasoning I asked the question I am asking now: What distinguishes reasoning from other mental processes? When discussing belief reasoning specifically, I offered as a necessary condition for a process to be belief reasoning that you have a ‘first-order linking belief’, which I defined as follows:
This is a belief that links together the contents of your attitudes. In the case of belief reasoning, it is specifically the belief that the premises imply the conclusion. By that I mean simply that you believe a conditional proposition. When the premises are p, q, r and so on, and the conclusion is t, you believe that, if p, q, r and so on, then t.¹
This definition specifies how I use the word ‘imply’. I do not use it for logical implication only, but for any conditional relation. ¹ Broome (2013), p. 229.
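Put schematically (the notation is mine, extrapolating from the definition just quoted): where the premises are p₁, …, pₙ and the conclusion is t, a first-order linking belief is a belief whose content is the conditional

\[
(p_1 \wedge p_2 \wedge \dots \wedge p_n) \rightarrow t,
\]

with ‘\(\rightarrow\)’ read as whatever conditional relation ‘imply’ expresses here, which need not be logical implication.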
I added as a further necessary condition that this linking belief plays a causal role in the reasoning.² I describe this linking belief as first-order because its content directly links together the contents of the premise attitudes and conclusion attitude—the premises and the conclusion, that is to say. You might also have a second-order linking belief, whose content links together the attitudes themselves. An example is the belief that you ought to believe the conclusion if you believe the premises. I shall mention second-order linking beliefs in Section 7, but until then I shall consider only first-order linking beliefs. Paul Boghossian also offers a necessary condition for a process to be belief reasoning. He calls it ‘the taking condition’ and presents it this way:
Inferring necessarily involves the thinker taking his premises to support his conclusion and drawing his conclusion because of that fact.³
I think Boghossian’s condition and mine are effectively the same; the differences are only apparent. Boghossian speaks of inferring where I speak of reasoning, but belief reasoning can equally well be called ‘inferring’. Boghossian speaks of the premises’ supporting the conclusion, whereas I speak of their implying the conclusion. But I think there is no real difference here. Support may be weak or strong, and I assume that Boghossian has in mind support that is as strong as implication. If you are to believe a conclusion on the basis of premises, you must think that the conclusion is true if the premises are true. It would not be enough to think merely that they give some weaker support to the conclusion—for instance, that they increase the probability of the conclusion. Finally, Boghossian refers to taking rather than believing. This is because he means to allow for an attitude that is less than explicit belief. But I shall explain that I also allow the linking belief to be only implicit, so there is no difference here either.
3. What Reasoning Is
In Rationality Through Reasoning, I claimed that the existence of a first-order linking belief is necessary for a process to be belief reasoning. I no longer make this claim but I do not deny it either. There is an argument for it that I shall describe in Section 4. I find it plausible but not conclusive. In any case, this claim is not required by my account of reasoning. I never claimed that a first-order linking belief is essential to belief reasoning. That is to say, I never claimed that a first-order linking belief is part of what makes a process belief reasoning.⁴ If it is necessary for belief reasoning, that is because it is the consequence of a different necessary condition. That different condition is essential to belief reasoning.
² Broome (2013), p. 229. ³ Boghossian (2014), section 3. ⁴ For the difference, see Fine (1994).
The different condition, which is also presented in Rationality Through Reasoning, is that you operate on the contents of your attitudes, following a rule.⁵ In the snow example, the contents of your two premise beliefs are the proposition that it is raining and the conditional proposition that if it is raining the snow will melt. The first of these premises is the antecedent of the second. You operate on these contents, following the modus ponens rule, which tells you to derive the proposition that is the consequent of the second premise. You end up believing that proposition. That is, you end up believing that the snow will melt. In general, the modus ponens rule is to derive the proposition that q from the proposition that p and the proposition that if p then q. Possibly, the rule you follow is not the modus ponens rule. For example, you might instead follow the rule of deriving the proposition that q from the proposition that p and the proposition that if p then q, provided these propositions are about the weather. This rule is narrow but correct. Alternatively, you might follow the rule of deriving the proposition that q from the proposition that p and the proposition that if p then q provided the date is before 2100, and deriving the proposition that not q at later times. This rule is incorrect. My essential condition for reasoning requires you to follow a rule, but it does not specify which rule. Nor does it require the rule to be correct. If you follow an incorrect rule, you reason all the same, though you reason incorrectly. The correctness of reasoning is not an issue in this chapter. The core of my account of reasoning is that in reasoning you follow a rule. Since the notion of following a rule is difficult, this needs more explanation. It is because you follow a rule that reasoning is something you do. The rule does not merely cause you to behave in a particular way, as a program causes a computer to behave in a particular way. The rule guides you and you actively follow it. What is this guidance? Part of it is that you behave as you do because of the rule; the rule explains your behaviour. More than that, it explains your behaviour in a particular way. The rule sets up a standard of correctness, and your recognition of this correctness is part of the explanation. When you are guided by a rule, your behaviour seems to you correct relative to the rule or, if it does not, you are disposed to correct your behaviour. A disposition to correct your behaviour is essential to being guided. To follow a rule is to manifest a particular sort of disposition that has two components.⁶ The first component is a disposition to behave in a particular way. Here I use ‘behave’ very generally, to include mental processes such as reasoning and coming to have a belief. The second component is a disposition for the behaviour to ‘seem right’ to you, as I put it.⁷
⁵ Broome (2013), p. 234. ⁶ This dispositional account of following a rule is set out in Broome (2013), section 13.4. It is intended to overcome difficulties raised by Paul Boghossian (2008 and 2014). ⁷ The phrase comes from Wittgenstein’s Philosophical Investigations (1968), remark 258.
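For reference, the correct and incorrect rules described above can be displayed schematically (the formalization is mine, not Broome’s):

\[
\text{Modus ponens: from } p \text{ and } (p \rightarrow q), \text{ derive } q.
\qquad
\text{Incorrect rule: from } p \text{ and } (p \rightarrow q), \text{ derive } q \text{ before 2100 and } \neg q \text{ thereafter.}
\]

On the account just given, following either rule counts as reasoning; only following the first counts as reasoning correctly.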
Seeming right is not a phenomenal state, though it may be associated with one. Compare a different example. When a proof seems right to you, you may be in no particular phenomenal state; your state may simply be that you can find no fault with the proof. Seeming right is an attitude towards your behaviour. An essential part of it is being open to the possibility of correcting your behaviour. When a process seems right to you, you are open to the possibility that it may no longer seem right to you if a certain sort of event were to occur. We may call the event ‘checking’. Checking may consist simply in repeating the process, or it may consist in a different process. If you are asked ‘Three fours?’, you will probably spontaneously answer ‘Twelve’, and this will seem right to you. You may check your conclusion by calling up a spontaneous response once again, or you may count on your fingers. Your openness to correction is a disposition. You are disposed to lose the attitude of seeming right in particular circumstances—specifically if you check and your checking produces a different result. This is often a counterfactual disposition, since you often do not check. You may not be disposed to check, perhaps because you are confident of your conclusion. Nevertheless, you have the counterfactual disposition to change your attitude if you were to check and if the checking produced a different result. In sum, in following a rule you manifest a complex disposition to behave in a particular way and for this to seem right. When you follow a rule in reasoning, you manifest a particular rule-following disposition, which I shall call a ‘reasoning disposition’. In the snow reasoning, your reasoning disposition is the disposition to believe the snow will melt on the basis of believing it is raining and believing that if it is raining the snow will melt, and for this to seem right to you.
4. An Implicit Linking Belief
In Rationality Through Reasoning, I argued that this reasoning disposition constitutes an implicit belief that, if it is raining and if it is the case that if it is raining the snow will melt, then the snow will melt. My reason was simply that, if you did not have this belief, you would not believe the snow will melt on the basis of your premise beliefs, or if you did, it would not seem right to you. Just because you have the rule-following disposition, it is therefore correct to impute this implicit belief to you. It is a first-order linking belief. That is why I took a first-order linking belief to be a necessary condition for reasoning. Following a rule is essential to reasoning, and following this particular rule manifests a disposition that is implicitly a linking belief. That was my argument. A belief is a bundle of dispositions. It typically includes dispositions to behave in particular ways. It also typically includes a disposition to assert the belief’s content in some circumstances. The reasoning disposition includes some but not all of the dispositions that typically constitute a belief. In the example, it includes the disposition to believe the proposition that the snow will melt on the basis of the proposition
that it is raining and the proposition that if it is raining the snow will melt. This makes it plausible to impute to you the linking belief whose content is that, if it is raining and if it is the case that if it is raining the snow will melt, then the snow will melt. For that reason I find the argument plausible. However, a reasoning disposition does not include a disposition to assert this content. Let us call a belief ‘explicit’ if you are disposed to assert its content. A reasoning disposition is not an explicit belief. At best, having a reasoning disposition licenses us to impute an implicit belief to you. But it could be said that it does not license us to impute to you any linking belief at all, because a reasoning disposition does not include enough of the dispositions that constitute a typical belief. For that reason I think the argument is not conclusive. So it may be that you can do belief reasoning without having even an implicit first-order linking belief. That would not matter to me. It would mean that, not only is a linking belief not essential for belief reasoning, it is not even necessary. In any case, a reasoning disposition is essential for reasoning, but a linking belief is not. If a reasoning disposition is necessarily a linking belief, how can a linking belief fail to be essential for reasoning, given that a reasoning disposition is essential? Because ‘is’ here denotes predication rather than identity. To say that a reasoning disposition is necessarily a linking belief is to say that anything that has the property of being a reasoning disposition necessarily also has the property of being a linking belief. The presence of something that has the property of being a reasoning disposition contributes to making a process reasoning, but the fact that this thing also has the property of being a linking belief does not. Similarly, having the property of being a human being is essential to having human rights, and anything that has the property of being a human being necessarily has the property of having weight. But having weight is not essential to having human rights. It does not contribute to making it the case that something has human rights. My account of linking beliefs brings in its train a useful benefit. Whether or not a reasoning disposition amounts to a linking belief, it is plainly not a premise attitude in the reasoning. Its role in reasoning is quite different; it is a disposition to reason from your premise attitudes in the way you do. Even if a reasoning disposition is implicitly a linking belief, the content of this belief is not a premise. This is fortunate, because if the content of a first-order linking belief were a premise, we would face an awkward regress. Suppose that, if you are to reason from premises to a conclusion, you must have a linking belief whose content is that the premises imply the conclusion. Suppose also that this content must be a premise in the reasoning. Then, when you reason from two premises to a conclusion, as you do in the snow example, there must be a further premise, which is the content of a linking belief. So you actually reason from three premises, not two. But then, you must have a more complicated linking belief whose content is that all three of these premises imply the conclusion, and this content too would be a premise. So you reason from four premises. Indeed, you must have an even more complicated linking
belief whose content is a premise. So you reason from five premises. And so on. You could not reason without an infinite hierarchy of premises. This problem of regress is well known from Lewis Carroll’s ‘What the tortoise said to Achilles’. It does not arise from the mere existence of a linking belief; it arises only if a linking belief is necessarily a premise belief. Moreover, it does not rule out your having a linking belief that is a premise belief. In the snow example, you may believe explicitly that if it is raining and if it is the case that if it is raining the snow will melt then the snow will melt, and this could be a premise. All we know is that, as we work up the hierarchy to more and more complicated linking beliefs, we must come to one that is not a premise belief. That stops the regress. If an explicit linking belief were necessary for belief reasoning, it would be difficult to explain how it differs from a premise belief, and the regress would be a problem. But an implicit linking belief of the sort I have described is plainly not a premise belief, so there is no problem of regress.
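The structure of the regress just described can be made explicit (my notation): with premises p₁ and p₂ and conclusion t, treating each linking belief’s content as a further premise would require

\[
l_1 = (p_1 \wedge p_2) \rightarrow t,\qquad
l_2 = (p_1 \wedge p_2 \wedge l_1) \rightarrow t,\qquad
l_3 = (p_1 \wedge p_2 \wedge l_1 \wedge l_2) \rightarrow t,\ \ldots
\]

and so on without end. The hierarchy closes only at a linking belief that is not a premise belief, which is just what the implicit linking belief provides.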
5. Intention Reasoning
Now I turn to reasoning of other kinds than belief reasoning. Some reasoning concludes in an intention. I call this ‘intention reasoning’; it is a sort of practical reasoning. Instrumental reasoning is a specific sort of intention reasoning. Here is an example of instrumental reasoning. You intend to raise money for famine relief and believe that running a sponsored marathon is the best means of doing so. You reason from these two premise attitudes to a conclusion attitude, which is the intention to run a sponsored marathon. If this reasoning is an act, it meets most of the conditions for active reasoning I set out in Section 2. It is a causal mental process that starts with some premise attitudes and arrives at a conclusion attitude. Because it is an act, it is a conscious process involving conscious attitudes. But it does not exactly meet the condition that you operate on the contents of your attitudes, following a rule. Now that we come to reasoning with other attitudes besides beliefs, this condition needs to be generalized. This is because attitudes of different sorts can have the same content. The content of your intention to raise money for famine relief is the proposition that you will raise money for famine relief. If instead you merely believed that you would raise money for famine relief, the content of your belief would be the same proposition that you will raise money for famine relief. In reasoning, the nature of your attitudes as well as their contents makes a difference. In the example, your reasoning brings you to intend to run a sponsored marathon only because you intend to raise money for famine relief. If you had merely believed you would raise money for famine relief, you would not have reasoned your way to an intention to run a sponsored marathon. So your reasoning must register the kinds of the attitudes you reason with, as well as their contents. I recognize this in my account of reasoning by adopting the notion of the marked content of an attitude. The marked content of an attitude is a pair,
consisting of the attitude’s content, which is a proposition, together with the kind of attitude it is. In the example, the marked contents of your premise attitudes are the pairs <you raise money for famine relief; intention> and <running a sponsored marathon is the best means of raising money for famine relief; belief>. The marked content of your conclusion attitude is the pair <you run a sponsored marathon; intention>. For clarity, I shall sometimes use the term ‘bare content’ for the proposition that is the content of an attitude. In the example, the bare contents of your attitudes are, respectively, that you raise money for famine relief, that running a sponsored marathon is the best means of raising money for famine relief, and that you run a sponsored marathon. In Section 3 I presented an essential condition for belief reasoning, which is that you operate on the contents of your attitudes, following a rule. Now I can extend this condition to reasoning in general. The extended condition is that you operate on the marked contents of your attitudes, following a rule. In the example, the rule you follow might be the rule of deriving <you run a sponsored marathon; intention> from <you raise money for famine relief; intention> and <running a sponsored marathon is the best means of raising money for famine relief; belief>.
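Abstracting from the example, the rule can be displayed as a schema over marked contents (the pair notation follows the text; the general schema is my extrapolation):

\[
\frac{\langle p;\ \text{intention}\rangle \qquad \langle m \text{ is the best means to } p;\ \text{belief}\rangle}{\langle m;\ \text{intention}\rangle}
\]

where p is the end intended and m the means.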
6. An Implicit First-order Linking Belief?
When you do intention reasoning, to follow a rule is to manifest a particular complex disposition, just as it is when you do belief reasoning. I continue to call it a reasoning disposition. It has two components. One is a disposition to behave in a particular way. The other is a disposition for your behaviour to seem right to you. In the example, you are disposed, first, to form the intention of running a sponsored marathon on the basis of your intention to raise money for famine relief and your belief that running a sponsored marathon is the best means of doing so. Second, you are disposed to see this behaviour as right. In Section 4, I presented an argument intended to show that the reasoning disposition you manifest in belief reasoning is an implicit first-order linking belief. Specifically, it is the belief that the bare contents of the premise attitudes imply the bare content of the conclusion attitude. More briefly: the premises imply the conclusion. Does this argument extend to intention reasoning? Could it be that a reasoning disposition manifested in intention reasoning is also an implicit first-order linking belief? The answer is ‘No’. There is nothing that could be the content of this first-order linking belief. First, the content could not be that the bare contents of the premise attitudes imply the bare content of the conclusion attitude, as it is in belief reasoning. Take the example again. The bare contents of its premise attitudes are that you raise money for famine relief, which is the content of an intention, and that running a sponsored marathon is the best means of raising money for famine relief, which is the content of a belief. The bare content of the conclusion attitude, which is an intention, is that you run a sponsored marathon. The content of the linking belief would be that,
if you raise money for famine relief, and if running a sponsored marathon is the best means of raising money for famine relief, then you run a sponsored marathon. But you might not have this belief—at least not until you have completed your intention reasoning. You might doubt that you will take the best means to your end, or you might simply not have formed any belief about it. The absence of this belief would not stop you forming, through reasoning, the intention to run a sponsored marathon. Instead, could the content of the first-order linking belief link the marked contents of your attitudes rather than their bare contents? In the example, the content of this linking belief would be that, if <you raise money for famine relief; intention> and if <running a sponsored marathon is the best means of raising money for famine relief; belief>, then <you run a sponsored marathon; intention>. But this is just nonsense. Marked contents are not propositions, and they cannot be embedded under propositional connectives in sentences. Trying to embed them is a dead end; it leads to the well-known Frege-Geach problem.⁸ I conclude that, although belief reasoning plausibly requires a first-order linking belief, intention reasoning does not. A first-order linking belief is not even possible, since there is nothing that could be its content. In Rationality Through Reasoning I suggested that having a first-order linking belief is a necessary condition for reasoning in general, but that was a mistake.⁹
⁸ See Broome (2013), pp. 260–1 and Geach (1960 and 1965). ⁹ Broome (2013), p. 229. Nadeem Hussain reveals the mistake in his ‘Practical reasoning and linking beliefs’ (2015).
7. An Implicit Second-order Linking Belief?
Could it be that a different sort of linking belief is a necessary condition for intention reasoning? Could a second-order linking belief be necessary? A second-order linking belief directly links the conclusion attitude to the premise attitudes, rather than linking the content of the conclusion attitude to the content of the premise attitudes. It is the belief that the premise attitudes support the conclusion attitude in some way. The support will have to be in some way normative or rational. For example, a second-order linking belief might be the belief that it is permissible for you to have the conclusion attitude on the basis of the premise attitudes. Or it might be the belief that rationality requires of you that, if you have the premise attitudes, you have the conclusion attitude. In the example, a second-order linking belief might be the belief that rationality requires of you that, if you intend to raise money for famine relief and you believe that running a sponsored marathon is the best means of raising money for famine relief, then you intend to run a sponsored marathon. Could such a belief be a necessary condition for reasoning? At first the answer ‘Yes’ may seem plausible. Why would you acquire the conclusion attitude by reasoning from the premise attitudes if you did not think the premise
attitudes support the conclusion attitude in some way? So it seems we are entitled to impute a second-order linking belief to you if you reason. But actually this answer loses its plausibility when we probe further. A second-order linking belief is sophisticated; its contents involve sophisticated concepts. The content of the belief I gave as an example involves the concepts of rationality, of requirement, of belief and of intention. A child can do instrumental reasoning before she has concepts like those, which means that second-order beliefs are still beyond her capacity. True, we sometimes impute an implicit belief to a person even if she does not have the concepts that would allow her to express it explicitly.¹⁰ But it is not plausible to do so here. In reasoning, you think about the contents of your attitudes; you do not think about the attitudes themselves. So we have no reason to impute to you any belief about the attitudes themselves. I am not saying that you cannot have a second-order linking belief. If you are a sophisticated reasoner, you have the concepts that are needed. I am saying that we are not plausibly entitled to impute a second-order linking belief to you just because you reason, because unsophisticated people can reason. So a second-order linking belief is not plausibly a necessary condition for reasoning. Does this claim conflict with the claim I made in Section 5 that in reasoning you must register the nature of the attitudes you reason with? No. It is true that you must in some sense be aware of the nature of the attitudes you reason with. But your awareness need not constitute even an implicit belief that you have these attitudes, and it does not require you to have the concept of a belief or the concept of an intention. Believing you will raise money for famine relief and intending to raise money for famine relief are quite different attitudes. Some philosophers think that an intention is a sort of belief, but even they do not think that intending to raise money for famine relief is the same attitude as simply believing that you will raise money for famine relief.¹¹ Other philosophers, including me, think that believing you will raise money for famine relief and intending to raise money for famine relief have one thing in common: they share the same bare content, which is a proposition. But this does not mean that, in being aware of your attitude, you are aware of the bare content and separately aware of the nature of your attitude to it. You simply have a believing attitude or, quite differently, an intending attitude towards the bare content. You view the bare content in a believing way or an intending way. Because the attitudes are quite different, in having a belief or an intention, you could not fail to register the sort of attitude it is. You could not mistake one for the other. This is the sense in which you are aware of the nature of the attitude. It does not require you to have the concept of the attitude. The nature of your attitude can register in your reasoning without your believing you have the attitude. A child can do it. ¹⁰ My thanks to Krisztina Orban for raising this point. ¹¹ For example, Velleman (1989), p. 109.
For comparison: a first-order linking belief is less sophisticated. A first-order linking belief in the snow example is the belief that if it is raining and if it is the case that if it is raining the snow will melt, then the snow will melt. The concepts involved in it are, first, the concepts involved in the propositions you reason about, such as the concepts of snow and rain, and, second, the concept that is expressed by ‘if . . . then’. Call this latter the ‘consequence concept’. In order to reason you have to understand the propositions you are reasoning about, so you must have all the concepts in the first group. Also you must have the consequence concept in order to reason; if you do not have the concept of one thing’s following from another we could not understand you as reasoning from premises to a conclusion. So you must have the concepts involved in a first-order linking belief. It is therefore plausible to impute a first-order linking belief to you when you do belief reasoning. But I have explained that for intention reasoning, there is no first-order linking belief that could be imputed to you. And it is not plausible to impute a second-order linking belief to you just because you do intention reasoning. I conclude that no linking belief is necessary for intention reasoning. My argument is that there is nothing that can plausibly be the content of this necessary linking belief. This argument might leave you still uneasy. If a person is to reason from premise attitudes to a conclusion attitude, surely she must have some sort of belief that she should do so. Surely, at least, she must believe the premise attitudes normatively support the conclusion attitude in some way. If we cannot identify a content for this belief, perhaps we should try harder. Remember I do not deny that a reasoner may have a second-order linking belief. If she is sufficiently sophisticated, she may well believe the premise attitudes support the conclusion attitude. But I deny that this is a necessary condition for her to reason. I deny it because you can follow a rule without believing that doing so has any normative merit. Take this trivial example. Occasionally, as I walk down a street, I find myself following the child’s rule of not treading on the lines. I am genuinely following a rule. I am guided by the standard of correctness the rule sets up; when necessary I slightly adjust my pace in order to comply with it. But even as I do this, I do not believe I should or that I have any reason to or that my doing so has any normative merit. Indeed, I sometimes think the opposite: I should not be so childish. In reasoning from premise attitudes to a conclusion attitude, you follow a rule, and you can follow this rule without believing that your doing so has any normative merit. You might still be uneasy. You might ask how I could be guided by a rule unless I see some reason to comply with it. The answer is that the guidance is intentional rather than normative. I may intend to do something without believing that I have any reason to do it, and even so my intention guides me to do it. The function of an intention is to guide you to do what you intend. When I follow a rule I intend to comply with it, even if my intention is fleeting and not deliberate.
Your uneasiness may not yet be quelled. But I cannot pursue this worry any further here; I have done so in another paper.¹² I believe my argument has been sufficient. It cannot be necessary for you to have a linking belief when you reason, because there is nothing that can be the content of a linking belief that you necessarily have.
8. Conclusion A first-order linking belief may be necessary for belief reasoning. In other words, Boghossian’s taking condition may be true for this special sort of reasoning. However, in this chapter I have shown that no first-order linking belief is necessary (or even possible) for intention reasoning. I have also shown it is implausible that a second-order linking belief is necessary for intention reasoning. I think my arguments would generalize to other sorts of reasoning too. An essential condition for reasoning is that you operate on the marked contents of your attitudes, following a rule. That is to say: if a mental process is to be reasoning, it must satisfy this condition, and this condition contributes to making the process reasoning. In the special case of belief reasoning, this essential condition may entail that you have a first-order linking belief. But even in this case, the linking belief is not essential for reasoning; it does not contribute to making a process reasoning. In general, a linking belief is not essential for reasoning. This conclusion is reinforced by the fact that no linking belief is even necessary for sorts of reasoning other than belief reasoning.
References
Boghossian, Paul, ‘Epistemic rules’, Journal of Philosophy, 105 (2008), pp. 472–500.
Boghossian, Paul, ‘What is inference?’, Philosophical Studies, 169 (2014), pp. 1–18.
Broome, John, Rationality Through Reasoning, Wiley–Blackwell, 2013.
Broome, John, ‘Normativity in reasoning’, Pacific Philosophical Quarterly, 95 (2014), pp. 622–33.
Carroll, Lewis, ‘What the tortoise said to Achilles’, Mind, 4 (1895), pp. 278–80.
Fine, Kit, ‘Essence and modality’, Philosophical Perspectives, 8 (1994), pp. 1–16.
Geach, Peter, ‘Ascriptivism’, Philosophical Review, 69 (1960), pp. 221–5.
Geach, Peter, ‘Assertion’, Philosophical Review, 74 (1965), pp. 449–65.
Hussain, Nadeem, ‘Practical reasoning and linking beliefs’, Philosophy and Phenomenological Research, 91 (2015), pp. 211–19.
Velleman, David, Practical Reflection, Princeton University Press, 1989.
Wittgenstein, Ludwig, Philosophical Investigations, Blackwell, 1968.
¹² Broome (2014).
4 Attitudes in Active Reasoning Julia Staffel
1. Introduction Active reasoning is the kind of reasoning that we do deliberately and consciously. It is often contrasted with passive reasoning, which is not subject to conscious awareness or guidance.¹ Active reasoning has attracted particular philosophical interest, because it is thought to be subject to prescriptive epistemic norms in virtue of its purposeful nature. In characterizing what active reasoning is, and what norms it is governed by, the question arises of which attitudes can be involved in this kind of reasoning. In this chapter, I am specifically interested in theoretical reasoning, and the question of which kinds of beliefs can participate in it.² Epistemologists standardly distinguish between outright beliefs and degrees of belief. Outright beliefs are coarse-grained attitudes: I can believe a claim, disbelieve it, or suspend judgment about it. Degrees of belief are much more fine-grained. My degree of confidence that some claim is true can range anywhere from being certain that it is false to being certain that it is true, with a range of intermediate degrees of confidence in-between. In the literature on reasoning, we find different answers to the question of which types of belief we can reason with. Many authors take outright beliefs to be the attitudes we reason with. Others assume that we can reason with both outright beliefs and degrees of belief. A few think that we reason only with degrees of belief. These three positions are at least prima facie incompatible. But surprisingly, hardly anyone gives an explicit defense of their position.
I would like to thank Brian Talbot, Jonathan Weisberg, Sinan Dogramaci, Brendan Balcerak Jackson, Magdalena Balcerak Jackson, Sarah Moss, Lizzie Schechter, Kathryn Lindeman, and audiences at Princeton and the Boulder Cognitive Values Conference for helpful comments and discussion. ¹ Sometimes, the labels “personal” and “subpersonal” reasoning are used in the literature. I avoid these terms, having been convinced by Drayson (2012) that they should instead be reserved for types of explanation. ² My focus is on which kinds of beliefs we can reason with. We can of course also reason with attitudes that aren’t beliefs, such as suppositions and acceptances, among others.
Finding out which types of belief we reason with is interesting for at least two reasons. The first reason concerns epistemological methodology. Jeffrey (1970) raises a “Bayesian Challenge” for traditional epistemologists, asking whether there are any reasons why we need to appeal to outright beliefs, instead of theorizing only in terms of degrees of belief. Different philosophers have proposed answers (e.g. Buchak 2014, Kaplan 1996, Lyon 2014), but the role of attitudes in active reasoning has not been studied carefully, and might provide additional insights. The second reason is that answering this question lays an important foundation for projects in normative epistemology. Philosophers are keenly interested in the question of how we should reason, and what the norms of rationality are. Normative theories in epistemology always come with built-in assumptions about what types of beliefs we have. Yet, we can’t be sure what our normative theories should look like unless we know what attitudes they should be about. Formulating norms about particular attitudes first and worrying later about whether these attitudes play a role in our mental processes seems like doing things backwards. For example, if we have good reason to think that degrees of belief can play a role in reasoning, then normative theories of what constitutes good reasoning that only mention outright beliefs can’t possibly give us the full picture. But it is far from clear that normative theories that are formulated to cover only graded or only outright beliefs can simply be supplemented to account for other types of belief. Pettigrew (2016) in fact argues that adding outright beliefs to his view would create problems. Adding graded beliefs to a knowledge-first picture requires substantial changes to one’s theory of knowledge (Moss 2013, 2016). Hence, we need to answer the question of which attitudes we can reason with in order to avoid a mismatch between our descriptive and normative theories in epistemology. In this chapter I approach the question of what kinds of beliefs can participate in reasoning by using the following method: I take the default position to be maximally permissive—that both graded and outright beliefs can participate in reasoning. I then identify some features of active reasoning that appear at first glance to favor a more restrictive position about which types of belief we can reason with. I evaluate whether the arguments based on these features hold up, and argue that they don’t. From the failure of these arguments, we can draw at least two conclusions. First, proponents of the more restrictive views (that we can reason only with outright, or only with graded beliefs) cannot support their positions by pointing to specific characteristics of active reasoning. Second, the failure of these arguments provides evidence that a commonly made distinction between degrees of belief with non-probabilistic contents and outright beliefs with probabilistic contents does not track a substantive difference between mental states. The chapter has four main sections. In Section 2, I will explain what active reasoning is, and identify some of its most important features. In Section 3, I will give a more detailed overview of the positions people currently hold in the literature about which attitudes we can reason with. In Section 4, I will examine two non-starters
for giving an answer to our question: (i) using psychology or introspection, and (ii) drawing on the debate about credal reductivism. In Section 5, I will consider four features of active reasoning, and examine whether and how they can help us answer our question.
2. Active Reasoning Reasoning, broadly understood as a mental process by which we form or revise attitudes on the basis of other attitudes, is not a unified phenomenon. In the philosophical and psychological literature on reasoning, we frequently find a distinction between two different types of reasoning, which are labeled active reasoning and passive reasoning. Active reasoning is also sometimes called reflective reasoning or System 2 reasoning, and passive reasoning automatic reasoning or System 1 reasoning.³ What exactly this distinction amounts to is the subject of much discussion. For the purposes of this chapter, it will suffice to point to some paradigmatic examples. First, suppose you’re focused on some mundane task, such as cooking or driving. Suddenly, a thought pops into your head: you won’t be able to see your mother for her birthday, because you’re supposed to give a conference presentation on the same day. You agreed to give the presentation a while ago, but the scheduling conflict hadn’t occurred to you until now. Contrast this case with a second scenario: you have recently agreed to give a conference presentation, and now you’re wondering whether it presents a scheduling conflict. You mentally run through your family members’ birthdays, and thereby discover that your presentation is scheduled on your mother’s birthday. You conclude that because of this conflict, you won’t be able to see her. In each version of the example, you reach the same conclusion, namely that you’ll miss your mother’s birthday because of the scheduling conflict. Moreover, you reach this conclusion based on the same information about the date of the conference and your mother’s birthday (and possibly some additional background information). However, the way you arrived at it was very different. In the first version of the example, the question of whether your upcoming conference presentation would conflict with other obligations was not at the forefront of your mind. Rather, the conclusion spontaneously occurred to you, and you were unaware of the reasoning processes that produced it. By contrast, in the second version of the case, you were wondering about whether there would be scheduling conflicts with your conference presentation, and you went about answering this question by actively employing a particular reasoning strategy, namely comparing family members’ birthdays to the conference dates. You reach the conclusion about the scheduling conflict as a result of your purposeful deliberations, rather than surprisingly and spontaneously. ³ Discussions of dual-processing accounts can be found in Evans (2008), Frankish (2010), and Kahneman (2011).
The first case is a clear example of passive reasoning, whereas the second case is an example of active reasoning. Generally speaking, passive reasoning processes are automatic and largely unconscious, whereas active reasoning processes are conscious and subject to the reasoner’s direction (Evans 2008, Broome 2013, Boghossian 2014). Passive reasoning processes are also sometimes characterized as being mandatory, since the reasoner does not have immediate control over initiating or inhibiting them. Active reasoning processes constitutively involve performing mental actions with the aim of solving a particular problem. They take up the reasoner’s working memory and attention, and require that the reasoner focus her attention and monitor her reasoning strategy (Frankish 2009b). The claim that active reasoning is conscious means that the reasoner is conscious of her deliberation and its subject matter, but that doesn’t require being conscious of the rule applied to generate the conclusion, and it also doesn’t require being conscious of every background assumption used in the reasoning. While the reasoner must be aware that she is applying some rule of reasoning or other (rather than, for instance, freely associating), being conscious of the nature of the rule is consistent with, but not required for, active reasoning. The basic observation that reasoning processes can differ in various respects such as their automaticity and consciousness seems very widely accepted. But it is controversial how to account for this observation in our theories of the mind. Dual-processing theories try to establish a binary distinction between two different kinds of reasoning, but some philosophers and psychologists are skeptical about the possibility of this kind of categorization (e.g. Mugg 2016, Kruglanski and Gigerenzer 2011). These authors point out that many of the opposing attributes that are supposed to characterize the two kinds of reasoning don’t line up neatly into two categories. Moreover, many of the relevant attribute pairs, such as “fast/slow” and “not consciously monitored/consciously monitored” form a continuum rather than two distinct categories, which makes it hard to argue that there are clear distinctions between different kinds of reasoning. Yet, we can examine examples of active reasoning, and investigate which attitudes can participate in it without having to take a definitive stand on the question of whether different ways of reasoning can be neatly categorized or not. Because of the widespread agreement about the data that needs to be accounted for, the results of this discussion will be instructive regardless of how exactly the debate about categorizing reasoning is resolved.
3. Who Thinks What? An attitude participates in reasoning by playing a role in the transition from the premise-attitude(s) to the conclusion-attitude(s). Participating attitudes can themselves be premise-attitudes or conclusion-attitudes, or they can be some kind of mediating or background attitudes that help generate the conclusion-attitude(s) based on the premise-attitude(s). For example, when I say that my belief about the date of my conference presentation can participate in my reasoning, I mean that I can
use it as a starting point for further inferences, that I can form it as a conclusion of a reasoning process, and that it could also play some kind of mediating or background role in reasoning. Another point needs clarification: I understand the positions in the literature, and the discussion about which attitudes can participate in reasoning, to assume a realist view about mental states. This view stands in contrast with an instrumentalist view on which we are only interested in modeling reasoning. On the instrumentalist view, the resulting models carry no commitments about whether people actually have the attitudes that are postulated by the model in their minds (Lyon 2014). By contrast, a realist view is committed to the existence of the mental states it invokes, or at least committed to the view that the mental states it invokes are indispensable in our best theory of how we reason. It is easy to see why this distinction matters: there is nothing wrong with having two incompatible models of the same phenomenon, as long as each model serves a particular purpose. However, there is a problem with having two competing, incompatible theories about what the world is like. Assuming that the world is consistent, only one of these theories can be true. For instance, we might have two different maps of the London subway, whose representations of the network don’t agree on the relative distance between stations. This is not a problem if the maps are not intended to accurately capture these relative distances. However, if both maps claim to accurately depict the relative distances between stations, we know that they can’t both be correct. Similarly, when I ask which attitudes we can reason with, I don’t mean to merely ask which attitudes can play a role in useful models of human reasoning. Rather, I mean to ask: which attitudes do we actually reason with? The contenders for types of beliefs that can participate in theoretical, active reasoning are degrees of belief and outright beliefs. I mean to be neutral here on how degrees of belief should be formally represented in our theories. There is a debate in the literature on whether it is best to model them as having precise numerical values, or as intervals, or in some other way. By degree of belief, I simply mean the belief-y, graded attitude that all these models are designed to capture. I will sometimes, for ease of exposition, use the convention of representing someone’s degree of belief as a number between zero and one, but my arguments don’t depend on this choice of representation. One further noteworthy feature of graded, as opposed to outright, beliefs is that they are usually not taken to be candidates for knowledge, though some philosophers have recently argued that they can constitute knowledge (Moss 2013, 2016, Konek 2016). One distinction that is important to highlight is the difference between a degree of belief, and an outright belief that represents uncertainty. When agents have outright beliefs, those beliefs can have probabilistic contents. For example, I can believe that I will probably get a raise next year, or I can believe that it’s unlikely that there will be snow on Thanksgiving. However, neither of these beliefs is, at least at face value, the same attitude as a degree of belief in a non-probabilistic claim, such as a high
degree of belief that I will get a raise or a low degree of belief that it will snow on Thanksgiving. The former attitude is a binary belief towards a content that represents uncertainty, whereas the latter attitude is a graded belief in a content that doesn’t represent uncertainty. The default assumption in the literature, which I accept for the purposes of this chapter, is that there is a substantive difference between these two kinds of attitudes. Yet, some authors have argued that these two ways of characterizing doxastic attitudes that represent uncertainty are actually intertranslatable, and philosophers may choose whichever account fits most elegantly into their theories (Staffel 2016, Moss 2016, Schiffer 2003, ch. 5, Lance 1995). I will explain later how my arguments can shed light on this debate. Next, we will survey the positions held in the literature about which types of beliefs we can actively reason with. It’s not always perfectly obvious who holds which position, but I have tried to be charitable in classifying people’s views below.
3.1. Active reasoning involves only outright beliefs The position that active reasoning can involve outright, but not graded, beliefs appears to be the most popular in the philosophical literature, and it seems dominant among philosophers whose work focuses explicitly on the nature of active reasoning. Prominent holders of this view are Boghossian (2014), Broome (2013), Frankish (2009a, 2009b), Grice (2001), Harman (1986), and Hawthorne and Stanley (2008), and more generally proponents of knowledge-norms of reasoning. Some of these authors endorse the outright-belief-only view explicitly, whereas others just presuppose it. It is most explicitly advocated by Keith Frankish (2009a, 2009b), who thinks that degrees of belief belong exclusively to the domain of passive reasoning (which he calls subpersonal reasoning), whereas active conscious reasoning involves just outright beliefs. He says: “On this view flat-out beliefs and desires are premising policies, realized in non-conscious partial attitudes and effective in virtue of them. They form what amounts to a distinct level of mentality, which is conscious, reflective, and under personal control” (Frankish 2009a, p. 90). Frankish thus offers us a very clear statement of the view that active reasoning only involves outright beliefs. These outright beliefs can sometimes have probabilistic contents, but are still distinct from degrees of belief on his view. A less committed view is expressed by Broome (2013, sections 9.6, 15.2), who excludes reasoning with degrees of belief from his discussion. He doesn’t think reasoning with degrees of belief is impossible, but he is unsure how to integrate it into his view. Proponents of knowledge-norms on reasoning, such as Hawthorne and Stanley (2008), seem to be committed to the view that active reasoning, at least when done well, must involve outright, rather than graded, beliefs. They think that it is impermissible to rely on attitudes in reasoning that don’t constitute knowledge. They share the standard assumption that what one knows is a subset of what one has outright beliefs in, and they therefore reject the possibility of norm-conforming reasoning
with degrees of belief. Defenders of graded-belief knowledge of course disagree with this line of argument (Moss 2013, 2016, Konek 2016). Many more discussions of reasoning can be found in the literature that simply presuppose the idea that we reason only with outright beliefs. This is not surprising, given that the notion of outright belief is the standard way to conceive of belief in most areas of epistemology. Moreover, the traditional association between reasoning and deductive logic invites this view of belief. Still, we can find some dissenters.
3.2. Active reasoning involves only graded beliefs On the opposite end of the spectrum, we have the view that we only reason with graded, but not outright, beliefs. A prominent philosopher who holds the view that active reasoning involves only degrees of belief is Richard Jeffrey. In his article “Dracula Meets Wolfman: Acceptance vs. Partial Belief” (1970), Jeffrey discusses the question of whether the attitude of outright belief (or outright acceptance) can play a role in rational deliberation, and in epistemology more generally. He thinks it can’t: we should do epistemology just in terms of partial belief. In the following passage, he explicitly mentions deliberation: Perhaps I am free to deliberate or not, but when I elect to deliberate I engage in an activity which, to the extent that it is successful, will pretty much force certain partial beliefs upon me, even though I may not be able to quote explicit rules that I am following. (p. 180)
It is pretty clear here that Jeffrey is talking about the kind of active reasoning I am interested in. The context of the paper suggests that he is mostly concerned with practical reasoning, but his overall argument makes it clear that he would say the same thing about theoretical reasoning. More recently, the graded-belief-only view has been defended by Richard Pettigrew (2016). On his view, saying that someone outright believes something is just a manner of speaking, which is appropriate when someone has high-enough confidence in a claim. Outright beliefs are not sui generis mental states over and above someone’s degrees of confidence, according to Pettigrew. Several other philosophers defend versions of degree of belief-only views, but their positions are harder to categorize. Wedgwood (2012), Clarke (2013), Tang (2015), and Greco (2015, 2016) defend versions of the idea that we sometimes simplify our degrees of confidence in order to make reasoning tasks more manageable. This kind of simplification involves taking things for granted of which we’re not technically completely certain. For example, in planning a trip, I might take it for granted that there will be taxis available at the airport, even though I am not completely certain that this will be the case. This simplification is characterized as the process of rounding one’s degrees of belief up to 1 or down to 0. Depending on the context, a reasoner can either draw on her proper degrees of belief or the simplified versions in reasoning. While these views initially seem like degree of belief-only views of active reasoning, it might turn out to be more appropriate to classify them differently.
Rounded degrees of belief, which dispose reasoners to treat claims as true (or false) in reasoning, may turn out to be the same attitudes that we usually call outright beliefs. If this is correct, then these views should not be seen as degree of belief-only views, but as fitting into the third category, according to which we can reason with graded and outright beliefs.
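To fix ideas, here is one minimal way the rounding proposal might be formalized; the function $cr^{*}$ and the fixed cutoff $\varepsilon$ are my illustrative assumptions, not notation drawn from the authors just cited (who typically allow the cutoff to vary with context):
$$
cr^{*}(p) =
\begin{cases}
1 & \text{if } cr(p) \geq 1 - \varepsilon, \\
0 & \text{if } cr(p) \leq \varepsilon, \\
cr(p) & \text{otherwise.}
\end{cases}
$$
With $\varepsilon = 0.05$, for instance, a credence of $0.97$ that taxis will be available at the airport rounds up to $1$, and the planner simply treats the claim as true. Whether the attitude represented by $cr^{*}(p) = 1$ just is the attitude we ordinarily call outright belief is the classificatory question raised above.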
3.3. Active reasoning involves graded and outright beliefs Lastly, there is the view that active reasoning can involve both graded and outright beliefs. This view appears to be particularly popular with epistemologists who are interested in formal methods as well as the psychology of reasoning, such as Leitgeb (2016), Weisberg (2013, 2016), Buchak (2014), Ross and Schroeder (2014), Lin (2013), and Lin and Kelly (2012). While not all of these authors talk explicitly about active reasoning in their work, they all share the assumption that both graded and outright beliefs are psychologically real, and can enter into cognitive processes such as active reasoning.
4. Two Non-starters In this section, I will consider and dismiss two suggestions for how one might determine which attitudes can be involved in reasoning. The first involves appealing to introspection, and the second involves looking towards reductive views of belief. If we are wondering which attitudes can serve as premises and conclusions of active conscious reasoning, why can’t we just introspect, and determine which attitudes we reason with? Or maybe we just need to conduct some psychological experiments to find the answer? Unfortunately, the task is not that easy, because our mental life doesn’t come handily pre-labeled. How we classify our own mental states depends largely on the conceptual framework we’re familiar with. As Dennett nicely puts it: What we are fooling ourselves about is the idea of just “looking and seeing”. I suspect that when we claim to be just using our powers of inner observation, we are always actually engaging in a sort of impromptu theorizing—and we are remarkably gullible theorizers, precisely because there is so little to “observe” and so much to pontificate about without fear of contradiction. When we introspect, communally, we are really very much in the position of the legendary blind men examining different parts of the elephant. (Dennett 1991, p. 68)
Of course, even if we deny the claim that we can investigate the makeup of our mental lives and our attitudes by simply introspecting, we should not completely dismiss the value of introspective evidence. It just doesn’t by itself help us decide between different ways of conceptualizing our attitudes. The same problem arises with regard to the results of psychological experiments. The question of which mental states can participate in reasoning processes is a question about what our best theory of conscious, active reasoning is, and which conception of the attitudes
we can reason with fits best with it. This question cannot be answered by just gathering introspective or experimental data. The question is precisely what our best account of the data is, and for this we need to engage in philosophical theorizing about how to best characterize our beliefs. Another route towards a quick answer to the question of which attitudes we can reason with might come from the debate about credal reductivism. Advocates of credal reductivism typically defend the view that outright beliefs with non-probabilistic contents, such as Jane’s belief that the store closes at midnight, reduce to degrees of belief in some way. On such a view, Jane’s belief is nothing over and above her high degree of belief that the store closes at midnight (for discussion, see e.g. Sturgeon 2015). Someone who has a reductive view might propose a shortcut to answering our question by way of the following argument. Since all types of belief are at bottom degrees of belief, we can of course reason with degrees of belief. And since outright beliefs are just specific kinds of graded beliefs, we can reason with outright beliefs, but this is not interestingly different from reasoning with the graded beliefs that they reduce to. Hence, we can use outright and graded beliefs in active reasoning, but this distinction does not track any real difference between two different ways of reasoning. This sounds like a tempting quick fix, but I think it is too fast. There are a couple of different versions of the reductive view, and upon closer inspection, this argument doesn’t work on either version. The first version of reductivism is linguistic reductivism (following Lyon 2014). According to this view, we often talk about outright beliefs, but when we do, this is merely a shorthand manner of speaking, and the states of affairs we are talking about really just involve degrees of belief. That means that linguistic reductivism is just the view that we can only reason with degrees of belief, but not with outright beliefs. An advocate of linguistic reductivism must hope that the position that we reason only with degrees of belief turns out to be defensible, but she hasn’t provided us with an argument for this view apart from whatever independent reasons there are to adopt linguistic reductivism. The second version of reductivism is metaphysical reductivism (following Lyon 2014). According to metaphysical reductivism, outright beliefs really exist in people’s minds, but they metaphysically reduce to degrees of belief. A very simple version of this view says that in order to have an outright belief that p, one must just have a degree of belief in p that is above a specific threshold.⁴ A proponent of the quick argument for reasoning with outright and graded beliefs might have this version of reductivism in mind.
⁴ It is somewhat difficult to pin down who holds this view, because metaphysical reductivism is not always clearly separated from linguistic reductivism and normative reductivism (i.e. the view that norms of rational belief reduce to norms on credences). Sturgeon (2015) and Foley (2009) seem to propose versions of metaphysical reductivism.
Metaphysical reductivism is supposed to be different from both linguistic reductivism, because it affirms the psychological reality of outright beliefs, and from the view that outright beliefs are sui generis mental states that we have over and above degrees of belief. Yet, it is not clear that it can be formulated coherently.⁵ One motivation for distinguishing between graded and outright beliefs is the observation that they have different normative and descriptive properties. For example, it is often argued that outright beliefs should obey norms of logical consistency, whereas graded beliefs should obey norms of probabilistic coherence. These two requirements can generate conflicts, depending on the reductive thesis they are combined with. If combined with the simple threshold view, these two claims generate the lottery paradox and the preface paradox, as long as the threshold for belief is lower than 1 (the worked example below makes this concrete). In terms of their descriptive properties, graded and outright beliefs are usually taken to differ as well (assuming we don’t identify outright belief with having a degree of belief of 1). Having an outright belief in p is usually taken to involve a disposition to treat p as true, whereas having a high degree of belief in p does not involve such a disposition. But if this is so, then reasoning with an outright belief is not the same thing as reasoning with a high degree of belief (Ross and Schroeder 2014). If an outright belief in p and a high degree of belief in p differ in their normative and descriptive properties, then the outright belief in p cannot reduce to the graded belief in p in the sense that they are identical. If they were the same mental state, then their properties would have to be the same as well. On a more plausible view, a high degree of belief in p can give rise to an outright belief in p, but they are not the same mental state. Rather, the outright belief in p is generated by an act such as judging that p is settled, for which being sufficiently confident of p may be a precondition. Unfortunately, this version of metaphysical “reductivism” seems hardly reductive at all; rather, it is indistinguishable from the non-reductive view on which we have both graded and outright beliefs. Proponents of the non-reductive view need not deny that a person generally does not adopt an outright belief about a matter unless her degree of belief is suitably structured. If metaphysical reductivism thus collapses into the two-attitude view, then we cannot appeal to any argumentative shortcuts in order to settle the question of which attitudes we reason with. I have ruled out two possible ways of answering the question of which types of beliefs can participate in active reasoning. I have argued that empirical methods such as introspection or psychological experiments will be informative, but cannot deliver an answer all by themselves. I have furthermore argued that appealing to popular reductive views about the relationship between outright and graded beliefs does not give us an easy shortcut to answering our question.
⁵ I am grateful to Jonathan Weisberg for helpful discussion on this point.
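To make the lottery conflict mentioned above concrete, here is a minimal worked example; the lottery size and threshold value are illustrative choices, not figures from the views under discussion. Let the belief threshold be $t = 0.99 < 1$, and consider a fair lottery with $1000$ tickets and exactly one winner. Probabilistically coherent credences give
$$
cr(\text{ticket } i \text{ loses}) = \tfrac{999}{1000} = 0.999 > t \ \text{ for each } i, \qquad cr(\text{some ticket wins}) = 1 > t.
$$
The simple threshold view therefore assigns outright belief to each of the $1000$ propositions “ticket $i$ loses” and also to “some ticket wins.” But this set of $1001$ beliefs is logically inconsistent, so the consistency norm on outright belief is violated even though the underlying credences are perfectly coherent. The same recipe works for any threshold $t < 1$, given a large enough lottery.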
5. Four Features of Active Reasoning I will now turn to what I consider a more promising strategy for finding out what attitudes we can reason with. I will consider various properties that active reasoning is described as having, and I will examine whether those properties are incompatible with reasoning with particular attitudes. In particular, I will look for arguments that fit the following pattern.
Argument schema:
1. Active reasoning has feature X.
2. If active reasoning has feature X, then, for any attitude A, we can reason with A only if A has feature Y.
3. Attitude A doesn’t have feature Y.
4. We can’t reason with attitude A.
I will begin with the most permissive assumption, namely that both graded and outright beliefs can participate in reasoning, and I will use the argument schema together with a list of characteristics in order to examine whether either graded or outright beliefs can be ruled out. Of the different features and properties that active reasoning is said to have, only some are immediately relevant to our question.⁶ Specifically, I will discuss the following: (i) that active reasoning is conscious, (ii) that active reasoning is linked to language, (iii) that active reasoning is an operation on attitude contents, and (iv) that active reasoning requires working memory.
5.1 Conscious awareness The first feature of active reasoning I will consider is that it is conscious. By contrast, passive reasoning is usually described as being unconscious. What exactly does it mean that active reasoning is conscious? One obvious interpretation is that every attitude that plays a role in my reasoning must be conscious, where “conscious” doesn’t just mean “in principle accessible to our consciousness,” but “subject to conscious awareness at the time of deliberation.” But this requirement seems too strong on closer inspection. Episodes of conscious deliberation certainly involve some conscious, occurrent attitudes, but they also heavily rely on background beliefs. For example, suppose I am consciously deliberating whether I need to go to the store to buy milk today. There are a number of different conscious epistemic attitudes that enter my deliberations, for example regarding how much milk I have, and how much I need today and tomorrow. But for my reasoning to go through, I need to rely on ⁶ Evans (2008) provides a helpful list of all the characteristics that have been ascribed to System 2 reasoning in the literature: higher cognition, controlled, rational, systematic, explicit, analytic, rule based, conscious, reflective, higher order, high effort, slow, low capacity, inhibitory, evolutionarily recent, individual rationality, uniquely human, linked to language, fluid intelligence, domain general, abstract, logical, sequential, egalitarian, heritable, linked to general intelligence, limited by working memory capacity.
background assumptions that I don’t consciously entertain, for example that the store has milk available, that the milk I have won’t suddenly turn sour, and so on. Hence, when we say that active conscious reasoning involves conscious attitudes, we mean that we are consciously aware of some of the most central attitudes involved in our deliberation, but not of all of them. We thus arrive at the following constraint: an attitude cannot serve as a non-background premise or conclusion of an active reasoning process unless that attitude is conscious and occurrent. Thus, we can partially fill in the argument schema as follows:
1. Active reasoning has conscious attitudes as its premises and conclusions.
2. If active reasoning has conscious attitudes as its premises and conclusions, then, for any attitude A, we can reason with A only if A is conscious.
3. Attitude A isn’t conscious.
4. We can’t reason with attitude A.
In order to complete the argument, we have to replace the variable A in premise 3. If some attitude can’t be consciously occurrent, then we can conclude that it cannot be a premise or conclusion of active reasoning. That outright beliefs can be consciously occurrent seems uncontroversial, even if it is controversial what makes them conscious.⁷ One popular view claims that our outright beliefs are transparent. On this view, to have a consciously occurrent outright belief that p is just to be aware that the world is such that p is true. No awareness of anything distinctly mental is required (Valaris 2014). Other accounts claim instead that, for a belief that p to be consciously occurrent, we must judge that p, or assent to p (e.g. Mellor 1978). Yet another view takes consciously occurrent beliefs to be conscious in virtue of being in some sense perceived by the mind (see e.g. Lycan 2004). What these and related views have in common is that they take the claim that we can have conscious, occurrent outright beliefs as a datum to be explained, rather than a thesis in need of argument. Thus, there seems hardly any question that outright beliefs can serve as premises and conclusions of active reasoning. On some of the aforementioned views, it is not easy to see how we could have a conscious, occurrent degree of belief in p. The transparency and the assent views of what makes beliefs conscious seem especially hard to apply to graded beliefs. If I am somewhat, but not fully, confident that p, then I neither assent to p, nor am I straightforwardly aware of the world as being such that p is true.⁸ Yet, as I mentioned before, some philosophers argue that there is no substantive difference between construing degrees of belief as graded attitudes toward non-probabilistic
⁷ Not everyone agrees with this, of course. For example, Carruthers (2016) argues that none of our beliefs are conscious. ⁸ For example, Valaris admits in a footnote: “To the extent that graded beliefs (or credences) are not simply beliefs about probabilities, however, the present account is not meant to apply to them” (Valaris 2014, fn. 6).
contents, and construing them as ungraded attitudes toward probabilistic contents. If we conceive of graded beliefs as simple attitudes towards probabilistic contents, then it is easier to see how they could be conscious on the assent and transparency views. To consciously and occurrently believe that p is probable would involve assenting to “probably p,” or being aware of things being such that p is probable (Dogramaci 2016). But what if we deny that views on which we have degrees of belief are intertranslatable with views on which we only have outright beliefs with probabilistic contents? Railton (2013) proposes an account of conscious, occurrent degrees of belief. He thinks that we are aware of how strongly we believe something via a particular affective state that he describes as trust-like. When we have a conscious degree of belief, we are aware of its content, and at the same time we are aware of our attitude towards this content in virtue of a feeling of being confident or unconfident in the proposition, or a stronger or weaker feeling of trust that the proposition is true. Railton’s view receives support from research in cognitive psychology. In an interesting survey article, Koriat (2012) explains that factors such as the strength of memory traces and the fluency of our cognitive processing determine whether or not a person has a strong “feeling of knowing.” People’s confidence in their judgments is generated based on the pieces of information that are called up when a judgment is made, the ease of recall, and the amount of conflict and agreement between the different pieces of evidence. Sampling evidence from memory to make a judgment is not necessarily a process that happens consciously or deliberately. Koriat argues that when people make a confidence judgment, they “do not go over the entire protocol underlying their decision, but rely primarily on the ‘gist’ of that protocol. They base their confidence on contentless mnemonic cues, such as the amount of deliberation and conflict that they had experienced in reaching the decision, and the speed with which the decision had been reached” (p. 216). Thus, the feelings that we are aware of when we have a consciously occurrent degree of confidence, according to Railton, may be understood as depending on factors such as the strength of our memory, and the ease and fluency of accessing the claim in question. I have explained how proponents of different views of what makes our beliefs conscious can account for the possibility of consciously occurrent graded beliefs. I won’t judge here which of the positions is most attractive. Instead, I just want to note that we have a variety of explanations available, which makes graded beliefs candidates for attitudes that can participate not only as background attitudes in active reasoning, but also as the central attitudes of which we are consciously aware in deliberation.
5.2 Language involvement Another feature that attitudes involved in active reasoning are taken to have is linguistic structure. This constraint is motivated by the idea that the contents of our thoughts are structured and complex, and that language has the right resources to
provide structure to the contents of our thoughts. More support for the constraint comes from the plausible assumption that the contents of our thoughts and the contents of our utterances are closely related, and one way in which we can spell out this idea is by assuming that their contents share a common underlying linguistic structure (e.g. Carruthers 1998, 2016, Moss 2016, Broome 2013). The basic version of this constraint says that the contents of the attitudes you reason with must have linguistic structure. A stronger version of this constraint adds to the basic version the claim that for an attitude to be a premise or conclusion in active reasoning, it must be asserted in inner speech. Broome (2009) accepts this stronger view, but rejects it later (Broome 2013). We can thus fill in the argument schema in two different ways, depending on whether we accept the basic or the strong version of the language involvement constraint.
Basic:
1. Active reasoning requires that the attitudes with which we reason have linguistically structured contents.
2. If active reasoning requires that the attitudes with which we reason have linguistically structured contents, then, for any attitude A, we can reason with A only if A has linguistically structured contents.
3. Attitude A doesn’t have linguistically structured contents.
4. We can’t reason with attitude A.
Strong:
1. Active reasoning requires (i) that the attitudes with which we reason have linguistically structured contents, and (ii) that we assert the premises of our reasoning to ourselves in inner speech.
2. If active reasoning requires (i) and (ii), then, for any attitude A, we can reason with A only if A has linguistically structured contents, and is assertable in inner speech.
3. Attitude A doesn’t have linguistically structured content or isn’t assertable in inner speech.
4. We can’t reason with attitude A.
The basic version of the language involvement constraint is arguably satisfied by both outright and graded beliefs. At least prima facie, there doesn’t appear to be any good reason to think that the contents of graded beliefs are interestingly different from the contents of outright beliefs. We can think of their contents as linguistically structured propositions, or as natural language sentences, or in some suitably similar way. Hence, the basic version of the argument is unsound, because neither outright nor graded beliefs can be substituted in premise 3 to make this premise true. By contrast, if we accept the stronger claim that active reasoning requires that the premises and conclusions are asserted in inner speech, then this tells against
reasoning with degrees of belief, at least in combination with a standard account of assertion. On the usual view of assertion, a sincere assertion expresses an outright belief. Thus, my high degree of belief that it will rain cannot directly play a part in my reasoning process, because I can’t assert it. What I can assert is a proposition of the form “It will probably rain,” but on the standard view of assertion, I am thereby expressing an outright belief with a probabilistic content, not a high degree of belief. Hence, if my assertions express my outright, but not my graded beliefs, and I must assert the premises of my active reasoning, then only my outright beliefs can figure in active reasoning. Of course, this line of reasoning collapses if we adopt a different view of assertion. Yalcin (2007) and Moss (2013, 2016) have argued that an assertion of “probably p” or “it is unlikely that p” doesn’t express an outright belief with a propositional content that is about probability, but a graded belief. Also, if we accept the view discussed before that there is no substantive difference between the view that we represent uncertainty in graded beliefs towards non-probabilistic contents, and the view that we represent uncertainty in ungraded beliefs with probabilistic contents, then the strong argument fails. On closer inspection, there are independent reasons for rejecting the strong version of the linguistic structure constraint. The first reason is that it is unclear what is added by asserting one’s belief in inner speech. Suppose an agent is consciously entertaining the (suitably linguistically structured) content of one of her attitudes. The content presents itself to the agent in the outright-belief-y way. If the strong language constraint is correct, then this is insufficient for this belief to be available as a premise in reasoning. But it’s not clear that anything else is needed, which would be supplied by asserting the content of the belief in inner speech. Plausibly, we can sometimes bring ourselves to consciously entertain our beliefs by saying them to ourselves in inner speech. But that does not justify the stronger claim that we cannot use a belief, or any other attitude, as a starting point of our reasoning unless we have asserted it in inner speech. One might respond that a belief can’t be conscious unless it is asserted in inner speech. But this view doesn’t strike me as particularly plausible, and I know of no philosopher who defends it. Secondly, the strong language constraint also seems difficult to reconcile with the idea that we can reason with attitudes other than beliefs, such as intentions and suppositions. The way we express suppositions or intentions in language is often hard to distinguish from the way we express beliefs. “I will stay awake until midnight” can express either a belief, or an intention, or both, and maybe it can also express a supposition, depending on the context. But beliefs, intentions, and suppositions behave very differently in terms of which inferences they license. Thus, even if I expressed the premises of my reasoning by asserting them in inner speech, I would still need to independently keep track of what kind of attitudes they are, since the assertion by itself doesn’t make this clear. But if I am already tracking the type and content of my attitude, the assertion doesn’t add anything important. Thus,
saying we can’t reason with attitudes unless they have been asserted in inner speech seems like a pointless extra requirement.⁹ These considerations favor accepting the basic version of the language involvement constraint. The basic version rules out neither graded nor outright beliefs from participating in reasoning, and it doesn’t rely on a particular account of assertion.
5.3 Operating on contents When we engage in active reasoning, we generate new attitudes on the basis of other attitudes. Moreover, we generate these attitudes in a particular way that is different from, for example, free association. But how exactly do we operate on our premise attitudes when we reason? In the literature, we can find two different answers to this question, with some authors not clearly committing to one side or the other. The first view says that when we engage in reasoning, we operate on the contents of our premise-attitudes. The second view says, by contrast, that when we engage in reasoning, we operate not just on the contents of our premise-attitudes, but on the combination of the attitude and its content. The first view seems to be suggested by Paul Boghossian (2014). He proposes a necessary condition for a cognitive process to count as active reasoning, which he calls “inferring.” Here’s what he says: (Inferring) S’s inferring from p to q is for S to judge q because S takes the (presumed) truth of p to provide support for q. On this account, my inferring from (1) and (2) to (3) must involve my arriving at the judgment that (3) in part because I take the presumed truth of (1) and (2) to provide support for (3). Let us call this insistence that an account of inference must in this way incorporate a notion of “taking” the Taking Condition on inference. Any adequate account of inference, I believe, must, somehow or other, accommodate this condition. (Taking Condition): Inferring necessarily involves the thinker taking his premises to support his conclusion and drawing his conclusion because of that fact. The intuition behind the Taking Condition is that no causal process counts as inference, unless it consists in an attempt to arrive at a belief by figuring out what, in some suitably broad sense, is supported by other things one believes. (Boghossian 2014)
Boghossian suggests here that in drawing inferences, reasoners are responsive to evidential support relations between propositions. In drawing an inference from some premise attitudes, the reasoner relies on the truth of the content of the premise attitudes. At the other end of the spectrum, we find, for example, Broome (2013) and Peacocke (2000), who argue that we operate on the combination of an attitude and its content.¹⁰ Peacocke says “Now the thinker who successfully reaches new beliefs by
⁹ Broome (2009) tries to solve the problem of distinguishing which attitude is expressed by an assertion by appealing to different grammatical features of the contents of assertions. In his (2013), he no longer endorses this solution. A similar view is also held by Frankish (2004, 97–103).
¹⁰ Wright (2014) also offers an account of inferring, but it’s not entirely clear how to classify it.
inference has to be sensitive not only to the contents of his initial beliefs. He has also to be sensitive to the fact that his initial states are beliefs.”
We will begin by plugging the first view into our argument schema:
1. Active reasoning involves drawing inferences from the contents of one’s premise-attitudes, but not from the attitudes themselves.
2. If active reasoning involves drawing inferences from the contents of one’s premise-attitudes, but not from the attitudes themselves, then, for any attitude A, we can reason with A only if all of A’s inference-relevant features are part of A’s content.
3. It is not the case that all of attitude A’s inference-relevant features are part of its content.
4. We can’t reason with attitude A.
If we substitute ‘degrees of belief’ for ‘A’ in premise three, we get an argument that, if sound, is bad news for reasoning with degrees of belief. Compare the following pair of inferences:
Outright belief inference (OBI): Bel (It will rain tomorrow) → Bel (The garden party won’t happen)
Graded belief inference (GI): High confidence (It will rain tomorrow) → Low confidence (The garden party will happen)
In OBI, the content of the premise-belief and the content of the conclusion-belief stand in the right kind of support relation to one another. If it is true that it will rain tomorrow, then this supports the truth of the claim that the garden party won’t happen. The same thing is not true of GI. Here, the content of the premise attitude does not support the content of the conclusion attitude. Rain tomorrow would lead to a cancelled garden party, not the opposite! Of course this doesn’t mean that GI is not a perfectly sensible inference. It means that in order to reason with degrees of belief, both the content and the attitude towards the content must be taken into consideration in generating a conclusion, and the conclusion is not simply a proposition; it’s a particular attitude towards a proposition. Hence, only a proponent of the content-plus-attitude view mentioned above can capture reasoning with graded beliefs.
Of course, there won’t be a problem if we adopt the view that uncertainty is encoded in simple attitudes with probabilistic contents. Yet, if those contents are not propositions, but sets of probability spaces, as suggested by Moss (2016), then we still cannot say that what is assumed in inferring is the truth of the contents of the premise-attitudes, since sets of probability spaces are not true or false.
If we want to reject the claim that we operate only on attitude contents in reasoning, we should not do so based on the observation that this would rule out reasoning with graded beliefs. This would be begging the question. We must look
for independent reasons to doubt this claim. A good case for adopting the content-plus-attitude view can be made by pointing out that the same contents can be had by different attitudes, and these different attitudes need to be tracked in reasoning. Broome comes to adopt the second view for precisely this kind of reason when he discusses practical reasoning (Broome 2013, ch. 13).
To illustrate this point, consider first cases of hypothetical reasoning, which can involve the same reasoning steps as non-hypothetical reasoning. You might reason like this: “I arrive in Paris on Tuesday. If I arrive on Tuesday, I cannot visit the Louvre on the same day, because it’s closed on Tuesdays. Hence, I cannot spend my first day in Paris at the Louvre.” Nothing about this inference tells us whether it is an inference from premises that I believe, or an inference that is merely hypothetical. “I arrive in Paris on Tuesday” could be either a belief or a supposition, and which one it is determines whether it is appropriate to come to believe the conclusion of the argument. This example provides reasons to think that in reasoning, we operate on the attitudes and their contents, rather than just the contents. The content of “I arrive in Paris on Tuesday” is the same, regardless of whether it is the content of a belief or a supposition. But when used as a premise in reasoning, which one it is makes a difference to whether or not I come to believe the conclusion of my reasoning. If I weren’t keeping track of the type of my premise-attitude, but only its content, it’s unclear how I would be able to form the appropriate attitude towards the conclusion of my reasoning. This example makes it plausible that reasoners must keep track of the contents and the types of their attitudes.
Secondly, consider cases of practical reasoning. A claim like “I will visit Paris next summer” can also be the content of an intention, not just of a belief or supposition. Whether I intend to visit Paris next summer or merely believe that I will makes an important difference to what kinds of inferences I am under rational pressure to make. Suppose I am attending to my intention to visit Paris next summer by thinking “I will visit Paris next summer.” If I am rational, my intention should motivate me to consider what the necessary means for doing so are. For example, I might reason my way to the intention to buy a plane ticket. By contrast, suppose I have an outright belief, but not an intention, that I will visit Paris next summer. Maybe this is so because I will go on vacation with my aunt, who I believe will force me to visit Paris even if I don’t want to go. If I merely believe that I will visit Paris, rather than intending it, my belief cannot replace the intention in the piece of reasoning just mentioned. My belief does not generate any rational pressure to form intentions that will facilitate my going there. If reasoning were purely an operation on contents, then an outright belief and an intention with the same content would have to play the same role in reasoning, since the difference in attitudes couldn’t contribute to the inference being drawn. However, a belief and an intention with the same content clearly don’t play the same role in reasoning, since, for example, they don’t generate the same rational pressure to engage in means-end
reasoning. Again, this gives us reason to conclude that reasoning cannot be an operation purely on contents. This is good news for the possibility of reasoning with graded beliefs. We now have independent grounds for thinking that in reasoning, we must keep track of both the contents and the types of our premise-attitudes, in order to generate conclusions with the correct contents and attitude types. If this is correct, then there is nothing problematic about an inference such as GI, where I form a low degree of belief that the garden party will happen based on a high degree of belief that it will rain tomorrow.
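Schematically (this gloss is mine, offered purely as an illustration, and the notation is not a formalization endorsed by any of the authors discussed), the two views can be contrasted as follows. On the content-only view, an inference is an operation of the form
\[
p_1, \ldots, p_n \;\Longrightarrow\; q,
\]
where the \(p_i\) and \(q\) are contents. On the content-plus-attitude view, it is an operation of the form
\[
\langle A_1, p_1 \rangle, \ldots, \langle A_n, p_n \rangle \;\Longrightarrow\; \langle A^{*}, q \rangle,
\]
where the \(A_i\) are attitude types (belief, supposition, intention, high confidence, and so on) and the concluding attitude type \(A^{*}\) may depend on the premise attitude types, not just on the contents. GI is then the instance in which \(A_1\) is high confidence, \(A^{*}\) is low confidence, \(p_1\) is the proposition that it will rain tomorrow, and \(q\) is the proposition that the garden party will happen. This inference is well formed on the second view but has no content-only analogue.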
5.4 Working memory
Another mark of active reasoning is that it takes up working memory, whereas passive reasoning doesn’t (see e.g. Evans and Stanovich 2013). Based on this observation, we can fill in our argument schema as follows:
1. Active reasoning relies on working memory.
2. If active reasoning relies on working memory, then, for any attitude type A, we can only reason with attitudes of type A if doing so doesn’t overload the capacities of working memory.
3. Reasoning with attitudes of type A overloads the capacities of working memory.
4. We can’t reason with attitudes of type A.
An argument of this kind is endorsed by Gilbert Harman (1986). He argues that reasoning with degrees of belief outstrips the capacities of working memory, and concludes that only outright beliefs can figure in reasoning. Harman claims that reasoning with graded beliefs would have to make extensive use of conditionalization to incorporate new evidence, which is too computationally complex for the human mind. I argue elsewhere that this line of reasoning is problematic, because it assumes that reasoning with graded beliefs must be done, if it can be done at all, by following ideally correct rules (Staffel 2013). This assumption is unfounded; we can reason with heuristics as well.
Here I want to focus on a different aspect of the argument. Harman intends to show that the limitations of working memory rule out the possibility of active reasoning with degrees of belief altogether. But of course Harman can’t claim that we never reason with attitudes that involve uncertainty. He says “Of course, to say one normally thinks of belief in an all-or-nothing way is not to deny one sometimes has beliefs about probabilities.” He must allow that we can sometimes actively reason with outright beliefs about probabilities, or more generally, with beliefs representing uncertainty, such as “I’ll probably arrive late.” Thus, his position is that only outright beliefs can participate in reasoning, and that some of these beliefs can represent uncertainty in their content.
Suppose Harman is correct in pointing out that in order to keep reasoning tasks manageable, we can only handle some amount of uncertainty, and so we must take a
lot of things for granted. This does not bear on the question of how uncertainty must be encoded in our attitudes to enter into active reasoning.¹¹ Making an inference from the conscious outright belief that I will probably be early for my meeting to the conscious outright belief that I probably have enough time to get a coffee seems just as complicated as arriving at a high degree of belief that I have enough time to get coffee based on a high degree of belief that I will be early for my meeting. Hence, the argument that active reasoning is constrained by the resources of working memory can’t rule out that we can reason with graded belief if it admits the possibility of reasoning with beliefs with probabilistic contents.
But even if Harman’s argumentative strategy is unconvincing, it might still be correct that we need outright beliefs in addition to graded beliefs in reasoning, especially in reasoning about what the best available option is. In a nutshell, the argument for this goes as follows. We often reason about what the best option is by employing coarse-grained framings of decision problems. When reasoning in this way, we treat some claims as true but defeasible premises. The attitudes we have towards these claims are suitably described as outright beliefs. Versions of this argument have been endorsed by various authors, such as Lance (1995), Joyce (1999, section 2.6), and Ross and Schroeder (2014).
Let’s consider the steps in more detail. According to normative decision theory, a rational agent goes about identifying the best option(s) in the following way. She catalogs all available actions and all the ways the world might be. She then calculates the expected utility of each action by appealing to the utilities of the outcomes of different actions in different world states, and the probabilities of these world states. She then selects the action(s) with the highest expected utility as best. Of course, human reasoners can’t make decisions in this maximally fine-grained way. We can’t ever consider all available actions and every possible state of the world. While we seem to do something that is similar to the process just described—thinking about how different actions might turn out under different circumstances—we usually consider only a limited space of options.
Here’s an example. Jane, a poor graduate student, is deciding how many bottles of wine to buy for her dinner party. She wants to ensure she has enough, but she also wants to save money. She might consider the following actions: buying one bottle or two bottles. She evaluates them with regard to the following world states: her guests bring wine, or they don’t. She ignores possible actions such as buying more than two bottles, and she also ignores possible world states in which a bottle breaks or her guests bring more than one bottle. She then reasons as follows. Suppose they don’t bring wine. Then it would be better to buy two bottles, since I’ll look like a bad host if I have only one. But if they bring wine, then I’ll have wasted money on a second bottle, which I should spend on something else. Will Jane decide to buy a second bottle? It depends on exactly how likely
¹¹ Harman also argues that degrees of belief are not represented in the proper format to participate in any kind of reasoning. But this argument relies on a misconception, as I show in Staffel (2013).
she thinks it is that her guests will bring wine, and how much she disvalues wasting money or being seen as a bad host.
Thinkers like you and me frequently engage in reasoning processes similar to Jane’s. These processes structurally resemble the decision procedures recommended by normative decision theory, but only take into account a very limited set of options. In reasoning in this way, Jane relies on claims such as “If my guests bring a bottle of wine, we will have enough even if I only buy one bottle” and “If my guests don’t bring wine and I only have one bottle, I will look like a bad host.”
How should we describe Jane’s attitude towards these claims? There are three options: (i) Jane is certain of these claims; (ii) Jane treats them as being merely probable, but possibly false; or (iii) Jane treats them as true for the purposes of reasoning (without being certain of them). The first option is not very attractive. The claims in question express a clearly defeasible relationship between an action and an outcome, and so it seems implausible to claim that reasoners take them to be certain. For example, it might easily occur to Jane that if she dropped and broke one of the bottles, there wouldn’t be enough wine even if she and her guests each contributed a bottle. According to the second option, Jane treats these claims as being merely probable, but possibly false. This option isn’t very plausible either. When we attribute to someone reliance on a non-extreme degree of belief in their reasoning, we do so based on particular kinds of evidence, such as that they don’t treat the claim under consideration as settled, but instead somehow take into account what happens if the claim is false. This is because the role of attitudes that explicitly encode uncertainty is to help the agent take into account different possibilities and their likelihood. For example, it does make sense to ascribe to Jane a high degree of belief that her guests will bring wine, because she also attends to the possibility that they might not bring any. But as we have set up Jane’s case, there is no evidence based on which we can attribute to her the attitude that it is likely, but not settled, that if her guests bring a bottle of wine, they will have enough even if she buys just one bottle. Her reasoning is insensitive to the possibility that they still might not have enough wine.¹² Hence, it is much more plausible to instead adopt option (iii), which characterizes Jane as treating claims such as “If my guests bring a bottle of wine, we will have enough even if I only buy one bottle” as true for the purposes of reasoning. This is compatible with her taking those claims to be defeasible.
Thus, our best characterization of Jane’s reasoning attributes to her attitudes that let her treat some claims as true but defeasible premises in her inferences. These attitudes are not suitably characterized as high degrees of belief, or complete certainties. But are they outright beliefs? There are further possible options for how to characterize these attitudes: as suppositions, as acceptances, or as high degrees of belief that are
¹² I don’t mean to endorse the view here that the following is impossible: an agent makes a decision that depends on whether p holds based on a high credence in p, without treating p as true, while also ignoring the possibility that ~p. Maybe this is impossible, but even if it isn’t, it seems clearly irrational according to standard decision theory. We should avoid attributing attitudes to Jane that make her reasoning seem irrational.
rounded up to 1. All of these attitudes allow agents to treat claims as true but defeasible premises in inferences. I can’t give an extended argument here for why they aren’t suppositions or acceptances, but this is at least one reason: an important difference between outright beliefs on the one hand and suppositions and acceptances on the other hand is that the former are characteristically formed involuntarily and automatically, whereas the latter are adopted deliberately and voluntarily. Most of the time, we don’t deliberately choose what to treat as true in reasoning; rather, the framing of decision problems is executed by automatic, non-deliberative processes. This counts in favor of classifying the relevant attitudes as outright beliefs rather than suppositions or acceptances.
What about the option that these attitudes are high degrees of belief that are rounded up to 100 percent confidence for the purposes of simplifying our reasoning? As I mentioned in Section 2, there is good reason to think that rounded degrees of belief are identical to outright beliefs, so the difference is just one of labeling. If there is a more substantial difference, then of course further arguments will be needed to settle the question of how the attitudes should be characterized that let us treat claims as true but defeasible premises in reasoning.
6. Conclusion
I have explored different strategies for determining which types of belief can participate in active reasoning. I have argued that, based on the key features of active reasoning we have examined, we cannot rule out either graded or outright beliefs as participants in reasoning. Proponents of restrictive views of what attitudes we can reason with must either provide us with additional, independent reasons why particular types of belief can’t be employed in reasoning, or expand their theories to make room for both graded and outright beliefs.
The results of our discussion also shed light on controversies about whether there is really a substantive difference between views that are frequently distinguished in the literature. One of these controversies concerns the question of whether we should think of uncertainty as being encoded in graded attitudes with non-probabilistic contents or as encoded in ungraded attitudes with probabilistic contents. Some authors treat these views as essentially intertranslatable, whereas others seem to assume that they are substantively different. The fact that it doesn’t seem to matter which account we use in explaining how we can reason with attitudes that encode uncertainty supports the view that they are intertranslatable. Another question concerns how we should think of the difference between views that postulate the existence of outright beliefs, and graded-belief-only views that allow for rounded degrees of belief, which let reasoners treat claims as true that they are not completely confident in. It is worth investigating whether outright beliefs have any additional characteristics that might distinguish them from rounded degrees of belief, but it might again turn out that these views are intertranslatable.
References
Boghossian, Paul (2014), What is Inference?, Philosophical Studies 169 (1), 1–18.
Broome, John (2009), The Unity of Reasoning?, in: S. Robertson (ed.), Spheres of Reason, Oxford: Oxford University Press, 2009, 62–92.
Broome, John (2013), Rationality through Reasoning, Chichester: Wiley-Blackwell.
Buchak, Lara (2014), Belief, Credence, and Norms, Philosophical Studies 169 (2), 285–311.
Carruthers, Peter (1998), Conscious Thinking: Language or Elimination?, Mind and Language 13, 323–42.
Carruthers, Peter (2016), The Illusion of Conscious Thought, manuscript.
Clarke, Roger (2013), Belief Is Credence One (In Context), Philosophers’ Imprint 13, 1–18.
Dennett, Daniel (1991), Consciousness Explained, Boston: Little Brown.
Dogramaci, Sinan (2016), Knowing Our Degrees of Belief, Episteme 13 (3), 269–87.
Drayson, Zoe (2012), The Uses and Abuses of the Personal/Subpersonal Distinction, Philosophical Perspectives 26, 1–18.
Evans, Jonathan St. B. T. (2008), Dual-Processing Accounts of Reasoning, Judgment, and Social Cognition, Annual Review of Psychology 59, 255–78.
Evans, Jonathan St. B. T. and Stanovich, Keith E. (2013), Dual-Process Theories of Higher Cognition: Advancing the Debate, Perspectives on Psychological Science 8 (3), 223–41.
Foley, Richard (2009), Beliefs, Degrees of Belief, and the Lockean Thesis, in: F. Huber and C. Schmidt-Petri (eds.), Degrees of Belief, Synthese Library 342, New York: Springer, 2009, 37–47.
Frankish, Keith (2004), Mind and Supermind, Cambridge: Cambridge University Press.
Frankish, Keith (2009a), Partial Belief and Flat-Out Belief, in: F. Huber and C. Schmidt-Petri (eds.), Degrees of Belief, Synthese Library 342, New York: Springer, 2009, 75–93.
Frankish, Keith (2009b), Systems and Levels: Dual System Theories and the Personal-Subpersonal Distinction, in: J. Evans and K. Frankish (eds.), In Two Minds: Dual Processes and Beyond, Oxford: Oxford University Press, 2009, 89–107.
Frankish, Keith (2010), Dual-Process and Dual-System Theories of Reasoning, Philosophy Compass 5 (10), 914–26.
Greco, Daniel (2015), How I Learned to Stop Worrying and Love Probability 1, Philosophical Perspectives 29, 179–201.
Greco, Daniel (2016), Cognitive Mobile Homes, Mind 126 (501), 93–121.
Grice, Paul (2001), Aspects of Reason, Oxford: Oxford University Press.
Harman, Gilbert (1986), Change in View, Cambridge, MA: MIT Press.
Hawthorne, John and Stanley, Jason (2008), Knowledge and Action, Journal of Philosophy 105 (10), 571–90.
Jeffrey, Richard (1970), Dracula Meets Wolfman: Acceptance vs. Partial Belief, in: M. Swain (ed.), Induction, Acceptance, and Rational Belief, Dordrecht: D. Reidel, 1970, 157–85.
Joyce, James M. (1999), The Foundations of Causal Decision Theory, Cambridge: Cambridge University Press.
Kahneman, Daniel (2011), Thinking, Fast and Slow, New York: Farrar, Straus and Giroux.
Kaplan, Mark (1996), Decision Theory as Philosophy, Cambridge: Cambridge University Press.
Konek, Jason (2016), Probabilistic Knowledge and Cognitive Ability, Philosophical Review 125 (4), 509–87.
Koriat, Asher (2012), The Subjective Confidence in One’s Knowledge and Judgments: Some Metatheoretical Considerations, in: Michal J. Beran et al. (eds.), Foundations of Metacognition, Oxford: Oxford University Press, 213–33.
Kruglanski, Arie W. and Gigerenzer, Gerd (2011), Intuitive and Deliberative Judgments are Based on Common Principles, Psychological Review 118, 97–109.
Lance, Mark Norris (1995), Subjective Probability and Acceptance, Philosophical Studies 77, 147–79.
Leitgeb, Hannes (2016), The Stability of Belief: How Rational Belief Coheres with Probability, Oxford: Oxford University Press.
Lin, Hanti (2013), Foundations of Everyday Practical Reasoning, Journal of Philosophical Logic 42 (6), 831–62.
Lin, Hanti and Kelly, Kevin (2012), Propositional Reasoning that Tracks Probabilistic Reasoning, Journal of Philosophical Logic 41 (6), 957–81.
Lycan, William G. (2004), The Superiority of HOP to HOT, in: Rocco G. Gennaro (ed.), Higher-Order Theories of Consciousness, Amsterdam: John Benjamins, 93–113.
Lyon, Aidan (2014), Resisting Doxastic Pluralism: The Bayesian Challenge Redux, manuscript.
Mellor, D. H. (1978), Conscious Belief, Proceedings of the Aristotelian Society, New Series 78, 87–101.
Moss, Sarah (2013), Epistemology Formalized, The Philosophical Review 122 (1), 1–43.
Moss, Sarah (2016), Probabilistic Knowledge, book manuscript.
Mugg, Joshua (2016), The Dual-Process Turn: How Recent Defenses of Dual-Process Theories of Reasoning Fail, Philosophical Psychology 29 (2), 300–9.
Peacocke, Christopher (2000), Conscious Attitudes, Attention and Self-Knowledge, in: Crispin Wright et al. (eds.), Knowing Our Own Minds, Oxford: Oxford University Press, 63–98.
Pettigrew, Richard (2016), Accuracy and the Laws of Credence, Oxford: Oxford University Press.
Railton, Peter (2013), Reliance, Trust, and Belief, Inquiry 57 (1), 122–50.
Ross, Jacob and Schroeder, Mark (2014), Belief, Credence, and Pragmatic Encroachment, Philosophy and Phenomenological Research 88 (2), 259–88.
Schiffer, Stephen (2003), The Things We Mean, Oxford: Oxford University Press.
Staffel, Julia (2013), Can There Be Reasoning with Degrees of Belief?, Synthese 190, 3535–51.
Staffel, Julia (2016), Unsettled Thoughts, ch. 1, manuscript.
Sturgeon, Scott (2015), The Tale of Bella and Creda, Philosophers’ Imprint 15 (31).
Tang, Weng Hong (2015), Belief and Cognitive Limitations, Philosophical Studies 172 (1), 249–60.
Valaris, Markos (2014), Self-Knowledge and the Phenomenological Transparency of Belief, Philosophers’ Imprint 14 (8).
Wedgwood, Ralph (2012), Outright Belief, Dialectica 66 (3), 309–29.
Weisberg, Jonathan (2013), Knowledge in Action, Philosophers’ Imprint 13 (22).
Weisberg, Jonathan (2016), Belief in Psyontology, Philosophers’ Imprint, forthcoming.
Wright, Crispin (2014), Comment on Paul Boghossian, “What Is Inference?”, Philosophical Studies 169 (1), 27–37.
Yalcin, Seth (2007), Epistemic Modals, Mind 116 (464), 983–1026.
Reasoning and Agency
5
The Question of Practical Reason
Nicholas Southwood
Practical reasoning is often said to involve exercising a capacity to settle a distinctively practical question: the question of how we are to act.¹ But what exactly is this “question” that practical reasoning is supposed to involve exercising a capacity to settle? How should we understand the question of practical reason, as I shall call it? The problem is a difficult one, in part, because practical reason appears to have two elements that pull in different directions. First, there is what I shall call the correct responsiveness aspect of practical reason. The capacity to settle the question of how we are to act entails the capacity to answer the question correctly. This means that there is a correct answer to the question; that we have the capacity to discover what the correct answer is; and that we have the capacity to answer the question on the basis of those considerations that make it the correct answer. Practical reasoning is quite different, in this respect, from mere random selection (as when we pick a dish by closing our eyes and placing our finger haphazardly on the menu). There may be a correct answer to the question of which dish we are to order but clearly we are not aiming to discover the correct answer, still less to pick the correct dish because of considerations in virtue of which it is correct. But, second, practical reason also appears to have what I shall call an authoritative aspect. It involves exercising a distinctive power, the power to settle the question of what one is to do. We have a kind of authority with respect to this question. The ability to settle the question of what one is to do is not simply the ability to discover the answer to it. Rather, it is to make it the case that the question is now settled in
I am grateful to the editors for helpful written comments and to participants at the Reasoning Conference at the University of Konstanz (especially my commentator, Paulina Sliwa) and the Moral Rationalism Workshop at the University of Melbourne for valuable feedback and discussion. Research was supported by DP120101507 and DP140102468.
¹ For example, as R. Jay Wallace puts it, to engage in practical reasoning involves the “general human capacity for resolving, through reflection, the question of what one is to do” (Wallace 2013, p. 1, italics added). Robert Dunn writes: “When we are deliberating . . . [w]e are trying . . . to settle the question of what to do” (Dunn 2006, p. 58, italics added). Similar statements abound in philosophical and empirical work on practical reasoning.
favor of doing this or that; and, hence, potentially to change the answer to the question of how we are to act. By engaging in practical reasoning about whether one is to buy a particular house and deciding to buy the house, we may make it the case that the question is now settled in favor of buying it.
My aim is to consider the capacity of two rival conceptions of practical reason to accommodate these two core aspects. I shall argue that the dominant theoretical conception of practical reason, which holds that the question of practical reason is a theoretical question with a practical subject matter, is, at best, capable of plausibly accommodating one or the other aspect but not both. I shall then argue that an alternative, the practical conception, can do rather better. This holds that the question of practical reason is a distinctively practical kind of question—the question of what to do—that is distinct from the question of what one will do and the question of what one ought or has reason to do (Hieronymi 2009; Southwood 2016a; 2016c; 2018a).² The practical conception can easily accommodate the authoritative aspect of practical reason. But I shall argue that it can also accommodate the correct responsiveness aspect once we recognize that the correct answer to the question of what to do depends exclusively on agents’ actual attitudes. I shall conclude by saying something about the implications of the practical conception, thus interpreted, for several important issues in meta-ethics and the philosophy of normativity.
1. Is There a Question of Practical Reason?
I am assuming that there is a question of practical reason—a question that we aim to settle in practical reasoning. Before we begin, however, it is worth briefly considering a challenge to this idea. This involves the important recent account of practical reasoning due to John Broome (2013), in which the idea of a special question that we are undertaking to settle seems to play no role.
According to Broome, we engage in reasoning just insofar as we follow certain kinds of rules that enjoin us to derive the “marked content” of an attitude from the
² There are three main existing kinds of arguments for the practical conception. The first are intuitive arguments. These hold that there are instances of mental activity that, intuitively, appear to be instances of practical reasoning but do not appear to be instances of settling the question of what we will do, or what we ought or have reason to do (see e.g. Southwood 2016a, p. 64). The problem with these arguments is that a proponent of the theoretical conception may deny that these are bona fide instances of reasoning. To overcome this response we would need to offer sufficient conditions for practical reasoning—a formidable task that I have no wish to undertake here. The second are meta-ethical arguments. These hold that the practical conception is presupposed by some kind of meta-ethics, e.g. a non-cognitivist (Gibbard 2003) or constructivist (Southwood 2018b) meta-ethics. The problem with these arguments is that they have no appeal whatsoever for those who are unsympathetic to the meta-ethical positions in question. The third (and most interesting) argument holds that the practical view is uniquely capable of explaining the special way in which we are answerable for the conclusions of practical reason (Hieronymi 2009). However, the problem with this argument is that we are answerable, not only for our decisions but also for our actions; but no one (with the possible exception of Aristotle) thinks that the actions we settle on are part of practical reasoning. The answerability test does not seem to be a good test.
marked content(s) of some other attitude(s). A marked content is a proposition with a “marker” to indicate the kind of attitude you have towards the proposition. Take a very simple kind of Modus Ponens Rule that tells you to derive the marked content ⟨q; belief⟩ from the marked contents ⟨p; belief⟩ and ⟨if p then q; belief⟩. Suppose that I read on the front page of the Canberra Times that Britain has voted to leave the European Union, so I form the belief that Britain has voted to leave the European Union. Suppose, moreover, that I believe that if Britain votes to leave the European Union, then the British economy will suffer. And suppose that I derive ⟨the British economy will suffer; belief⟩ from ⟨Britain has voted to leave the European Union; belief⟩ and ⟨if Britain votes to leave the European Union, then the British economy will suffer; belief⟩ by correctly following the simple Modus Ponens Rule. This is an instance of reasoning.
What about practical reasoning? Practical reasoning is reasoning that culminates in an intention. For example, one kind of practical reasoning that Broome discusses is what he calls “enkratic reasoning.” Enkratic reasoning involves reasoning by following an enkratic rule. Broome’s enkratic rule tells us to derive the marked content ⟨I will F; intention⟩ from the marked contents ⟨I ought to F; belief⟩ and ⟨it is up to me whether I F; belief⟩ (Broome 2013, p. 290). Suppose that I am watching a documentary about the refugee crisis in Syria and form the belief that I ought to donate, say, $5000 to Médecins Sans Frontières. Suppose, moreover, that I believe that it is up to me whether or not I give $5000 to Médecins Sans Frontières. I will engage in enkratic reasoning insofar as I derive ⟨I will give $5000 to Médecins Sans Frontières; intention⟩ from ⟨I ought to give $5000 to Médecins Sans Frontières; belief⟩ and ⟨it is up to me whether or not I give $5000 to Médecins Sans Frontières; belief⟩.
Broome’s account might seem to show that we do not need to postulate a special deliberative question to make sense of practical reasoning. That’s because Broome purports to have identified sufficient conditions for practical reasoning; and it seems that these conditions could obtain without the agent undertaking to settle a special deliberative question. To assess this challenge, the crucial issue is whether Broome’s conditions are indeed sufficient for practical reasoning. I shall suggest that they aren’t.
To see this, consider the following (admittedly rather bizarre) activity (Southwood 2016c). The activity involves (a) opening at random a page from a special book in which one has enumerated many things one believes one ought to do and that one believes are such that it is up to one whether or not one does them, (b) identifying the first normative claim that catches one’s eye, and then (c) forming a corresponding intention by following the enkratic rule. Suppose, then, that I open the book and the first normative claim that catches my eye is the claim that I ought to take out the bins on Friday. Suppose, moreover, that I follow the enkratic rule and come to form an intention to take out the bins on Friday on the basis of believing I ought to take out the bins on Friday and believing that it is up to me whether I do so. In engaging in activity of this kind, I satisfy Broome’s sufficient conditions for (enkratic) practical
reasoning. But it seems highly implausible to suppose that I am engaging in practical reasoning. Why not? What is lacking is the special way in which we manifest agency in practical reason. Practical reasoning involves forming intentions in the service of responding to specific choice situations. This is not what happens when I form an intention to take out the bins on Friday. There is no such choice situation—say, whether I am to take out the bins on Friday, or when I am to take out the bins, or what I am to do on Friday—to which my intention to take out the bins is a response. There are certain kinds of choice situations that I face: say, whether I am to open the special book, whether I am to perform whichever act (or form whichever intention) happens to correspond to the first normative claim that catches my eye, whether I am to follow the enkratic rule. And, plausibly, there are various intentions that I form in response to these various choice situations. So, plausibly, I do manifest the right kind of agency at various points. But I don’t manifest the right kind of agency in forming the intention to take out the bins on Friday.
How might we capture this agency-involving aspect of practical reasoning? The obvious solution is to supplement Broome’s account so that it requires, in addition, that reasoners have the aim of settling the question of whether they are to perform certain acts and that their mental activity is in pursuit of that aim. An account of this kind would not be vulnerable to our counterexample. For although I form the intention to take out the bins on Friday by following the enkratic rule, I do not form the intention in pursuit of the aim of settling the question of whether I am to perform some act: say, whether I am to take out the bins on Friday. So, it follows that I am not engaged in practical reasoning.
Next, suppose instead that we change the case so that I do possess and am acting in pursuit of the relevant aim. For example, suppose that I am trying to settle the question of whether I am to take out the bins on Friday and I engage in some enkratic reasoning that results in my forming the intention to take out the bins on Friday by following the enkratic rule. This does appear to involve manifesting my agency in the right way to count as practical reasoning.
2. The Theoretical Conception of Practical Reason
Let us suppose, then, that there is a question of practical reason. How should it be understood? According to perhaps the dominant conception of practical reason, the question is simply an ordinary question with a practical subject matter. Call this the theoretical conception of practical reason. There are two importantly different interpretations of the theoretical conception. The descriptive interpretation of the theoretical conception holds that the question is a descriptive question about one’s future conduct. To engage in practical reasoning involves undertaking to settle the question of how we will act: whether we will perform some act or whether we won’t (Harman 1997; Velleman 2000). On the more common normative interpretation, the question is a normative question. That is to say that practical reasoning involves undertaking
to settle the question of what we ought or have reason to do (Audi 2006, p. 3; Scanlon 2007; Parfit 2011; Cullity 2008; Chang 2013; Raz 1999). Either way, practical reasoning is thought to be a special kind of theoretical reasoning. Let us consider each interpretation in turn.
2.1. The descriptive interpretation
Consider first the descriptive interpretation. This holds that the question of practical reason is just the question of what we will do (Velleman 2000, p. 25). There is a fact of the matter about how we will act: whether we will bowl a searing bouncer to the poor batsman who has just arrived at the crease, whether we will pat the kangaroo in our garden, and so on. So the question that we are exercising a capacity to settle in practical reason has an answer—an answer that is, in principle, discoverable (and indeed that we are in a special privileged position to discover). Thus, the correct responsiveness aspect of practical reason is vindicated. Moreover, we are in a special privileged position not only to discover but to settle the question of how we will act, since we can make true an answer to the question of how we will act by deciding to act this way or that. For example, I am in a special position not only to discover but also to make it true that I will bowl a searing bouncer to the batsman by deciding to bowl a bouncer. So the authoritative aspect of practical reason is also vindicated.
But this is too quick. First, take the correct responsiveness aspect. The problem is that the kind of answer we seem to have the capacity to discover in practical reason is supposed to be independent of our decisions, in the sense that we can base our decisions on the considerations that make them correct. Our decisions are supposed to be appropriately responsive to the answers. But even if there are answers to various questions about what we will do, the answers are clearly not independent of one’s decision to do these things. Rather, the only reason to think that we will act in a certain way is that we will decide to act in that way. Our decisions cannot be based on, and hence they cannot be correct responses to, the answer to the question of what we will do. So the descriptive interpretation of the theoretical conception cannot, in fact, account for the correct responsiveness aspect, properly understood.
The more serious problem, however, is that the descriptive interpretation of the theoretical conception cannot account for the authoritative aspect of practical reason. The problem here is that it is simply not right to say that practical reasoning involves a capacity to make it true that we will act in a certain way. Practical reasoning culminates in a decision to act in this way or that. A decision to act in a certain way clearly doesn’t imply that we will act in that way. For example, one might engage in some practical reasoning and decide to ask one’s childhood sweetheart to marry one over dinner this evening. But this doesn’t mean that one will ask one’s childhood sweetheart to marry one over dinner this evening. When it comes to the crunch, one might chicken out and find oneself unable to carry through on one’s decision. Or perhaps one has a terrible row in consequence of which one changes one’s mind.
Or perhaps a fire breaks out in the restaurant, requiring instant evacuation and putting an end to one’s romantic plans. At most, then, practical reasoning involves, say, making it more likely that we will act in a certain way; or perhaps making it the case that we will act in a certain way so long as e.g. we don’t change our mind or chicken out and the world and other relevant agents don’t thwart us; or perhaps making it the case that we decide to do this or that. These don’t seem like plausible candidates for what it is to settle the question of practical reason. So the descriptive interpretation cannot plausibly account for the authoritative aspect either.
2.2. The normative interpretation
Next, consider the normative interpretation. This holds that the question of what we are to do is the question of what we ought (or have reason) to do. The normative interpretation comes in a number of different versions.
1. One version holds that there is a fact of the matter about what we ought or have reason to do that is independent of what we decide, and that practical reason involves exercising a capacity to ascertain it (Audi 2006, p. 3; Parfit 2011). Such a view is well placed to account for the correct responsiveness aspect of practical reason. For example, if there is an independent fact of the matter about whether we ought or have reason to buy a house, then it follows that there is an answer to the question of whether we ought or have reason to buy the house that is independent of what we decide, that we might potentially discover by reflection and on which we might potentially base our decision. What the view plainly cannot account for, however, is the authoritative aspect of practical reason. For if there is an independent fact of the matter about what we ought or have reason to do, then clearly it’s not up to us to settle the question of what we ought or have reason to do. Rather, what settles the question is simply the independent fact of the matter about what we ought or have reason to do.
2. A second version of the normative interpretation holds that truths about what we ought or have reason to do are determined simply by what we decide. Such a view straightforwardly accounts for the authoritative aspect of practical reason since it implies that we really do possess the ability to settle the question that confronts us in practical reason. The problem is that it cannot account for the correct responsiveness aspect. If what we ought or have reason to do is determined simply by our decisions, then at t1, when we ask the question of what we ought or have reason to do, the answer to the question is provided by the fact that, at the later time, t2, we settle the question by deciding to act in a certain way. But it is very odd to think of this as involving our trying to “discover the answer” to the question of what we ought or have reason to do. Insofar as there is an answer that we are trying to discover, it seems to be an answer to a quite different question: the question of what we will decide. In any case, the answer is not independent of what we decide. We clearly cannot base our decision on the correct answer. So we cannot form decisions in a way that
amounts to responding correctly to the answer to the question of what we ought or have reason to do.
3. A third version of the normative interpretation holds that our decisions sometimes make it the case that we ought or have reason to act in the way we have decided, namely in those situations where our independent reasons underdetermine what we ought or have reason to do: when our reasons are either equally strong (Cullity 2008) or on a par (Chang 2013). For example, if the independent reasons that I have to go to Italy as opposed to France are just as strong as (or on a par with) the reasons I have to go to France as opposed to Italy, then deciding to go to Italy can make it the case that this is what I ought or have reason to do. The problem is that what we are looking for is a conception of practical reason that involves both exercising a capacity to discover and respond correctly to the answer and exercising a capacity to determine the answer to the question. By contrast, the Cullity–Chang view holds that practical reason either involves exercising a capacity to discover and respond correctly to the answer or determining the answer to the question of what we ought or have reason to do, but not both. For the only situations in which there is an answer to be discovered are those where we lack the capacity to determine the answer to the question. And the only situations where we possess the capacity to determine the answer to the question are those in which there is no independent answer that we might potentially discover.
4. There is also a fourth version. Like the third version, it holds that there are two sources of truths about what we ought or have reason to do: independent normative reality, on the one hand; and our decisions, on the other. However, like the second version and unlike the third version, it holds that so long as a certain competence condition is met, our decisions potentially always make it the case that we ought or have reason to act as we decide (as opposed to doing so only in situations in which our independent reasons underdetermine what we ought or have reason to do). The idea is that there is an answer to the question of what we ought or have reason to do, at t1, when we ask the question, that does not depend on the fact that, at t2, we decide to act in a certain way, but that we have the ability to change the answer to the question insofar as we make a decision at t2. At least on one interpretation, this is essentially Joseph Raz’s view (Raz 1999, pp. 66–73). Just as a legitimate state has a kind of normative power in virtue of which it can potentially change the answer to the question of whether I ought to drive my car with a blood alcohol reading fractionally over 0.05 by making a law forbidding driving with a blood alcohol reading over 0.05, so too we (sufficiently competent) practical reasoners have a kind of normative power in virtue of which we can potentially change the answer to the question of whether we ought or have reason to buy a particular house by deciding to do so.
Unlike the other versions that we have considered so far, this version is able to account for both the correct responsiveness aspect and the authoritative aspect. It can account for the correct responsiveness aspect since it holds that there is an
answer to the question of what we ought or have reason to do that we have the capacity to discover and base our decisions on by engaging in practical reason. And it can account for the authoritative aspect since it holds that we have the ability to determine and not merely discover the answer to the question of what we ought or have reason to do by deciding to do this or that.
The problem is that the view implies a highly objectionable kind of normative bootstrapping (Bratman 1981; Broome 2001; Brunero 2007). Suppose that I decide to poison a rival for an academic job. By doing so, I make it the case that I now ought or have reason to poison her. But it is simply not plausible that we have the ability to determine normative reality in this way.
3. The Practical Conception
I have argued that we have good reason to be skeptical about the capacity of the theoretical conception to accommodate both of the two core aspects of practical reason. But there is an alternative to the theoretical conception. This holds that practical reasoning is practical, not simply in virtue of its practical subject matter, but in virtue of involving a distinctively practical kind of question: the question of what to do (Hieronymi 2009; Southwood 2016a; 2018b). Call this the practical conception.
Notice that the question of what to do is distinct from the question of what one will do. Suppose that one is offered a job that is so vastly superior in every relevant respect to all available alternatives that one knows that one will take it. Under these circumstances, one might very well consider (albeit fairly briefly) whether to accept it without wondering whether one will accept it. Conversely, suppose that we are waiting nervously in line for an elevator that will take us up to a bungee-jumping platform. Under these circumstances, we may perhaps wonder how we will act when we get to the top. Faced with the daunting prospect of a 50-meter drop into a narrow chasm, supported by nothing more than a flimsy piece of rubber, what will we do? Will we heroically launch ourselves headfirst over the edge? Or will we chicken out? In asking ourselves this question, we are not asking ourselves whether to jump.
More controversially, I suggest that the question of what to do is also distinct from the question of what one ought or has reason to do in at least the following sense (Hieronymi 2009; Southwood 2016a; 2018b). Suppose that I am asking myself the question of what to do for my summer holiday—say, whether to go to Sardinia or Corsica or Sicily. In asking myself this question, I needn’t be asking myself the question of whether I ought to go to Sardinia or Corsica or Sicily. Perhaps I’ve already resolved to my satisfaction the question of whether I ought to go to Sardinia or Corsica or Sicily in favor of Sardinia. (The love of my life is there and I recognize that pursuing her is more important than anything else.) Yet, despite this, I retain a certain lingering inclination to visit the volcanoes of Sicily or the beaches of Corsica on my own, and so the question of what to do remains a live one. Or perhaps I am a consistent normative nihilist who believes that there are no truths about what one
ought to do, and who therefore refrains from having any beliefs about, or even interrogating the question of, what one ought to do (Southwood 2016a).
A possible worry concerning the practical conception is that its alleged distinctness from the theoretical conception might seem to rest on a contingent psychological fact about the existence of akrasia. Consider a world different from our own in which we were psychologically constituted in such a way that whenever we formed the belief that we should X, we automatically formed the intention to X. Doesn’t the possibility of such a world call into question the idea that the domain of practical reason necessarily concerns a practical rather than a theoretical question? No. At most, it implies that normative theoretical reasoning in the non-akratic world always also counts as practical reasoning. But it does not follow that practical reasoning in the non-akratic world is exhausted by (still less to be identified with) such reasoning. Consider again the consistent normative nihilist in the non-akratic world who undertakes to settle the question of what to do for her summer holiday and, say, settles the question by forming the intention to go to Corsica. The practical conception holds that this may count as practical reasoning even though she does not engage in any normative theoretical reasoning. (This is, of course, consistent with its being true that if she were to form the belief that, say, she ought to go to Corsica (thereby ceasing to be consistent in her normative nihilism), then she would also automatically form the intention to go to Corsica.)³
The practical conception is obviously well placed to vindicate the authoritative aspect of practical reason. Suppose that we are reasoning about whether we are to put poison in a rival’s soup. And suppose that we decide to do so. The practical conception holds that we have the capacity to determine the answer to the question of what to do. That is, by deciding to poison one’s rival, we make it the case that the question of whether to poison one’s rival is now settled in favor of doing so. We make it the case that the answer to the question is now to poison one’s rival rather than not to poison her. Moreover, unlike Raz’s view, the practical conception avoids objectionable bootstrapping. To be sure, like Raz’s view, it holds that we have a power to determine the answer to the question that confronts us in practical reason. So, the practical conception is structurally analogous to Raz’s view. However, the latter holds that we have a power to determine the answer to the question of what we ought or have reason to do. And, as we have seen, it is simply not plausible that we have the power to make it the case that, say, we ought or have reason to poison a rival by deciding to do so. By contrast, the practical conception holds that we have a power to determine the answer to the question of what to do. Thus, we have the power to make it the case that poisoning one’s rival is the answer to the question of what to do. It does not follow from this that one ought or has reason to poison one’s rival.
³ I am very grateful to Brendan Balcerak Jackson for raising this important issue.
How about the correct responsiveness aspect? It might seem that in this respect the practical conception is doomed to fail. That’s because there is no fact of the matter about what to do.⁴ If there were a fact of the matter about what to do, then the practical conception would collapse into a version of the theoretical conception according to which practical reasoning involves exercising a capacity to settle a theoretical question about what actions have the property of being “what to do.” There is no fact of the matter about what to do. Hence, the question of practical reason is not one to which we are capable of correctly responding, on the supposition that the question of practical reason is the question of what to do.
But this is a non sequitur. It does not follow from the fact that (a) there is no fact of the matter about what to do that (b) there is no fact of the matter about the correct answer to the question of what to do. To be sure, it is incumbent on a proponent of the practical conception to say why we should think that there is a fact of the matter about the correct answer to the question of what to do, and what it is in virtue of which an answer to the question of what to do is correct.
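It may help to display the shape of the objection, and of the reply, schematically (the notation here is mine, introduced purely for illustration). The objection moves from
\[
\text{(a)}\quad \neg \exists \varphi \; \mathrm{Fact}(\varphi \text{ is what to do})
\]
to
\[
\text{(b)}\quad \neg \exists \varphi \; \mathrm{Fact}(\varphi \text{ is the correct answer to the question of what to do}).
\]
The step from (a) to (b) is what is being rejected: an answer to the question of what to do might be correct in virtue of something other than there being a fact about “what to do” for it to report.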
4. The Attitude-Dependence Thesis
My suggestion is that truths about the correct answer to the question of what to do depend exclusively on the agent’s attitudes, in the sense that correct answers are made correct by these attitudes alone.⁵ Call this the Attitude-Dependence Thesis. Here is the argument. First, it is clear that we have the capacity to make correct decisions—that is, decisions that are correct on the basis of the considerations that make them correct—a capacity that extends to a variety of situations that are relevantly defective. We do not lose the capacity to make correct decisions by being in the relevantly defective situations. Indeed, part of the function of decision-making, I take it, is precisely to help us to navigate our way through such situations. It may well be that our situation can get sufficiently defective in the relevant respects that we simply no longer possess the capacity to discover what we ought or have reason to do. But the capacity to make decisions is resilient to such changes in our situation. Second, as we shall see, the only considerations such that we retain the capacity to make correct decisions on the basis of those considerations in the relevantly defective situations are our actual attitudes. We are not able, in the relevant situations, to make correct decisions on the basis of considerations beyond our actual attitudes: say, on the basis
⁴ It might be said that talk of a “question” of what to do is therefore misleading inasmuch as questions are propositions in an interrogative mode. If so, then “practical questions” are not questions, strictly speaking. I shall continue to speak of the “question” of what to do, but purely as a matter of convenience. I am grateful to Daniel Nolan for discussion here.
⁵ Notice that the Attitude-Dependence Thesis is a thesis specifically about the considerations that make an answer to the question of what to do correct. As I have argued elsewhere, truths about the correct answer to the question of what to do “depend” on facts about what agents “can” do in a different sense (see Southwood 2016a). But this does not mean that such facts make answers to the question of what to do correct.
OUP CORRECTED PROOF – FINAL, 24/4/2019, SPi
of relevant matters of fact, or (available) evidence, or relevant normative truths. Third, the capacity to make decisions just is the capacity to settle the question of what to do. It follows that there are correct answers to the question of what to do that are made correct by our actual attitudes alone.
4.1. Non-normative beliefs

Consider the following much-discussed case from Frank Jackson:

Jill is a physician who has to decide on the correct treatment for her patient, John, who has a minor but not trivial skin complaint. She has three drugs to choose from: drug A, drug B, and drug C. Careful consideration of the literature has led her to the following opinions. Drug A is very likely to relieve the condition but will not completely cure it. One of drugs B and C will completely cure the skin condition; the other though will kill the patient, and there is no way that she can tell which of the two is the perfect cure and which the killer drug. (Jackson 1991, pp. 462–3)
Suppose that, as a matter of fact, drug B will completely cure the skin condition and drug C will kill the patient. What is the correct decision in this situation (call it S1)? There is one possibility that we can discount: that, because Jill will remain ignorant of some relevant matter of fact, there is no correct decision in S1. This would implausibly circumscribe the domain of our capacity for decision-making. There are numerous relevant matters of fact (especially about the future) of which we will remain ignorant, and this is often part of the reason why we feel compelled to engage in decision-making.

This leaves two possibilities. One is that the correct decision in S1 is to prescribe drug B rather than drug A—because prescribing drug B will completely cure the skin condition of her patient. Such a verdict cannot be squared with the thesis that we must be able to respond correctly to the considerations that determine the correctness of a correct decision. Given Jackson's description of the case, the only way of arriving at this verdict would be to hold that

(1) The correctness of correct decisions depends on relevant matters of fact.
But (1) cannot be right. In order to respond correctly to some consideration that determines the correctness of a correct decision, we must decide correctly on the basis of that consideration. Jill is and will remain ignorant of the fact that prescribing drug B will completely cure the skin condition of her patient. Given her ignorance, she cannot decide to prescribe drug B on the basis of the fact that prescribing drug B will completely cure the skin condition of her patient. That is, from the deliberative standpoint, she cannot decide to prescribe drug B because drug B will completely cure the skin condition.

The only remaining possibility is that the correct decision is to prescribe drug A rather than drug B. How might we explain this? One perhaps initially tempting thought is that

(2) The correctness of correct decisions depends on our actual (available) evidence for certain non-normative claims.

This nicely accounts for the fact that the correct decision in S1 is for Jill to prescribe drug A rather than drug B. Although, as a matter of fact, drug B will completely cure the skin condition, Jill has evidence that prescribing drug B has a 50 percent chance of killing her patient, whereas drug A is very likely to relieve the condition and carries no such risk.

Consider, however, the following modification of the case of Jill. Suppose that Jill acquires some new evidence that reveals that drug B is very likely to completely cure the skin condition (and that, even if it doesn't, it will have no adverse effects). Let's say that the new evidence is contained in an article that she has printed out and has lying on her desk within easy reach as she conducts the examination of her patient. But, alas, she will not get around to reading the article. So she won't take account of her available evidence and hence will continue to believe that prescribing drug B has a 50 percent chance of being fatal to her patient. What is the correct decision in this modified situation (call it S2)?

Again, I shall assume that we can sensibly rule out the possibility that, because she will fail to avail herself of relevant evidence, there is no correct decision in S2. This leaves two possibilities. One possibility is that the correct decision in S2 is to prescribe drug B, rather than drug A. This is what (2) suggests. But, again, such a verdict is inconsistent with the thesis that we must be able to respond correctly to the considerations that determine the correctness of a correct decision. If Jill's situation is one in which she won't take account of the available evidence that drug B is very likely to completely cure the skin condition, then she can hardly make the correct decision because drug B is very likely to completely cure the skin condition. So (2) is false.

This only leaves the possibility that, given Jill's continuing to believe—albeit against the (available) evidence—that prescribing drug B has a 50 percent chance of being fatal to her patient, the correct decision in S2 is for Jill to prescribe drug A, rather than drug B. We can't explain this by appealing to (2). Rather, it would seem that we must appeal to the following principle:

(3) The correctness of correct decisions depends on our actual non-normative beliefs.

Like (2), (3) suffices to explain the fact that the correct decision is to prescribe drug A, rather than drug B, in the initial situation, S1. Not only does Jill have available evidence in S1 that prescribing drug B has a 50 percent chance of killing her patient; she also believes it. But, unlike (2), (3) suffices to explain the fact that the correct decision is to prescribe drug A rather than drug B in situation S2. For even though it's true that in S2 Jill has sufficient available evidence that prescribing drug B is very likely to completely cure the skin condition, her situation is such that she won't take account of this evidence. She will continue to believe that prescribing drug B has a 50 percent chance of killing her patient.

Finally, whereas we cannot make correct decisions on the basis of relevant matters of fact of which we will remain ignorant, or relevant evidence of which we will fail to take account, it seems that we can make correct decisions on the basis of our non-normative beliefs, even when those beliefs are false or not supported by the available evidence. Thus, if Jill believes that drug B will completely cure her patient, she clearly can decide to prescribe drug B because it will completely cure her patient.
4.2. Normative beliefs

Is this as far as the attitude-dependence of truths about the correctness of correct decisions extends? It might be argued, for example, that

(4) The correctness of correct decisions depends exclusively on what it would be true that we ought or have reason to do given the assumption that our actual non-normative beliefs are true (cf. Parfit 2001; 2011).

But consider a further modification of the case of Jill. Suppose that Jill has some very disturbing normative views about the role that doctors ought to play. She does not believe that doctors ought to do what is likely to improve their patients' health. Rather, she believes that doctors ought to do whatever is likely to improve the overall gene pool. This requires, for example, that doctors prescribe drugs that will kill their patients whenever doing so will help to eradicate hereditary skin conditions of the kind for which her patient is currently seeking treatment. So, Jill believes that she ought to prescribe drug B, rather than drug A. Given Jill's situation (call it S3), what is the correct decision?

(4) implies that the correct decision in S3 is just the same as the correct decision in S1 and S2, namely to prescribe drug A, rather than drug B. But this cannot be right. For there is no truth to which Jill can respond correctly. Jill cannot make the correct decision about which drug to prescribe on the basis of the fact that doctors ought to do what is likely to improve their patients' health, because she does not believe that doctors ought to do what is likely to improve their patients' health. Again, I shall assume that we can sensibly disregard the possibility that, because Jill does not possess the true belief that doctors ought to do what is likely to improve their patients' health, there is no correct decision. By elimination, then, it seems that we must conclude that, given Jill's noxious belief that she ought to kill rather than cure her patient, the correct decision in S3 is to prescribe drug B, rather than drug A. We can't explain this by appealing to (4). Rather, it would seem that we must appeal to the following principle:

(5) The correctness of correct decisions depends on our actual beliefs about what we ought or have reason to do.
Unlike (4), (5) suffices to explain why the correct decision in S3 is to prescribe drug B, rather than drug A. And, unlike (4), (5) is compatible with the thesis that we must be able to respond correctly to the considerations that determine the correctness of a correct decision. Given that Jill believes that she ought to kill rather than cure her patient, it follows that, from the deliberative standpoint, she is able to decide to prescribe drug B because she ought to kill rather than cure her patient—even if, as a matter of fact, her belief is false.
4.3. Intentions

It might be thought that this is all that truths about the correct answer to the question of what to do depend on, i.e.

(6) The correctness of correct decisions depends exclusively on our actual non-normative beliefs and our actual beliefs about what we ought or have reason to do.

I shall argue that (6) is false. That's because

(7) The correctness of correct decisions depends on our other actual intentions.

On a certain view of intentions, (7) simply follows from (5). Recall that (5) says that

(5) The correctness of correct decisions depends on our actual beliefs about what we ought or have reason to do.

This is the view that

(8) Intentions are or entail beliefs about what we ought or have reason to do (Schroeder 2009, p. 237; Scanlon 2007).

But this is an implausible view of intentions. First, it seems to imply that we have a thoroughly misguided conception of ourselves as possessing the power to bootstrap not just reasons but oughts into existence. Second, it entails the impossibility of akratic intentions: intentions to do things that we believe we ought not to do.⁶ When Huck forms the intention to help Jim escape, despite believing that he ought to turn him in, the view implies that Huck must have changed his mind about what he ought to do. But if he has changed his mind, why does he feel so bad about carrying out his intention?

Still, there is compelling reason to think that the correctness of correct decisions depends on our intentions. Consider situations in which we have made a decision between two or more options where we take the reasons to underdetermine which option to take: say, because we take the options to be equally good (going to Italy or France on holiday, for instance). Suppose, then, that you have made your decision in favor of Italy, and here you are in the travel section of the bookshop holding in your hand enticing guidebooks to Italy and France but with only enough money to buy one or the other. What is the correct further decision? Is the correct decision to buy the guidebook to Italy or the guidebook to France?

We can set aside the possibility that there is no decision that is correct. That would, again, amount to circumscribing the domain of decision-making in a highly unwarranted way. I take it that we can also set aside the possibility that the correct decision is to buy the guidebook to France rather than the guidebook to Italy. This just leaves the possibility that the correct decision is to buy the guidebook to Italy rather than the guidebook to France. Notice that (5) cannot explain this. That's because, according to what you believe, the reasons underdetermine whether you ought to go to Italy or France. A fortiori, they underdetermine, in your view, which guidebook you ought to buy. So you cannot decide, from the deliberative standpoint, to buy the guidebook to Italy rather than the guidebook to France because this is what you ought or have reason to do. In order to explain this verdict, it seems that we need to appeal to (7). Recall that (7) holds that

(7) The correctness of correct decisions depends on our other actual intentions.

Unlike (5), (7) can explain why the correct decision is to buy the guidebook to Italy rather than the guidebook to France. Moreover, unlike (5), (7) is compatible with the thesis that we must be able to respond correctly to the considerations that determine the correctness of a correct decision. If you have decided to go to Italy rather than France on holiday, then you have made it the case that the correct decision is to go to Italy rather than France on holiday. And it seems at least plausible that, from the deliberative point of view, you can decide correctly to buy the guidebook to Italy rather than the guidebook to France because the correct decision is to go to Italy rather than to France.

⁶ This isn't to say that it entails the impossibility of akrasia. It would only do so if it held that beliefs about what we ought to do entail intentions as well as vice versa.
4.4. Other attitudes?

I have suggested that truths about the correctness of correct decisions depend on our non-normative beliefs, our normative beliefs, and our intentions. It might be thought that these are the only attitudes that we need to adduce in order to account for our capacity to make correct decisions on the basis of the considerations that make them correct. This suggests that

(9) The correctness of correct decisions depends exclusively on our actual non-normative beliefs, our actual beliefs about what we ought or have reason to do, and our actual intentions.

I suspect this is not right. Consider credences. One way to argue for adding credences to the list of attitudes on which truths about the correctness of correct decisions depend is to appeal to a different class of defective situations, namely situations involving intractable normative uncertainty (see Sepielli 2009). Suppose that one is making a decision about whether to end a friendship. Suppose that one neither believes that one ought to end the friendship nor believes that one ought not to end the friendship, but that one has slightly higher credence in the proposition that one ought not to end the friendship than in the proposition that one ought to end the friendship. Suppose, moreover, that there are no relevant past intentions that we have formed that bear on the question of whether to end the friendship. If, as seems plausible, our capacity to engage in correct decision-making extends to situations of this kind, and it is not plausible to claim that the correct decision is to end the friendship, then the only remaining possibility is that the correct decision is not to end the friendship. But (9) cannot account for this. By contrast, consider

(10) The correctness of correct decisions depends, in part, on credences in non-normative and/or normative propositions.

(10) can account for it, and it is not obvious that any alternative can. If this is right, then it seems that we must add credences to the list of attitudes that we need to adduce in order to account for our continued capacity to engage in correct decision-making across relevantly defective situations.

Perhaps we will need to go even further. It might be argued that there are yet other defective situations in which we lack relevant normative beliefs and intentions and yet where we retain the capacity to make correct decisions; and that to explain this we need to adduce other attitudes too: desires, preferences, aversions, hopes, wishes, fears, likings, dislikings, and so on.

In any event, however sparse or expansive our list ends up being, the important point is that it seems extremely plausible to suppose that there are truths about correct decisions and that they are made true by our actual attitudes alone. Other kinds of considerations—matters of fact, truths about what we ought or have reason to do, truths about our evidence—are just the wrong kinds of considerations inasmuch as it is not the case that we can make correct decisions on the basis of them. Since the capacity to make decisions just is the capacity to settle the question of what to do, it follows that there are correct answers to the question of what to do and that they too are made true by our actual attitudes alone. So we have good reason to suppose that the practical conception is capable of vindicating the correct responsiveness aspect of practical reason after all.
5. The Upshot

I have argued that a certain version of the practical conception seems to offer a way of accommodating two core aspects of practical reason. This involves the existence of a class of truths—truths about the correct answer to the question of what to do—whose truth depends exclusively on agents' actual attitudes. This is interesting in its own right. I shall conclude by suggesting that it also potentially sheds interesting light on some important controversies in meta-ethics and the philosophy of normativity.
5.1. Normative pluralism

First, consider the idea that there is a plurality of distinct oughts. One common kind of normative pluralism holds that there are distinct objective and subjective oughts. Whereas objective (or fact-dependent) oughts depend simply on the facts, subjective (or attitude-dependent) oughts are somehow dependent on our attitudes. The idea that there are distinct objective and subjective oughts is often thought to be necessary to do justice to our intuitive reactions in cases like Jackson's case of Dr. Jill, where we are tempted to say both that there is a sense in which it is true that Jill ought to prescribe drug A and another sense in which it is false. The thought is that the claim that "Jill ought to prescribe drug A" is true insofar as it involves the subjective ought and false insofar as it involves the objective ought. On the other hand, the idea that there are distinct oughts of this kind also faces a familiar objection: that it seems to lead to a troubling kind of normative incommensurability (Kolodny 2005). What ought we to do when claims about what we objectively and subjectively ought to do conflict?

One thing to be said here is that the practical conception arguably suggests a way of making sense of our apparently conflicting intuitions without adducing distinct objective and subjective oughts. As we have seen, the practical conception presupposes that there is a class of truths about the correct answer to the question of what to do that are, in effect, wholly practice-dependent in character. Perhaps our intuitive verdicts in the case of Jill are tracking, not the fact that there are distinct oughts that come apart from one another, but rather the fact that the correct answer to the question of what to do comes apart from what Jill ought to do. In other words, perhaps what we should deny is, not that her deciding to prescribe drug A would be a correct answer to the question of what to do, but that Jill ought to prescribe drug A. This has the virtue of avoiding the prospect of normative incommensurability.

But the practical conception also potentially points to a different kind of normative pluralism that is not obviously vulnerable to the charge of objectionable normative incommensurability. This holds that there are distinct oughts that differ in virtue of the different roles they are supposed to be capable of playing with regard to distinct normative practices: a deliberative ought that is supposed to be capable of playing a special role with regard to our deliberative practices, an evaluative ought that is supposed to be capable of playing a special role with regard to our evaluative practices, and so on (cf. Southwood 2016a; 2016b; Schroeder 2011). On this view, truths about the deliberative ought entail corresponding truths about the correct answer to the question of what to do. Truths about the evaluative ought entail certain corresponding truths about correct evaluation. Such a view suggests that perhaps our intuitive verdicts in the case of Jill are tracking, say, the fact that what Jill deliberatively ought to do comes apart from what she evaluatively ought to do. Interestingly, this view does not appear to entail an objectionable form of normative incommensurability. That's because, relative to a normative practice (such as deliberation or evaluation), there is a single privileged ought (rather than a plurality of oughts that may issue in conflicting verdicts).
5.2. The normativity of rationality

Second, consider the vexed issue of the normativity of rationality (Kolodny 2005; Southwood 2008; Kiesewetter 2013). The issue is puzzling because structural requirements of rationality appear to be normative in a way that goes beyond the kind of merely formal or minimal normativity possessed by, say, the rules of tiddlywinks. At the same time, such requirements do not seem to be robustly or substantively normative; they do not seem to entail corresponding claims about what we ought or have reason to do or intend. In what does their normativity consist?

Once again, the practical conception of practical reason points to an intriguing "third way." Perhaps (practical) requirements of rationality are normative inasmuch as they are or entail corresponding truths about the correct answer to the question of what to do (see Southwood 2018a; 2018b). This potentially allows us to vindicate the idea that requirements of rationality purport to be not-merely-formally normative, since merely formally normative phenomena such as the rules of tiddlywinks clearly do not entail corresponding truths about the correct answer to the question of what to do. And it also potentially allows us to vindicate the idea that requirements of rationality are not robustly normative. That's because, as we have seen, truths about the correct answer to the question of what to do don't entail corresponding truths about what we ought or have reason to do or intend.
5.3. A novel form of constructivism

Finally, the practical conception also points to the possibility of a distinctive form of meta-ethical constructivism. Constructivism is the view that truths about what we ought or have reason to do are somehow to be explained in terms of certain standards of correct reasoning (Southwood 2018a). However, a difficult problem for constructivism is to say what these standards are supposed to involve. If standards of correct reasoning are normative in virtue of entailing reasons, then constructivism will be palpably circular. If the standards are reducible to non-normative truths, then constructivism won't represent a distinct alternative to meta-ethical naturalism. Moreover, constructivists' main existing way of trying to avoid this objection—recourse to some kind of constitutivism—doesn't work (see Enoch 2006; Southwood 2018a). The practical conception of practical reason suggests an interesting alternative: that standards of correct practical reasoning are to be understood in terms of truths about the correct answer to the question of what to do; and, hence, a form of constructivism that holds that truths about what we ought or have reason to do can somehow be reduced to truths about the correct answer to the question of what to do (Southwood 2018b). Of course, no straightforward reduction will be plausible given the Attitude-Dependence Thesis. But the reduction needn't be straightforward. For what it's worth, my view is that the most promising kind of constructivism will be contractualist: that is, truths about what we ought to do are ultimately to be explained, not in terms of truths about the thing for any particular individual to do, but in terms of the thing for us to do insofar as we confront the task of living together (see Southwood 2010). Working out the details of such an account and seeing whether it can be made to work are matters for another occasion.
References

Audi, Robert. 2006. Practical Reasoning and Practical Decision. Abingdon: Routledge.
Bratman, Michael. 1981. "Intention and Means-End Reasoning," The Philosophical Review, 90, 252–65.
Broome, John. 2001. "Are Intentions Reasons? And How Should We Cope with Incommensurable Values?" In Practical Rationality and Preference: Essays for David Gauthier, ed. C. Morris and A. Ripstein. Cambridge: Cambridge University Press, 98–120.
Broome, John. 2013. Rationality Through Reasoning. Chichester: Wiley-Blackwell.
Brunero, John. 2007. "Are Intentions Reasons?" Pacific Philosophical Quarterly, 88, 424–44.
Chang, Ruth. 2013. "Grounding Practical Normativity: Going Hybrid," Philosophical Studies, 164, 163–87.
Cullity, Garrett. 2008. "Decisions, Reasons, and Rationality," Ethics, 119 (1), 57–95.
Dunn, Robert. 2006. Values and the Reflective Point of View. Aldershot: Ashgate.
Enoch, David. 2006. "Agency, Shmagency: Why Normativity Won't Come from What Is Constitutive of Action," The Philosophical Review, 115, 169–98.
Gibbard, Allan. 2003. Thinking How to Live. Cambridge, MA: Harvard University Press.
Harman, Gilbert. 1997. "Practical Reasoning." In The Philosophy of Action, ed. Alfred Mele. Oxford: Oxford University Press, 149–77.
Hieronymi, Pamela. 2009. "The Will as Reason," Philosophical Perspectives, 23, 201–20.
Jackson, Frank. 1991. "Decision-Theoretic Consequentialism and the Nearest and Dearest Objection," Ethics, 101, 461–82.
Kiesewetter, Benjamin. 2013. The Normativity of Rationality. PhD dissertation. Berlin: Humboldt University.
Kolodny, Niko. 2005. "Why Be Rational?" Mind, 114, 509–63.
Parfit, Derek. 2001. "Rationality and Reasons." In Exploring Practical Philosophy: From Action to Values, ed. Dan Egonsson, Jonas Josefsson, Björn Petterson, and Toni Rønnow-Rasmussen. Aldershot: Ashgate, 17–39.
Parfit, Derek. 2011. On What Matters. Oxford: Oxford University Press.
Raz, Joseph. 1999. Engaging Reason: On the Theory of Value and Action. Oxford: Oxford University Press.
Scanlon, T. M. 2007. "Structural Irrationality." In Common Minds: Themes from the Philosophy of Philip Pettit, ed. Geoffrey Brennan, Robert E. Goodin, Frank Jackson, and Michael Smith. Oxford: Oxford University Press, 84–103.
Schroeder, Mark. 2009. "Means-Ends Coherence, Stringency, and Subjective Reasons," Philosophical Studies, 143, 223–48.
Schroeder, Mark. 2011. "Ought, Agents, and Actions," The Philosophical Review, 120, 1–41.
Sepielli, Andrew. 2009. "What to Do when You Don't Know What to Do," Oxford Studies in Metaethics, 4, 5–28.
Southwood, Nicholas. 2008. "Vindicating the Normativity of Rationality," Ethics, 119, 9–30.
Southwood, Nicholas. 2010. Contractualism and the Foundations of Morality. Oxford: Oxford University Press.
Southwood, Nicholas. 2016a. "'The Thing to Do' Implies 'Can'," Noûs, 50, 61–72.
Southwood, Nicholas. 2016b. "Does 'Ought' Imply 'Feasible'?" Philosophy & Public Affairs, 44, 7–45.
Southwood, Nicholas. 2016c. "The Motivation Question," Philosophical Studies, 173, 3413–30.
Southwood, Nicholas. 2018a. "Constructivism About Reasons." In The Oxford Handbook of Reasons and Normativity, ed. D. Star. Oxford: Oxford University Press.
Southwood, Nicholas. 2018b. "Constructivism and the Normativity of Practical Reason." In The Many Moral Rationalisms, ed. K. Jones and F. Schroeter. Oxford: Oxford University Press.
Velleman, David. 2000. The Possibility of Practical Reason. Oxford: Clarendon Press.
Wallace, R. Jay. 2013. "Practical Reason." In The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta.
6
Is Reasoning a Form of Agency?
Mark Richard
1. Is Reasoning Something the Reasoner Does?

Is reasoning something the reasoner does? We certainly hold the (human) reasoner responsible for her conclusions, criticizing or praising her as irrational or rational. Responsibility seems to be a mark of agency. And if we say that to reason is to come to believe some p because one takes some considerations as support for the belief, again it will seem we ought to say that reasoning is something that she who reasons does. However, even leaving infants and non-human animals to the side, there are many things we call reasoning that appear more or less subpersonal. In everyday abductive and inductive inferences, "creative leaps," and even a good deal of what one reconstructs as deduction, the agent (that is, the person who ends up with a belief) seems in some important sense outside of the process: I find myself thinking something, often on reflection at a loss to say just how I got to the belief. And in many such cases, even when I can enumerate considerations that support the conclusion drawn, it can seem gratuitous to say that the conclusion was drawn because I took the considerations to support it.

One would like to be able to say both that a hallmark of reasoning is that it is something for which the agent is responsible, and that cases of adults coming to have beliefs that most of us are inclined to think obviously deserve the label "reasoning" count as such. But how can we say both of these when it seems that so much mundane reasoning is not under our control?

One can be responsible for things that one does not directly do. The Under Assistant Vice-President for Quality Control is responsible for what the people on the assembly line do, but of course she is not down on the floor assembling the widgets. Why shouldn't my relation to much of my reasoning be somewhat like the VP's relation to widget assembly? Suppose I move abductively from the light won't go on to I probably pulled the wire out of the fixture changing the bulb. Some process of which I am not aware occurs. It involves mechanisms that typically lead to my being conscious of accepting a claim. I do not observe them; they are quick, more or less automatic, and not demanding of attention. Once the mechanisms do their thing, the conclusion is, as they say, sitting in the belief box. But given a putative implication, I am not forced to mutely endorse it. If I'm aware that I think q and that it was thinking p that led to this, I can, if it seems worth the effort, try to consciously check to see if the implication in fact holds. And once I do that, I can on the basis of my review continue to accept the implicatum, reject the premise, or even suspend judgment on the whole shebang. In this sense, it is up to me as to whether I preserve the belief. It thus makes sense to hold me responsible for the result of the process.

I say that something like this story characterizes a great deal of adult human inference. Indeed, it is tempting to say that all inference—at least adult inference in which we are conscious of making an inference—is like this: mechanisms of which the reasoner is not aware delivering conclusions that the reasoner then has the option of endorsing or dismissing. Given that I have the concept of one thing following from another, I will (be in a position to) interpret the appearance of my new belief as (a sign of) the conclusion's following from the premises. Indeed, if I have the concept of consequence, I will often "take" the belief I have formed to be a consequence of the thought I had that was the "input" to that process of which I had and have no conscious awareness. In these cases, inference is accompanied by the agent taking her premises to support her conclusion. But this taking is a reflex of the inference itself. Here, it is not necessary, in order that inference occur, that the agent come to believe what is inferred because she takes her premises to support it.

I've been arguing that the fact that we hold the reasoner responsible for the product of her inference—we criticize her for a belief that is unwarranted, for example—doesn't imply that in making the inference the reasoner exercises a (particularly interesting) form of agency. Now, it might be said that we hold she who reasons responsible not just for the product of her inference, but for the process itself.¹ When a student writes a paper that argues invalidly to a true conclusion, the student gets no credit for having blundered onto the truth; he loses credit for having blundered onto the truth. But, it might be said, it makes no sense to hold someone responsible for a process if they aren't the, or at least an, agent of the process.

Let us grant for the moment that when there is inference, both its product and the process itself are subject to normative evaluation. What exactly does this show? We hold adults responsible for such things as implicit bias. To hold someone responsible for implicit bias is not just to hold them responsible for whatever beliefs they end up with as a result of the underlying bias. It is to hold the adult responsible for the mechanisms that generate those beliefs, in the sense that we think that if those mechanisms deliver faulty beliefs, then the adult ought to try to alter those mechanisms if he can. (And if he cannot, he ought to be vigilant for those mechanisms' effects.) There are obviously methods of belief fixation for which we hold people responsible even when we take the operation of those methods to be in important senses non-agential: the beliefs that implicit bias produces often enough are ones that the bias imposes on the believer. It does not follow, from the fact that we hold an agent responsible for a process, that she is in any strong sense the agent of the process: she may be responsible for the process in the sense that she is under an obligation to try to correct it, even if she does not have conscious control, direct awareness, or even much understanding of it. Of course, something quite similar is true of the Under Assistant VP in charge of quality control.

Thanks to Paul Boghossian, Matt Boyle, and Susanna Siegel for comments and several discussions of the topic of this chapter; also to Brendan Balcerak Jackson, Magdalena Balcerak Jackson, and Eric Mandelbaum for comments. Versions of this chapter were read at a conference on reasoning at the University of Konstanz in 2014 and the St. Louis Conference on Reasons and Reasoning in 2015; I thank the audiences at these events for their comments as well.

¹ Thanks to Susanna Siegel for making it clear to me that this is what those who think reasoning involves a strong sort of agency presumably have in mind.
2. Inferring p versus Taking p to Follow

Is it necessary that an adult take—that is, believe—the conclusion of an inference she makes to follow from (or stand in some other epistemically justifying relation to) its premises? Is taking even a part of normal adult human inference? It's hard to see why we should think that if I infer q from a set of premises I must take it to follow from all the premises. My inference about the light bulb presumably made use of many premises, including some standing beliefs. Some of them, one thinks, I need never have articulated; some of them I might not be able, without considerable effort and tutelage, to articulate. If taking is something that is straightforwardly accessible to consciousness, this indicates that inferring q from some p's doesn't require taking q to follow from them.

More significantly, there are cases that certainly seem to be inferences in which I simply don't know what my premises were. I know Joe and Jerome; I see them at conventions, singly and in pairs, sometimes with their significant others, sometimes just with each other. One day it simply comes to me: they are sleeping together. I could not say what bits of evidence buried in memory led me to this conclusion, but I—well, as one sometimes says, I just know. Perhaps I could by dwelling on the matter at least conjecture as to what led me to the conclusion. But I may simply be unable to. Granted, not every case like this need be a case of inference. But one doesn't want to say that no such case is. So if taking is something that is at least in principle accessible to consciousness, one thinks that in some such cases we will have inference without taking.

I said I was tempted to say that all adult human inference was the result of quick, more or less automatic processes that deliver beliefs that we (usually) can review and reject. But if we are tempted to say that this is what adult human inference is,
shouldn’t we be tempted to say that inference really hasn’t much to do with taking a conclusion to follow from premises? Myself, I’m tempted to say it. Lead us not, it will be pled, into temptation. Consider the case where, lounging in bed with you and hearing the patter of rain on the roof, I: think it is raining; reflect but if it is raining, I ought to wear galoshes when I leave; and then find myself thinking so I should wear galoshes; I wonder where they are. In this case—and, it will be pled, surely such a case is a paradigm of inference—it is I who is doing all the work: I consider the premises, I see that they imply the conclusion, I come to believe the conclusion because of my taking the one to support the other. That, after all, is the point of my thinking the so. In this sort of case, at least, inference is a transition from premises to conclusion that is brought about by taking the latter to be supported by the former. And in so far as this sort of example is paradigmatic, surely we have reason to say that the norm is that reasoning is a transition produced because one takes one thing to follow from others.² But not very much follows from the claim that the example is a paradigm of reasoning. Jack’s driving from Lowell to LA is a paradigm of a cross-country trip; it is brought about by a belief that LA is the place to be. That doesn’t mean it’s essential to such journeys that they are produced by such beliefs. From the fact that paradigm Fs are Gs, it just doesn’t follow that Fs are usually Gs or that the normal ones are. Furthermore, one has to wonder whether in normal examples of inferring my conclusion from my premises—even in the example at hand—the inference occurs because I take the conclusion to be supported by the premises. What is obvious in the example is that: (a) I am aware of thinking p; (b) I am then aware of thinking that if p, then q; (c) I am then aware of thinking that q follows and of accepting q. It does not follow from the fact that this is what happened that the second part of (c) occurred because the first part did. The acceptance of q, after all, could have been brought about by underlying processes that were fast, automatic, and below conscious perusal; the thought that q followed from the rest might be simply a matter of the my consciously endorsing something that had already occurred. Whether or not we think this is the right thing to say about the example under discussion, there is a more significant point to be made about it: the example of you and me and rain on the roof is in important ways abnormal. Normally, when I engage in the sort of reasoning that occurs in this example I have no conscious awareness of the premises from which I undoubtedly reason: I hear the patter of the rain and find
² Paul Boghossian makes much of examples like these (in, for example, Boghossian 2014) as a prolegomenon to characterizing inference as requiring some kind of taking to support. He does not endorse the (transparently bad) sort of argument in this paragraph. But it seems fair to say that he does presuppose that what such examples raise to salience is essential to inference.
OUP CORRECTED PROOF – FINAL, 23/4/2019, SPi
?
myself thinking merde, je dois de porter galoches. In such examples there isn’t anything we are aware of that corresponds to an event that is both a taking of one thing to follow from others and that causes the fixation of a belief. The fact that we aren’t aware of such an event in normal reasoning does not, of course, entail that there is no such event. But one wonders what explanatory role positing such an event would serve. Someone might observe that a normal adult human is disposed, when they have a conditional as a background belief and accept its antecedent, to think that the consequent is true because it is supported by what they accept. They might say that having this disposition is one way to believe that the conditional and its antecedent support the consequent. But if this is so and the disposition causes one to accept the consequent, then, it might be said, one’s accepting the consequent is caused by taking the p and the if p then q to support q.³ Even if we agree with this last claim, nothing interesting follows about whether taking q to follow from p is required in order to infer q from p. Compare the adult who infers in a modus ponensy way with a seven year old who makes the same transitions in thought, but lacks the concept of one thing being a reason for another. There seems to be no reason to think that the same mechanisms couldn’t underlie both the adult’s and the seven year old’s abilities to go from p and if p then q to q. Given that the underlying mechanisms are more or the less the same, most of us are inclined to say that the seven year old is reasoning.⁴ But in the seven year old the mechanisms do not realize the belief, that p and if p then q provide reason for thinking q. At least they do not if having this belief requires having the concept of the relation x is reason to think y, for the seven year old lacks the concept. But the seven year old is making the same inference and making it in the same way as the adult. So given that it is reasonable that X believes that a Rs b requires that X is able to conceptualize the relation R, it is not part of making an inference that one believe its premises support its conclusion, much less that such a belief explains or brings about the fact that the reasoner accepts the conclusion. But it is plausible that believing p requires being able to conceptualize the relations involved in p. So we should conclude that even in the case of the adult, taking the conclusion to follow from the premises is no part of the inference.⁵ ³ This story is not open to Boghossian, who resists identifying beliefs with dispositions. ⁴ Perhaps you feel we should deny that the seven-year-old can make any inferences if he lacks the concept. I’ll discuss this response in Section 3. ⁵ The general point here is that a dispositional state of type T may be a “part” of an inferential process and be a state of believing p without the fact, that it has the later property, entering into an account of what it is that the disposition contributes to making the process a process of inference.
Perhaps you are inclined to reject the idea that believing that ... X ... requires having the concept X. Or perhaps you think that the child's being wired in such a way that she makes modus ponensy transitions in thought means that the child does indeed have the concept of following from, even if she doesn't yet have a word for the concept. If you have the inclination or the thought, you might then argue that: (a) in reasoning one is caused to accept a conclusion by a state that links premises and conclusion in the way the child's and the adult's states link their conclusions with their premises; (b) such a linking state is to be identified with "taking" the conclusion to follow from the premises, or with one's "following a rule" that dictates drawing the conclusion from the premises; but (c) if reasoning involves "taking" or rule following, it is agential.⁶

Such a view marks a significant retreat from the idea that reasoning involves an interesting sort of agency. One wonders how the "agency" involved in reasoning is supposed to come to more than one's being wired in such a way that one is disposed to undergo certain transitions in thought. One thinks that such wiring needn't be accompanied by anything like control by "the agent" of the process of reasoning. At least it needn't be accompanied by anything over and above the sort of control that a computer science major gives a computer when she writes and implements a learning algorithm that allows the computer to analyze data and combine the results of the analysis with information it already has.⁷

Some may be inclined to say that the cases of inference that ought to be the focus of philosophical investigation are not those I am calling normal examples of the rain inference, but the cases I've called abnormal, in which all premises are present to the mind and the agent does something like think to herself sotto mentis voce "and so it follows". After all, the philosopher is presumably interested in inference as an instrument of inquiry. But it is this sort of case, in which justification for believing is transparently transmitted from premises to conclusion by the process of inference, in which the epistemic role of inference is most obvious.

If we had reason to think that it was only in such explicit cases that justification could be transmitted from premises to conclusion, then perhaps we could agree that such cases should be given pride of place. But we have no reason to think that. I see a face; I immediately think that's Paul. My perceptual experience—which I would take to be a belief or at least a belief-like state that I see a person who looks so—justifies my belief that I see Paul. It is implausible that in order for justification to be transmitted I must take the one to justify the other. For that matter, the seven year old comes to be justified in believing q on reaching it in a modus ponensy way. Fast, automatic processes are a way—one of the primary ways—that we increase our knowledge of the world. The assumption that inference is interesting because it is an engine of the epistemic gives us no reason at all to think that there is anything of special philosophical interest in cases of inference in which something like taking occurs.

⁶ John Broome (Chapter 3, this volume) endorses something like this view. He suggests that in reasoning that leads to belief, it is necessary that one at least implicitly believe a conditional that "links" premises and conclusion. This is because one can't be following a rule in reasoning if one doesn't have such a belief and, Broome says, rule following is "essential" to reasoning. Broome however qualifies this: he thinks that beliefs are bundles of dispositions, and allows that one might not "include enough of the dispositions that constitute a typical belief." The final position seems to be that in reasoning one must have at least a disposition to move from premises to conclusion, one that can reasonably be identified as rule following.
⁷ Thanks here to the editors for their comments and for their directing me to Broome's essay. In the remainder of this chapter, I presuppose that in cases like that of the seven year old, the (putative) reasoner does not have such concepts as follows from, provides support for, gives reason to think, or justifies, and thus that in such cases the (putative) reasoner does not satisfy any version of the "taking" condition that is stronger than one on which to take q to follow from p is simply to think if p, then q.
3. Homologizing Inference

One might concede most of what I have said but still insist that reasoning is of necessity agential. Return to the contrast between the adult and the seven year old, who both think p, then if p, then q, and then q. The contrast was in the fact that the adult's thinking q was accompanied by the thought that it followed from the rest while the seven year old's was not. It is open to us to say that it is this thought that constitutes the inference, even if the thought q did not occur because of the thought that one thing follows from another. The thought about following, one might say, is the crucial sign that the agent is monitoring the process of moving from premises to conclusion, ready to intervene if something goes awry. She is thus in some sense in charge of the process.

Let us say that a cognitive process is one of belief-fixation provided that (part of) its functional role is to produce new states of acceptance on the basis of already existing states of acceptance. On the view just limned, one infers q from some premises iff one moves via processes of belief-fixation from the premises to q and one takes q (usually after the fact) to follow from those premises. If we endorse such a view, we will have to say that the child did not infer the conclusion from the premises; indeed, we will have to say that the child was not reasoning, since (lacking the concept of one thing supporting another) the child could not reason. The child, on this view, is like many higher non-human animals that are capable of forming new beliefs on the basis of old ones in a reliable way. What the child is unable to do is to understand what it is doing; unable to understand what it is doing, it has little or no control over the processes that fix its beliefs. Not being in any interesting sense master of its epistemic domain, it is thus the sort of thing that is unable to reason.

The proponent of this view might, in a concessive move, grant that ordinary people, cognitive scientists, and ethologists use the term "reasoning" in such a way that what the seven year old does counts under their usage as reasoning. Ditto for what the dog, the fox, the eagle, and the lynx do, when they are working at their cognitive apex. The proponent might agree that it would be useful to have a term for what is common to the processes that underlie belief fixation in both homo,
animal rationale, and the cognitively deprived child and animal. Perhaps we could appropriate the word "reckon" for the task, and say that while both the child and adult reckoned that q on the basis of other beliefs, only the adult inferred q therefrom.

This is not an absurd view. But it seems willful to hold that the child or the dog is incapable of knowing that the fox is chasing a chicken or of having standing knowledge that when a fox is chasing a chicken he will catch it. So one wants to know whether for such creatures reckoning is a means of moving from knowledge to new knowledge. If, as seems reasonable, it is allowed that it is, the view seems to make the question—what is reasoning?—less interesting than might otherwise have been thought. After all, it is the process of reckoning (which is common to the child, the dog, and the adult) that carries each from her beliefs, that the fox is chasing the chicken and that if it is, it will catch it, to her belief that the fox will catch the chicken. But then it is not clear how much—if anything—the adult's reasoning is adding in such cases to the adult's expansion of her knowledge. The child and the dog come to have new knowledge simply on the basis of reckoning to it. Wouldn't the adult have known this if she had just reckoned that the fox's lunch would soon be had?

Inference, on the view we are considering, turns out to be a mark of a quality control process present only in the most highly evolved animals. It is a useful and important process, of course, but one that is at best secondary to the processes, like reckoning, that are the workhorses of the epistemic—the processes that are responsible for most of the beliefs and most of the knowledge that we have of the world. We should be interested in it. But it is not really where the action is, epistemically.

Myself, I wouldn't endorse this view. We ought to agree that adult inferential activity is pretty much continuous with the cognitive activity of toddlers, infants, and higher non-human mammals. Of course toddlers lack the concept of consequence, infants are probably not subpersonally up to modus ponens, and bonobo beliefs may not even be conscious. But in all four cases we (presumably) find mechanisms that take perception and occurrent beliefs as inputs, access standing beliefs, and then regularly and in a reliably predictable way produce new beliefs. To the extent that the output of such mechanisms is (more or less) predictable on the basis of the input, such mechanisms will make the individual's information-processing behavior look like the behavior of someone who is being "guided by a rule." But of course there is a great deal of fast, automatic, non-agential behavior in the animal world that looks like the behavior of someone who is being guided by a rule.

Are all four of us—adult, toddler, infant, bonobo—reasoning? If this is a question about ordinary usage or a philosopher's question about the analysis of "our concept of reasoning," it doesn't strike me as terribly interesting. I'd go with ordinary usage myself, but it's hard to believe that anything of substance hangs on the decision. There are, though, interesting questions to ask about what (we should expect) is and is not continuous across the cases. All have (varying degrees of) the ability to change their inferential patterns on the basis of experience. The infant and the
bonobo presumably lack the ability to review and revise the output of the underlying mechanisms; the toddler is only beginning to develop such an ability. Presumably only the adult has the ability to conceptualize her inferences as such and to ask questions as to whether what she now believes follows from what led her to the belief. The interesting questions have to do with commonalities across cases and with how learning might affect the mechanisms that underlie inference in various kinds of cognitive systems. So, anyway, I reckon.
4. Knowing It When You See It, but Not Knowing What It Is

I started with the question of whether to reason is to do something, and thus to act as an agent. A reason to think so—the reason, as I see it—is that we hold reasoners responsible for both the product and the process of their reasoning; but responsibility is a mark of agency. I've argued that our holding the reasoner responsible for his reasoning doesn't imply that reasoning per se involves a particularly interesting sort of agency. When I move abductively from the light won't go on to I probably pulled the wire out of the fixture changing the bulb I "do" something in the sense in which I am doing something when, after having an egg thrown in my face, I refrain from wiping it away. In the latter case, I am put in a position where I will have egg on my face if I don't do something; my refraining from wiping, since I could wipe, is my doing something which results in leaving egg on my face. If you are embarrassed by my messy face, you are within your rights to hold me responsible for not cleaning myself up. In inference, I am put in a position of having a belief; my refraining from reconsidering, since I could reconsider, is my doing something—refraining from doing certain things—and thereby maintaining a new belief.⁸ You can criticize me for my inaction if the inference was a howler.

To ask whether reasoning is a form of agency is not to ask what it is to reason, and the latter is not a question I have tried to answer. But I should say something about the latter question. It is best, perhaps, to start with concrete cases. The example in the last paragraph, where I infer that I probably pulled the wire out of the fixture, has two moments: there is the transition, largely outside consciousness, from the lack of light to the conclusion; and there is my refraining from rejecting the conclusion when it comes upon me. With what should we identify the inference: the first moment, the first and second, just the second, or with yet something else?

⁸ The situation here is very much like perceptually formed belief. The visual system offers something like an iconic hypothesis about the lay of the visible land; it is up to us to accept its offer. ("It looks like a turkey with a halo, but that can't be right ...") The picture I am suggesting, for both visual system and inference, is one rooted in the idea that evolution starts with the visual systems and System 1 mechanisms of mammals from which we are descended. It then (somehow ...) manages to layer on top the human personal system, which has the ability to override attempts by the other systems to insert representations into a position in one's functional economy from which they can control behavior.
My current inclination is to identify it just with something that occurs during the first moment.⁹ I'm so inclined simply because I think what the dog, the fox, the eagle, and the lynx do, when they are working at their cognitive apex, is often making inferences; but I somehow doubt that they exercise the sort of cognitive control over their information processing that is involved in the second moment of the example above. That an animal or a person does not reject a belief does not imply that it was up to him as to whether to reject it.

The same sort of thing, I think, may well be true of many of the inferences we draw as a result of bias when the conclusions of those inferences are not available to consciousness. Myself, I think we have many beliefs that we are not only unaware of but that we can become aware of only through being helped to see their effects in how we behave; some of these, I think, arise through inference. He who is biased against a racial group, I think, believes that they merit certain sorts of treatment; he may be utterly unaware of his bias. Such a person, noticing that someone is a member of the group, will come to think that the person merits the relevant sort of treatment. It is, in my opinion, over-intellectualized fastidiousness to suggest that the later belief is not the result of inference.

Of course to say this is not to answer the question of what might make the first moment in the example above an inference. What is inference? I am inclined to think that asking that question, at least at the moment, is something of a mistake. We have a fairly good handle on what human behavior counts as paradigmatic inference, as well as a tolerable handle on what behavior is paradigmatically not inference. We have something of a handle on what behavior is problematic in this regard. I should think that the thing to do for the moment is to look closely at what we think we know about the paradigms and the processes that underlie them and see to what extent they have something in common, to what extent they form not a single kind but a family. Then, but only then, we might be in a position to see whether the question has an illuminating answer.
Reference

Boghossian, P. 2014. What is Inference? Philosophical Studies 169, 1–18.
⁹ Susanna Siegel, in her contribution to this volume (Chapter 2), gives a reason somewhat different from the one I am about to give for thinking that the second moment in the example is not essential to inference.
7

Inference, Agency, and Responsibility

Paul Boghossian
1. Introduction

What happens when we reason our way from one proposition to another? This process is usually called "inference" and I shall be interested in its nature.¹

There has been a strong tendency among philosophers to say that this psychological process, while perhaps real, is of no great interest to epistemology. As one prominent philosopher (who shall remain nameless) put it to me (in conversation): "The process you are calling 'inference' belongs to what Reichenbach called the 'context of discovery'; it does not belong to the 'context of justification,' which is all that really matters to epistemology."²

I believe this view to be misguided. I believe there is no avoiding giving a central role to the psychological process of inference in epistemology, if we are to adequately explain the responsibility that we have, qua epistemic agents, for the rational management of our beliefs. I will try to explain how this works in this chapter. In the course of doing so, I will trace some unexpected connections between our topic and the distinction between a priori and a posteriori justification, and I will
Earlier versions of some of the material in this chapter were presented at a Workshop on Inference at the CSMN in Oslo, at the Conference on Reasoning at the University of Konstanz, both in 2014, at Susanna Siegel's Seminar at Harvard in February 2015, and at a Workshop on Inference and Logic at the Institute of Philosophy at the University of London in April 2015. A later version was given as a keynote address at the Midwest Epistemology Workshop in Madison, WI in September 2016. I am very grateful to the members of the audiences at those various occasions, and to Mark Richard, Susanna Siegel, David J. Barnett, Magdalena Balcerak Jackson, and an anonymous referee for OUP for detailed written comments.

¹ Importantly, then, I am not here talking about inference as argument: that is, as a set of propositions, with some designated as "premises" and one designated as the "conclusion." I am talking about inference as reasoning, as the psychological transition from one (for example) belief to another. Nor am I, in the first instance, talking about justified inference. I am interested in the nature of inference, even when it is unjustified.
² For another example of a philosopher who downplays the importance of the psychological process of inference see Audi (1986).
draw some general methodological morals about the role of phenomenology in the philosophy of mind. In addition, I will revisit my earlier attempts to explain the nature of the process of inference (Boghossian 2014, 2016) and further clarify why we need the type of “intellectualist” account of that process that I have been pursuing.
2. Beliefs and Occurrent Judgments

I will begin by looking at beliefs. As we all know, many of our beliefs are standing beliefs in the sense that they are not items in occurrent consciousness, but reside in the background, ready to be manifest when the appropriate occasion arises. Among these standing beliefs, some philosophers distinguish between those that are explicit and those that are implicit. Your explicit beliefs originated in some occurrent judgment of yours: at some point in the past, you occurrently affirmed or judged them (or, more precisely, you occurrently affirmed or judged their subtended propositions). At that point, those judgments went into memory storage.³

In addition to these occurrent-judgment-originating beliefs, some philosophers maintain that there are also implicit standing beliefs, beliefs that did not originate in any occurrent judgment, but which you may still be said to believe. For example, some philosophers think that you already believed that Bach didn't write down his compositions on watermelon rinds, prior to having encountered this proposition here (as we may presume) for the first time. Others dispute this. I won't take a stand on the existence of implicit beliefs for the purposes of this chapter.

Let me focus instead on your explicit standing beliefs. Many of these beliefs of yours (we may also presume) are justified. There is an epistemic basis on which they are maintained, and it is in virtue of the fact that they have that basis that you are justified in having them, and justified in relying on them in coming to have other beliefs. In Pryor's terminology (2005: 182), bases are justifiers.
3. Epistemic Bases for Standing Beliefs

What are the bases or justifiers for these explicit standing beliefs? There are (at least) two distinct questions here. First, what sorts of things are such bases? Second, how do these bases behave over time? More specifically, once a basis for a belief is established, does it retain that basis (putting aside cases where the matter is explicitly reconsidered), or can that basis somehow shift over time, even without reconsideration?

³ Later on, I shall argue that this should be regarded as preservative memory storage, in the sense of Burge (1993) and Huemer (1999).
Let me start with the first question. Many epistemologists have become increasingly sympathetic to the view that justifiers are exclusively propositions, rather than mental states (albeit, states with propositional content). I want to buck this particular trend.

In the case of a perceptual belief that p, I think, like Pryor (2000), that it can be the case that the thing in virtue of which your belief that there is a cat on the mat in this room is justified is that you have a visual state as of seeing that there is a cat on the mat in this room, and that you base your belief on that state. Call this the Statist View (see Pryor 2007).

On the alternative view, the justifier for your belief that there is a cat on the mat in this room is not your visual state as of seeing that there is a cat on the mat in this room, but, rather, just the proposition itself that there is a cat on the mat in this room. Call this the Propositional View (see, for example, Williamson 2000 and Littlejohn 2012).

Now, of course, even on a Propositional View, there must be something about you in virtue of which the proposition that there is a cat on the mat can serve as a justification for your belief that there is a cat on the mat, even while it does not serve as such a justification for me who, not having seen the cat, have no such justification. What is that explanation, given that the proposition that there is a cat on the mat is, in some sense, available to both of us? The answer presumably is that, while the proposition as such is available to both of us, your visual experience makes it available to you as evidence, while it is not in this way available to me. Your visual experience, but not mine, gives you access to the relevant proposition as evidence on which you base your beliefs.

This is no doubt a subtle difference. Still, there seems to be something importantly different between the two views. On the first view, mental states themselves play an epistemic role; they do some justifying. On the alternative view, the mental states don't do any justifying themselves; their role is to give you access to the things—the propositions—that do the justifying.

I wasn't inclined to worry much about this distinction between alternative views of justifiers until I came to think about its interaction with the a priori/a posteriori distinction. I am now inclined to think that the a priori/a posteriori distinction requires rejecting the view that only propositions can serve as justifiers. Let me try to spell this out.

The tension between the Propositional View and the a priori/a posteriori distinction might be thought to emerge fairly immediately from the way in which the distinction between the a priori and the a posteriori is typically drawn. We say that a belief is a priori justified just in case its justification does not rely on perceptual experience. This seems to presuppose that perceptual experiences have justifying roles. But this is, of course, far from decisive. As we have already seen, even on the Propositional View, experiences will need to be invoked in order to explain why some propositions are available to a thinker as evidence and others aren't.
Given that fact, couldn’t the Propositionalist make sense of the a priori/a posteriori distinction by saying that perceptual experiences give one access to a posteriori evidence, while non-perceptual experiences, such as intuition or understanding, give one access to a priori evidence? The problem with this reply on behalf of the Propositionalist is that we know that sometimes perceptual experience will be needed to give one access to a proposition that is then believed on a priori grounds. For example, perceptual experience may be needed to give one access to the proposition “If it’s sunny, then it’s sunny.” But once we have access to that proposition, we may come to justifiably believe it on a priori grounds (for example, via the understanding). In consequence, implementing the response given on behalf of the Propositionalist will require us to distinguish between those uses of perceptual experience that merely give us access to thinking the proposition in question, versus those that give us access to it as evidence. But how are we to draw this distinction without invoking the classic distinction between a merely enabling use of perceptual experience, and an epistemic use of such experience, a distinction that appears to presuppose the Statist View. For what would it be to make a proposition accessible as evidence, if not to experience it in a way that justifies belief in it? To sum up. Taking the a priori/a posteriori distinction seriously requires thinking of mental states as sometimes playing a justificatory role; it appears not to be consistent with the view that it is only propositions that do any justifying.⁴ I don’t now say that the Statist View is comprehensively correct, that all reasons for judgment are always mental states, and never propositions. I only insist that, if we are to take the notion of a priori justification seriously, mental states, and, in particular, experiences, must sometimes be able to play a justificatory role.
4. Epistemic Bases for Standing Beliefs: Then and Now

Going forward, and since it won't matter for present purposes, I will assume that the Statist View is comprehensively correct. However, the reader should bear in mind that this is done merely for ease of exposition and because the issues to be addressed here don't depend on that assumption.

⁴ Once we have successfully defined a distinction between distinct ways of believing a proposition, we can introduce a derivative distinction between types of proposition: if a proposition can be reasonably believed in an a priori way, we can say that it is an a priori proposition; and if it can't be reasonably believed in an a priori way, but can only be reasonably believed in an a posteriori way, then we can say that it is an a posteriori proposition. But this distinction between types of proposition would be dependent upon, and derive from, the prior distinction between distinct ways of reasonably believing a proposition, a distinction which depends on construing epistemic bases as mental states, rather than propositions.
Here is a further question. Assuming that we are talking about an explicit standing belief, does that belief always have as its basis the basis on which it was originally formed; or could the basis have somehow shifted over time, even if the belief in question is never reconsidered?

It is natural to think that the first option is correct, that explicit standing beliefs have whatever bases they had when they were formed as occurrent judgments. At their point of origin, those judgments, having been arrived at on a certain basis, and not having been reconsidered or rejected, go into memory storage, along with their epistemic bases.

Is the natural answer correct? It might seem that it is not obviously correct. For sometimes, perhaps often, you forget on what basis you formed the original occurrent judgment that gave you this standing belief. If you now can't recall what that basis was, is it still the case that your belief has that basis? Does it carry that basis around with it, whether you can recall it or not?

I want to argue that the answer to this question is "yes." I think this answer is intuitive. But once again there is an unexpected interaction with the topic of the a priori. Any friend of the a priori should believe that the natural answer is correct. Let me say a little more about this interaction.
5. Proof and Memory

Think here about the classic problem about how a lengthy proof might be able to deliver a priori knowledge of its conclusion. In the case of some lengthy proofs, it is not possible for creatures like us to carry the whole thing out in our minds. Unable to keep all the steps of the proof in mind, we need to write them down and look them over. In such cases, memory of the earlier steps in the proof, and perceptual experience of what has been written down, enter into the full explanation of how we arrive at our belief in the conclusion of the proof. And these facts raise a puzzle that has long worried theorists of the a priori: How could belief arrived at on the basis of this sort of lengthy proof be a priori warranted? Won't the essential role of memory and perceptual experience in the process of carrying out the proof undermine the conclusion's alleged a priori status?⁵

To acquiesce in a positive answer to this question would be counterintuitive. By intuitive standards, lengthy proof can deliver a priori warrant for its conclusion just as well as a short proof can. The theoretical puzzle is to explain how it can do so, given the psychologically necessary role of perceptual and memory experiences in the process of proof.

The natural way to respond to this puzzle is to appeal to an expanded conception of the distinction between an enabling and an epistemic use of experience.

⁵ See, for example, Chisholm (1989).
Tyler Burge has developed just such a view (I shall look at his application of it to memory). Burge distinguishes between a merely enabling use of memory, which Burge calls "preservative memory," and an epistemic use, which Burge calls "substantive memory":

[Preservative] memory does not supply for the demonstration propositions about memory, the reasoner, or past events. It supplies the propositions that serve as links in the demonstration itself. Or rather, it preserves them, together with their judgmental force, and makes them available for use at later times. Normally, the content of the knowledge of a longer demonstration is no more about memory, the reasoner, or contingent events than that of a shorter demonstration. One does not justify the demonstration by appeals to memory. One justifies it by appeals to the steps and the inferential transitions of the demonstration. . . . In a deduction, reasoning processes' working properly depends on memory's preserving the results of previous reasoning. But memory's preserving such results does not add to the justificational force of the reasoning. It is rather a background condition for the reasoning's success.⁶
Given this distinction, we can say that the reason why a long proof is able to provide a priori justification for its conclusion is that the only use of memory that is essential in a long proof is preservative memory, rather than substantive memory. If substantive memory of the act of writing down a proposition were required to arrive at justified belief in the conclusion of the proof, that might well compromise the conclusion's a priori status. But it is plausible that in most of the relevant cases all that's needed is preservative memory.

Now, against the background of the Propositional View of justifiers, all that would need to be preserved is, as Burge says, just the propositions (along perhaps with their judgmental force). However, against the background of the Statist View, we would need to think that preservative memory can preserve a proposition not only with its judgmental force, but also along with its mental state justifier.⁷

Now, just as an earlier step in a proof can be invoked later on without this requiring the use of substantive memory, so, too, can an a priori justified standing belief be invoked later on without this compromising its ability to deliver a priori justification. To account for this, we must think that when an occurrent judgment goes into memory storage, it goes into preservative memory storage.
⁶ Burge (1993, pp. 462–3).
⁷ A competing hypothesis is that preservative memory need only retain the proposition, along with its judgmental force and epistemic status, but without needing to preserve its mental state justifier. The problem with this competing hypothesis is that we will want to retain the status of the proposition as either a priori or a posteriori justified. However, a subject can be a priori justified in believing a given proposition, without himself having the concept of a priori justification. If we are to ensure that its status as a priori is preserved, without making undue conceptual demands on him, we must require that the mental state justifier that determines it as a priori justified is preserved. I am grateful to the anonymous referee for pressing me on this point.
Thus, it follows that a standing belief will have whatever basis it originally had, whether or not one recalls that basis later on. And so, we arrive at the conclusion we were after: a belief's original basis is the basis on which it is maintained, unless the matter is explicitly reconsidered.

Of course, both of these claims about bases are premised on the importance of preserving a robust use for the a priori/a posteriori distinction. But as I've argued elsewhere (Boghossian forthcoming), we have every reason to accord that distinction the importance it has traditionally had.
6. The Basis for Occurrent Judgments

We come, then, to the question: What are the bases for those original occurrent judgments, the ones that become our standing beliefs by being frozen in preservative memory? On what bases do we tend to arrive at particular occurrent judgments?

Well, here, as we are prone to say, the bases may be either inferential or non-inferential. When a judgment's basis is non-inferential, it will typically consist in a perceptual state, such as a visual or auditory state. It may also consist in some state of introspection. Some philosophers allow that non-inferential bases may also consist in such experiential states as intuitions, and such non-experiential states as the understanding of concepts. For our purposes, here, though, we may leave such controversies aside.

I will be interested, instead, in what it is for an occurrent judgment to have an inferential basis. Given our working assumption that the Statist View is comprehensively correct, the inferential basis for an occurrent judgment will always be some other occurrent judgment.⁸ The question is: How does one judgment get established as the basis for another judgment?⁹

When we ask what the justification for a judgment is, we need to be asking what that justification is for a particular person who makes the judgment. We are asking what your basis is for making the judgment, what your reason is. And so, naturally, there looks to be no evading the fact that, at the end of the day, it will be some sort of psychological fact about you that establishes whether something is a basis for you, whether it is the reason for which you came to make a certain judgment.

⁸ On the Propositional View, the basis would always be a proposition that is the object of some occurrent judgment. As I say, this particular distinction won't matter for present purposes.
⁹ Of course, judgments are not the only sorts of propositional attitude that inference can relate. One can infer from suppositions and from imperatives (for example) and one can infer to those propositional attitudes as well. Let us call this broader class of propositional attitudes that may be related by inference, acceptances (see Wright 2014 and Boghossian 2014). Some philosophers further claim that even such non-attitudinal states as perceptions could equally be the conclusions of inferences (Siegel 2017). In the sense intended (more on this below), this is a surprising claim and it is to be hoped that getting clearer on the nature of inference will eventually help us adjudicate on it. For the moment, I will restrict my attention to judgments.
Thus, it is some sort of psychological fact about you that establishes that this perception of yours serves as your basis for believing this observable fact. And it is some sort of psychological fact about you that establishes that it is these judgments that serve as your basis for making this new judgment, via an inference from those other judgments.

I am interested in the nature of this process of arriving at an occurrent judgment that q by inference from the judgment that p, a process which establishes p as your reason for judging q.
7. Types of Inference

What is an example of the sort of process I am talking about? One example, that I will call, for reasons that I will explain later, reasoning 2.0, would go like this:

(1) I consider explicitly some proposition that I believe, for example p.

And I ask myself explicitly:

(2) (Meta) What follows from p?

And then it strikes me that q follows from p. Hence,

(3) (Taking) I take it that q follows from p.

At this point I ask myself:

(4) Is q plausible? Is it less plausible than the negation of p?
(5) I conclude that q is not less plausible than not-p.
(6) So, I judge q.

I add q to my stock of beliefs.

Reasoning 2.0 is admittedly not the most common type of reasoning. But it is probably not as rare as it is fashionable to claim nowadays. In philosophy, as in other disciplines, there is a tendency to overlearn a good lesson. Wittgenstein liked to emphasize that many philosophical theories overly intellectualize cognitive phenomena. Perhaps so. But we should not forget that there are many phenomena that call for precisely such intellectualized descriptions.

Reasoning 2.0 happens in a wide variety of contexts. Some of these, to be sure, are rarified intellectual contexts, as, for example, when you are working out a proof, or formulating an argument in a paper. But it also happens in a host of other cases that are much more mundane. Prior to the 2016 presidential election, the conventional wisdom was that Donald Trump was extremely unlikely to win it. Prior to that election, many people are likely to have
taken a critical stance on this conventional wisdom, asking themselves: "Is there really evidence that shows what conventional wisdom believes?" Anyone taking such a stance would be engaging in reasoning 2.0.

Having said that, it does seem true that there are many cases where our reasoning does not proceed via an explicit meta-question about what follows from other things we believe. Most cases of inference are seemingly much more automatic and unreflective than that. Here is one: On waking up one morning you recall that

(Rain Inference)
(1) It rained heavily through the night.

You conclude that

(2) The streets are filled with puddles (and so you should wear your boots rather than your sandals).

Here, the premise and conclusion are both things of which you are aware. But, it would seem, there is no explicit meta-question that prompts the conclusion. Rather, the conclusion comes seemingly immediately and automatically. I will call this an example of reasoning 1.5.

The allusion here, of course, is to the increasingly influential distinction between two kinds of reasoning, dubbed "System 1" and "System 2" by Daniel Kahneman. As Kahneman (2011, pp. 20–1) characterizes them, System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration. As examples of System 1 thinking, Kahneman gives detecting that one object is more distant than another, orienting to the source of a sudden sound, and responding to a thought experiment with an intuitive verdict. Examples of System 2 thinking are searching memory to identify a surprising sound, monitoring your behavior in a social setting, and checking the validity of a complex logical argument.

Kahneman intends this not just as a distinction between distinct kinds of reasoning, but between distinct kinds of thinking more broadly. Applied to the case of reasoning, it seems to me to entail that a lot of reasoning falls somewhere in-between these two extremes. The (Rain) inference, for example, is not effortful or attention-hogging. On the other hand, it seems wrong to say that it is not under my voluntary control, or that there is no sense of agency associated with it. It still seems to be something that I do. That is why I have labeled it "System 1.5 reasoning."

The main difference between reasoning 2.0 and reasoning 1.5 is not agency per se, but rather the fact that in reasoning 2.0, but not in 1.5, there is an explicit (Meta)
question, an explicit state of taking the conclusion to follow from the premises, and, finally, an explicit drawing of the conclusion as a result of that taking. All three of these important elements seem to be present in reasoning 2.0, but missing from reasoning 1.5.
8. Inference versus Association

Now, one of the main claims that I have made in previous work on this topic is that we need to acknowledge a state of taking even in the 1.5 cases, even in those cases where, since the reasoning seems so immediate and unreflective, a taking state appears to be absent. This way of thinking about inferring, as I've previously noted, echoes a remark of Frege's (1979, p. 3):

To make a judgment because we are cognisant of other truths as providing a justification for it is known as inferring.
My own preferred version of Frege's view, for reasons that I have explained elsewhere and won't rehearse here, I would put like this:

(Inferring) S's inferring from p to q is for S to judge q because S takes (the accepted truth of) p to provide (contextual) support for (the acceptance of) q.

Let us call this insistence that an account of inference must in this way incorporate a notion of "taking" the Taking Condition on inference:

(Taking Condition): Inferring from p to q necessarily involves the thinker taking p to support q and drawing q because of that fact.

As so formulated, this is not so much a view, as a schema for a view. It signals the need for something to play the role of "taking," but without saying exactly what it is that plays that role, nor how it plays it. In other work, I have tried to say more about this taking state and how it might play this role (see Boghossian 2014, 2016).

In an important sense, however, it was probably premature to attempt to do that. Before trying to explain the nature of the type of state in question, we need to satisfy ourselves that a correct account of inference has to assume the shape that the (Taking) condition describes. Why should we impose a phenomenologically counterintuitive and theoretically treacherous-looking condition such as (Taking) on any adequate account of inference?

In this chapter, then, instead of digging into the nitty-gritty details of the taking state, I want to explain in fairly general terms why an account of inference should assume this specific shape, what is at stake in the debate about (Taking), and why it should be resolved one way rather than another.
9. What is at Stake in this Debate?

You might think it's quite easy to say what's at stake in this debate. After all, claims that involve the notion of inference are common and central to philosophy. Many philosophers, for example, think that it's an important question about a belief whether its basis is inferential or not. If it's inferential, then its justification is further dependent on that of the belief on which it rests; if it's not, the chain of justification might have reached its end. To give another example, Siegel (2017) claims that the conclusion of an inference need not be an acceptance but might itself be a perceptual state. This is linked to the question whether one may be faulted for having certain perceptions, even if, in some sense, one can't help but have the perceptions one has. Both of these claims depend on our being able to say what inference is and how to recognize it when it occurs.

There is reason, then, to think that the question of the nature of inference is important. The question, though, is to say how to distinguish this substantive dispute about the nature of inference, from a verbal dispute about how the word "inference" is or ought to be applied. Here, as elsewhere in philosophy, the best way to get at a substantive dispute about the nature of a concept or phenomenon is to specify what work one needs that concept or phenomenon to do. What is expected of it?

An initial thought about what work we want the concept of reasoning to do is that we need it to help us distinguish reasoning from the mere association of thoughts with one another. In giving an account of inferring a q from a p, we need to distinguish that from a case in which p merely gives rise to the judgment that q in some associative way.

That's fine as far as it goes, but it now invites the question: Why does it matter for us to capture the distinction between association and inference? The start of an answer is that inferring from the judgment that p to the judgment that q establishes p as the epistemic basis for judging q, whereas associating q with p does not. The fuller answer is that we want to give an account of this idea of a thinker establishing p as a basis for believing q that is subject to a certain constraint: namely, that it be intelligible how thinkers can be held responsible for the quality of their reasoning. I can be held responsible for the way I reason, but not for what associations occur to me. I can be held responsible for what I establish as a good reason for believing something, but not for what thoughts are prompted in me by other thoughts.

These, then, are some of the substantive issues that I take to be at stake in the debate I wish to engage in, and they are issues of a normative nature. In genuine reasoning, you establish one judgment as your basis for making another; and you can be held responsible for whether you did that well or not.
All of this, of course, is part and parcel of the larger debate in epistemology about the extent to which foundational notions in epistemology are normative and deontological in nature. The hope is that, by playing out this debate in the special case of reasoning, we will shed light both on the nature of reasoning and on the larger debate. With these points in mind, let’s turn to our question about the nature of inference and to whether it should in general be thought of as involving a “taking” state.
10. Inference Requires Taking

One point on which everyone is agreed is that for you to infer from p to q your acceptance of p must cause your acceptance of q. Another point on which everyone is agreed is that while such causation may be necessary, it is not sufficient, since it would fail to distinguish inference from mere association. The question is: What else should we add to the transition from p to q for it to count as inferring q from p?

Hilary Kornblith has claimed that, so long as we insist that the transitions between p and q "[involve] the interaction among representational states on the basis of their content," then we will have moved from mere causal transitions to full-blooded cases of reasoning. Thus, he says:

But, as I see it, there is now every reason to regard these informational interactions as cases of reasoning: they are, after all, transitions involving the interaction among representational states on the basis of their content. (2012, p. 55)
But this can’t be right. Mere associations could involve the interaction of representational states on the basis of their content. The Woody Allenesque depressive who, on thinking “I am having so much fun” always then thinks “But there is so much suffering in the world,” is having an association of judgments on the basis of their content. But he is not thereby inferring from the one proposition to the other. (For more discussion of Kornblith, see Boghossian 2016.) If mere sensitivity to content isn’t enough to distinguish reasoning from association, then perhaps what’s missing is support: the depressive’s thinking doesn’t count as reasoning because his first judgment—that he is having so much fun—doesn’t support his second—that there is so much suffering in the world. Indeed, the first judgment might be thought to slightly undermine the second, since at least the depressive is having fun. By contrast, in the (Rain) inference the premise does support the conclusion. But, of course, that can’t be a good proposal either. Sometimes I reason from a p to a q where p does not support q. That makes the reasoning bad, but it is reasoning nonetheless. Indeed, it is precisely because it is reasoning that we can say it’s bad. The very same transition would be just fine, or at any rate, acceptable, if it were a mere association.
What, then, should we say? At this point in the dialectic, something like a taking-based account seems not only natural, but forced: the depressive's thinking doesn't count as reasoning not because his first judgment doesn't support his second, but, rather, it would seem, because he doesn't take his first judgment to support his second. The first judgment simply causes the second one in him; he doesn't draw the second one because he takes it to be supported by the first. On the other hand, in the (Rain) inference, it doesn't prima facie strain credulity to say that I take its having rained to support the claim that the streets would be wet and that that is why I came to believe that the streets will be wet.

At least prima facie, then, there looks to be a good case for the Taking Condition: it seems to distinguish correctly between mere association and inferring. And there doesn't seem to be any other obvious way to capture that crucial distinction.
11. Further Support for the Taking Condition: Responsibility and Control

There is, however, a lot more to be said in favor of the Taking Condition, in addition to these initial considerations. I will review some of the more central considerations in this section.

To begin with, let's note that even reasoning 1.5, even reasoning that happens effortlessly and seemingly automatically, could intuitively be said to be something that you do—a mental action of yours—rather than simply something that happens to you. And this fact is crucially connected to the fact that we can (a) not only assess whether you reasoned well, but (b) hold you responsible for whether you reasoned well, and allow that assessment to enter into an assessment of your rationality. For on the picture on offer, you take your premises to support your conclusion and actively draw your conclusion as a result.

I don't now assert that the Taking Condition is the only way to make your reasoning count as agential. I do assert that it is one clear way of doing so and that no other ways appear obviously available. As a result, the picture on offer satisfies one of the principal desiderata that I outlined: that your reasoning be a process for which you could intelligibly be held rationally responsible.

Second, talk of taking fits in well with the way in which, with person-level inference, it is always appropriate to precede the drawing of the conclusion with a "so" or a "therefore." What are those words supposed to signify if not that the agent is taking it that her conclusion is justified by her premises?¹⁰
¹⁰ Pettit made this observation in his (2007, p. 500).
Third, taking appears to account well for how inference could be subject to a Moore-style paradox. That inference is subject to such a style of paradox has been well described by Ulf Hlöbil (although he disputes that my explanation of it is adequate). As Hlöbil (2014, pp. 420–1) puts it:

(IMP) It is either impossible or seriously irrational to infer P from Q and to judge, at the same time, that the inference from Q to P is not a good inference. . . . [It] would be very odd for someone to assert (without a change in context) an instance of the following schema, (IMA) Q; therefore, P. But the inference from Q to P is not a good inference (in my context). . . . it seems puzzling that if someone asserts an instance of (IMA), this seems self-defeating. The speaker seems to contradict herself . . . Such a person is irrational in the sense that her state of mind seems self-defeating or incoherent. However, we typically don't think of inferrings as contentful acts or attitudes . . . Thus, the question arises how an inferring can generate the kind of irrationality exhibited by someone who asserts an instance of (IMA). Or, to put it differently: How can a doing that seems to have no content be in rational tension with a judgment or a belief?
Hlöbil is right that there is a prima facie mystery here: how could a doing be in rational tension with a judgment? The Taking Condition, however, seems to supply a good answer: there can be a tension between the doing and the judgment because the doing is the result of taking the premises to provide good support for the conclusion, a taking that the judgment then denies.

Fourth, taking offers a neat explanation of how there could be two kinds of inference—deductive and inductive.¹¹ Of course, in some inferences the premises logically entail the conclusion and in others they merely make the conclusion more probable than it might otherwise be. That means that there are two sets of standards that we can apply to any given inference. But that only gives us two standards that we can apply to an inference, not two different kinds of inference. Intuitively, though, it's not only that there are two standards that we can apply to any inference, but two different types of inference. And, intuitively once more, that distinction involves taking: it is a distinction between an inference in which the thinker takes his premises to deductively warrant his conclusion versus one in which he takes them merely to inductively warrant it.

Finally, some inferences seem not only obviously unjustified, and so not ones that rational people would perform; more strongly, they seem impossible. Even if you were
¹¹ The following two points were already mentioned in Boghossian (2014); I mention them here again for the sake of completeness.
willing to run the risk of irrationality, they don't seem like inferences that one could perform. Consider someone who claims to infer Fermat's Last Theorem (FLT) directly from the Peano axioms, without the benefit of any intervening deductions, or knowledge of Andrew Wiles's proof of that theorem. No doubt such a person would be unjustified in performing such an inference, if he could somehow get himself to perform it. But more than that, we feel that no such transition could be an inference to begin with, at least for creatures like ourselves. What could explain this?

The Taking Condition provides an answer. For the transition from the Peano axioms to FLT to be a real inference, the thinker would have to be taking it that the Peano axioms support FLT's being true. And no ordinary person could so take it, at least not in a way that's unmediated by the proof of FLT from the Peano axioms. (The qualification is there to make room for extraordinary people, like Ramanujan, for whom many more number-theoretic propositions were obvious than they are for the rest of us.)¹²

We see, then, that there are a large number of considerations, both intuitive and theoretical, for imposing a Taking Condition on inference.
12. Helmholtz and Sub-personal Inference

But what about the fact that the word "inference" is used, for example in psychology and cognitive science, to stand for processes that have nothing to do with taking? Helmholtz is said to have started the trend by talking about unconscious and sub-personal inferences that are employed by our visual system (see his 1867); but, by now, the trend is a ubiquitous one. Who are we, armchair philosophers, to say that inference must involve taking, if the science of the mind (psychology) happily assumes otherwise?

Should we regard Helmholtz's use of "inference" as a source of potential counterexamples to our taking-based accounts of inference, or is Helmholtz simply using a different (though possibly related) concept of "inference," one that indifferently covers both non-inferential sub-personal transitions that may be found in simple creatures, as well as our examples of 2.0 and 1.5 cases of reasoning? How should we decide this question? How should we settle whether this is a merely terminological dispute or a substantive one?
¹² I’m inclined to think that this sort of example actually shows something stronger than that taking must be involved in inference. I’m inclined to think that it shows that the taking must be backed by an intuition or insight that the premises support the conclusion. For the fact that an ordinary person can’t take it that FLT follows from the Peano axioms directly isn’t a brute fact. It is presumably to be explained by the fact that an ordinary person can’t simply “see” that the FLT follows from the Peano axioms. I hope to develop this point further elsewhere.
We need to ask the following: What would we miss if we only had Helmholtz's use of the word "inference" to work with? What important seam in epistemology would be obscured if we only had his word?

I have already tried to indicate what substantive issue is at stake. I don't mind if you use the same word "inference" to cover both Helmholtz's sub-personal transitions and adult human reasoning 2.0 and 1.5. The crucial point is not to let that linguistic decision obscure the normative landscape: unlike the latter, the sub-personal transitions of a person's visual system are not ones that we can hold the person responsible for, and they are not ones whose goodness or badness enters into our assessments of her rationality. I can't be held responsible for the hard-wired transitions that make my visual system liable to the Müller-Lyer illusion. Of course, once I find out that it is liable to such an illusion, I am responsible for not falling for it, so to speak, but that's a different matter.

Obviously, here we are in quite treacherous territory, the analogue to the question of free will and responsibility within the cognitive domain. Ill-understood as this issue is in general, it is even less well understood in the cognitive domain, in part because we are far from having a satisfactory conception of mental action. But unless you are a skeptic about responsibility, you will think that there are some conditions that distinguish between mere mechanical transitions, and those cognitive transitions towards which you may adopt a participant reactive attitude, to use Strawson's famous expression (see also Smithies 2016). And what we know from reflection on cases, such as those of the habitual depressive, is that mere associative transitions between mental states—no matter how conscious and content-sensitive they may be—are not necessarily processes for which one can be held responsible. It is only if there is a substantial sense in which the transitions are ones that a thinker performed that she can be held responsible for them. That is the fundamental reason why Helmholtz-style transitions cannot, in and of themselves, amount to reasoning in the intended sense.

We can get at the same point from a somewhat different angle. Any transition whatsoever could be hard-wired in, in Helmholtz's sense. One could even imagine a creature in which the transition from the Peano axioms to FLT is hard-wired in as a basic transition. What explains the perceived discrepancy between a mere transition that is hard-wired in versus one that is the result of inference?

The answer I'm offering is that merely wired-in transitions can be anything you like because there is no requirement that the thinker approve the transition, and perform that transition as a result of that approval; they can just be programmed in. By contrast, I'm claiming, inferential transitions must be driven by an impression on the thinker's part that his premises support his conclusion.

To sum up the argument so far: a person's inferential behavior, in the intended sense, is part and parcel of his constitution as a rational agent. A person can be held
responsible for inferring well or poorly, and such assessments can enter into an overall evaluation of his virtues as a rational agent. Making sense of this seems to require imposing the Taking Condition on inference. Helmholtz-style sub-personal “inferences” can be called that, if one wishes, but they lie on the other side of the bright line that separates cognitive transitions for which one can be held responsible from those for which one cannot.
13. Non-reflective Reasoning in Humans

Helmholtz-style sub-personal transitions, however, are not the only potential source of counterexamples to taking-based accounts. For what about cases of reasoning 1.5, such as the (Rain) inference? Those, we have allowed, are cases of very common, person-level reasoning for which we can be held responsible. And yet, we have conceded that, at least phenomenologically, a taking state doesn't seem to be involved in them. Why, then, do they not refute taking-based accounts?

There are at least two important points that need to be made in response. The first is that the power of an example like that of (Rain) to persuade us that taking states are not in general involved in garden-variety cases of inference stems purely from the phenomenology of such cases: they simply don't seem to be involved in such effortless and fleeting cases of reasoning. When we reflect on such cases, we find it phenomenologically plausible that there was a succession of judgments, but not that there was a mediating taking state.

The trouble, though, is that resting so much weight on phenomenology, in arriving at the correct description of our mental lives, is a demonstrably flawed methodology. There are many conscious states and events that we have reason to believe in, but which have no distinctive qualitative phenomenology, and whose existence could not be settled by phenomenological considerations alone. This point, it seems to me, is of great importance; so I shall pause on it.

Consider the controversy about intuitions understood as sui generis states of intellectual seeming. Many philosophers find it natural to say that, when they are presented with a thought experiment—for example, Gettier's famous thought experiment about knowledge—they end up having an intuition to the effect that Mr. Smith has a justified true belief but does not know. Intuition skeptics question whether these philosophers really do experience such states of intuition, as opposed to merely experiencing some sort of temptation or disposition to judge that Mr. Smith has a justified true belief but does not know. Timothy Williamson, for example, writes:

Although mathematical intuition can have a rich phenomenology, even a quasi-perceptual one, for instance in geometry, the intellectual appearance of the Gettier proposition is not like that. Any accompanying imagery is irrelevant. For myself, I am aware of no intellectual
seeming beyond my conscious inclination to believe the Gettier proposition. Similarly, I am aware of no intellectual seeming beyond my conscious inclination to believe Naïve Comprehension, which I resist because I know better. (2007, p. 217)¹³
There is no denying that a vivid qualitative phenomenology is not ordinarily associated with an intellectual seeming. However, it is hard to see how to use this observation, as the intuition skeptic proposes to do, to cast doubt on the existence of intuitions, while blithely accepting the existence of occurrent judgments. After all, ordinary occurrent judgments (or their associated dispositions) have just as little distinctive qualitative phenomenology as intuitions do. For example, right now, as I visually survey the scene in front of me, I am in the process of accepting a large number of propositions, wordlessly and without any other distinctive phenomenology. These acceptances are unquestionably real. But they have no distinctive phenomenology.¹⁴

To put the point in a slogan: phenomenology is often useless as a guide to the contents of our conscious mental lives. Many items of consciousness have phenomenal characteristics; however, many others don't.¹⁵,¹⁶ That is why I am not deterred from proposing, by mere phenomenological considerations, that taking states are involved even in cases, like that of the (Rain) inference, where they may not phenomenologically appear to be present.

The second point that limits the anti-taking effectiveness of (Rain)-style examples is that much of the way in which our mental activities are guided is tacit, not explicit. When you first learn how to operate your iPhone, you learn a bunch of rules and, for a while, you follow them explicitly. After a while, though, operating the phone, at least for basic "startup" functions, may become automatic, unlabored, and unreflective: you place your finger on the home button, let it rest there for a second to activate the fingerprint recognition device, press through and carry on as desired, all the while with your mind on other things. That doesn't mean that your previous grasp of the relevant rules isn't playing a role in guiding your behavior. It's just that the guidance has gone tacit. In cases where, for whatever reason, the automatic behavior doesn't achieve the desired results, you will find yourself trying to retrieve the rule that is guiding you and to formulate its requirements explicitly once again.
¹³ See also Cappelen (2012) and Deutsch (2015).
¹⁴ It won't do to argue that these acceptances are "unconscious" as they are so easily retrieved.
¹⁵ I hesitate to characterize this distinction in terms of Ned Block's famous distinction between "phenomenal" and "access" consciousness, since this may carry connotations that I wouldn't want to endorse. But it is obviously in the neighborhood of that distinction. See Block (1995).
¹⁶ An anonymous referee suggested that an alternative hypothesis is that not all aspects of phenomenal life are open to naïve introspection, that theory, or other tools, are needed to guide one's introspective search for those phenomenal characteristics that are present. As I explain below (and also in Boghossian (forthcoming)), these two hypotheses are not necessarily in competition with one another.
To put these two points together, I believe that our mental lives are guided to a large extent by phenomenologically inert conscious states that do their guiding tacitly.

One of the interesting lessons for the philosophy of mind that is implicit in all this is that you can't just tell by the introspection of qualitative phenomenology what the basic elements of your conscious mental life are, especially when those are intentional or cognitive elements. You need a theory to guide you. Going by introspection of phenomenology alone, you may never have seen the need to recognize states of intuition or intellectual seeming; you may never have seen the need to recognize fleeting occurrent judgments, made while surveying a scene; and you may never have seen the need to postulate states of taking.

I think that part of the problem here, as I've already noted, stems from overlearning the good lesson that Wittgenstein taught us, that in philosophy there has been a tendency to give overly intellectualized descriptions of cognitive phenomena. My own view is that conscious life is shot through with states and events that play important, traditionally rationalistic roles, which have no vivid qualitative phenomenology, but which can be recognized through their indispensable role in providing adequate accounts of central cognitive phenomena. In the particular case of inference, the fact that we need a subject's inferential behavior to be something for which he can be held rationally responsible is a consideration in favor of the Taking Condition that no purely phenomenological consideration can override.
14. Richard’s Objections But is it really true that the Taking Condition is required for you to be responsible for your inferences? Mark Richard has argued at length that it is not (Chapter 6, this volume). Richard agrees that our reasoning is something we can be held responsible for; however, he disputes that responsibility requires agency: One can be responsible for things that one does not directly do. The Under Assistant VicePresident for Quality Control is responsible for what the people on the assembly line do, but of course she is not down on the floor assembling the widgets. Why shouldn’t my relation to much of my reasoning be somewhat like the VP’s relation to widget assembly? Suppose I move abductively from the light won’t go on to I probably pulled the wire out of the fixture changing the bulb. Some process of which I am not aware occurs. It involves mechanisms that typically lead to my being conscious of accepting a claim. I do not observe them; they are quick, more or less automatic, and not demanding of attention. Once the mechanisms do their thing, the conclusion is, as they say, sitting in the belief box. But given a putative implication, I am not forced to mutely endorse it. If I’m aware that I think q and that it was thinking p that led to this, I can, if it seems worth the effort, try to consciously check to see if the implication in fact holds. And once I do that, I can on the basis of my review continue to accept the implacatum, reject the premise, or even suspend judgment on the whole shebang.
In this sense, it is up to me as to whether I preserve the belief. It thus makes sense to hold me responsible for the result of the process. I say that something like this story characterizes a great deal of adult human inference. Indeed, it is tempting to say that all inference—at least adult inference in which we are conscious of making an inference—is like this. (Chapter 6, this volume, p. x)
Richard is trying to show how you can be responsible for your reasoning, even if you are not aware of it and did not perform it. But all he shows, at best, is that you can be held responsible for the output of some reasoning, rather than for the reasoning itself, if the output of that reasoning is a belief. But it is a platitude that one can be held responsible for one’s beliefs. The point has nothing to do with reasoning and does not show that we can be held responsible for our reasoning, which is the process by which we sometimes arrive at beliefs, and not the beliefs at which we arrive. You could be held responsible for any of your beliefs that you find sitting in your “belief box,” even if it weren’t the product of any reasoning, but merely the product of association. Once you become aware that you have that belief, you are responsible for making sure that you keep it if and only if you have a good reason for keeping it. If it just popped into your head, it isn’t yet clear that it has an epistemic basis, let alone a good one. To figure out whether you have a good basis for maintaining it, you would have to engage in some reasoning. So, we would be right back where we started. Richard (Chapter 6, this volume, p. 93) comes around to considering this objection. He says: I’ve been arguing that the fact that we hold the reasoner responsible for the product of her inference—we criticize her for a belief that is unwarranted, for example—doesn’t imply that in making the inference the reasoner exercises a (particularly interesting) form of agency. Now, it might be said that we hold she who reasons responsible not just for the product of her inference, but for the process itself. When a student writes a paper that argues invalidly to a true conclusion, the student gets no credit for having blundered onto the truth; he loses credit for having blundered onto the truth. But, it might be said, it makes no sense to hold someone responsible for a process if they aren’t the, or at least an, agent of the process. Let us grant for the moment that when there is inference, both its product and the process itself are subject to normative evaluation. What exactly does this show? We hold adults responsible for such things as implicit bias. To hold someone responsible for implicit bias is not just to hold them responsible for whatever beliefs they end up with as a result of the underlying bias. It is to hold the adult responsible for the mechanisms that generate those beliefs, in the sense that we think that if those mechanisms deliver faulty beliefs, then the adult ought to try to alter those mechanisms if he can. (And if he cannot, he ought to be vigilant for those mechanisms’ effects.)
Implicit bias is, of course, a huge and complicated topic, but, even so, the appeal to it seems to me to be misapplied here.
We are certainly responsible for faulty mechanisms if we know that they exist and are faulty. Since we are all now rightly convinced that we suffer from all sorts of implicit biases, without knowing exactly how they operate, we all have a responsibility to uncover those mechanisms within us that are delivering faulty beliefs about other people and to modify them; or, at the very least, if that is not possible, to neutralize their effects. But what Richard needs is not an obvious point like that. What he needs to argue is that a person can be responsible for mechanisms that deliver faulty judgments, even if he doesn't know anything about them: doesn't know that they exist, doesn't know that their deliverances are faulty, and doesn't know how they operate. Consider a person who is, relative to the rest of the community of which he is a part, color-blind, but who is utterly unaware of this attribute of his. Such a person would have systematically erroneous views about the sameness or difference of certain colored objects in his environment. His color judgments would be faulty. But do we really want to hold him responsible for those faulty judgments and allow them to enter into assessments of his rationality? I don't believe that would be right. Once he discovers that he is color-blind relative to his peers, then he will have some responsibility to qualify his judgments about colors and look out for those circumstances in which his judgments may be faulty. But until such time as he is brought to awareness, we can find his judgments faulty without impugning his rationality. The confounding element in the implicit bias case is that the faulty judgments are often morally reprehensible and so suggest that perhaps a certain openness to morally reprehensible thoughts lies at the root of one's susceptibility to implicit bias. And that would bring a sense of moral responsibility for that bias with it. But if so, that is not a feature that Richard can count on employing for the case of inference in general.
There is little doubt that there are cases somewhat like the ones that Richard describes. There are cases where beliefs simply come to you. And sometimes they come with a great deal of conviction, so you are tempted to say: I just know. I don't know exactly how I know, but I just know. What is not obvious is (a) that they are cases of knowledge or (b) that, when they are, they are cases of (unconscious) inference from premises that you're not aware of. Of course, a Reliabilist would have no difficulty making sense of the claim that there could be such cases of knowledge, but I am no Reliabilist. But even setting that point to one side, the most natural description of the case where p suddenly strikes you as true is that you suddenly have the intuition that p is true. No doubt there is a causal explanation for why you suddenly have that intuition. And no doubt that causal explanation has something to do with your prior experiences with p. But all of that is a far cry from saying that you inferred to p from premises that you are not aware of. Richard's most compelling argument against requiring taking for inference stems, interestingly enough, from the case of perception:

If we had reason to think that it was only in such explicit cases [he means reasoning 2.0] that justification could be transmitted from premises to conclusion, then perhaps we could agree that such cases should be given pride of place. But we have no reason to think that. I see a face; I immediately think that's Paul. My perceptual experience—which I would take to be a belief or at least a belief-like state that I see a person who looks so—justifies my belief that I see Paul. It is implausible that in order for justification to be transmitted I must take the one to justify the other. (Chapter 6, this volume, p. 96)
I agree that no taking state is involved in perceptual justification. I wouldn’t say, with Richard, that perceptual experience is a belief: I don’t see how the belief That’s Paul could justify the belief That’s Paul. But I would say that the perceptual experience is a visual seeming with the content That’s Paul and that this can justify the belief That’s Paul without an intervening taking state. But there is a reason for this that is rooted in the nature of perception (one could say something similar about intuition). The visual seeming That’s Paul presents the world (as John Bengson (2015) has rightly emphasized) as this being Paul in front of one. When you then believe That’s Paul on that basis, there is no need to take its seeming to be p to support believing its being p. You are already there (that’s why Richard finds it so natural to say that perceptual experience is itself a belief, although that is going too far). All that can happen is that a doubt about the deliverances of one’s visual system can intervene, to block you from endorsing the belief that is right there on the tip of your mind, so to speak.
But that is the abnormal case, not the default one. Not to believe what perception presents you with, unless you have reason to not believe it, would be the mistake. But the inference from p to q is not like that. A belief that p, which is the input into the inferential process, is not a seeming that q. And while the transition to believing that q may be familiar and well supported, it is not simply like acquiescing in something that is already the proto-belief that q.
15. Conclusion

Well, there are many things that I have not done in this essay. I have not developed a positive account of the taking state. I have not discussed possible regress worries. But one can't do everything. What I've tried to do is explain why the shape of an account of reasoning that gives a central role to the Taking Condition has a lot to be said for it, especially if we are to retain the traditional connections between reasoning, responsibility for reasoning, and assessments of a person's rationality.
References

Audi, Robert. (1986). Belief, Reason, and Inference. Philosophical Topics, 14 (1), 27–65.
Bengson, John. (2015). The Intellectual Given. Mind, 124 (495), 707–60.
Block, Ned. (1995). On a Confusion about a Function of Consciousness. Behavioral and Brain Sciences, 18 (2), 227–87.
Boghossian, Paul. (2014). What is Inference? Philosophical Studies, 169 (1), 1–18.
Boghossian, Paul. (2016). Reasoning and Reflection: A Reply to Kornblith. Analysis, 76 (1), 41–54.
Boghossian, Paul. (forthcoming). Do We Have Reason to Doubt the Importance of the Distinction Between A Priori and A Posteriori Knowledge? A Reply to Williamson. In P. Boghossian and T. Williamson, Debating the A Priori. Oxford University Press.
Burge, Tyler. (1993). Content Preservation. The Philosophical Review, 102 (4), 457–88.
Cappelen, Herman. (2012). Philosophy Without Intuitions. Oxford University Press.
Chisholm, Roderick M. (1989). Theory of Knowledge, 3rd edition. Prentice-Hall.
Deutsch, Max. (2015). The Myth of the Intuitive. MIT Press.
Frege, Gottlob. (1979). Logic. In Posthumous Writings. Blackwell.
Helmholtz, Heinrich von. (1867). Treatise on Physiological Optics, Vol. III. Dover Publications.
Hlöbil, Ulf. (2014). Against Boghossian, Wright and Broome on Inference. Philosophical Studies, 167 (2), 419–29.
Huemer, Michael. (1999). The Problem of Memory Knowledge. Pacific Philosophical Quarterly, 80, 346–57.
Kahneman, Daniel. (2011). Thinking, Fast and Slow. Macmillan.
Kornblith, Hilary. (2012). On Reflection. Oxford University Press.
Littlejohn, Clayton. (2012). Justification and the Truth-Connection. Cambridge University Press.
Pettit, Philip. (2007). Rationality, Reasoning and Group Agency. Dialectica, 61 (4), 495–519.
Pryor, James. (2000). The Skeptic and the Dogmatist. Noûs, 34 (4), 517–49.
Pryor, James. (2005). There is Immediate Justification. In M. Steup and E. Sosa (eds.), Contemporary Debates in Epistemology. Blackwell.
Pryor, James. (2007). Reasons and That-clauses. Philosophical Issues, 17 (1), 217–44.
Siegel, Susanna. (2017). The Rationality of Perception. Oxford University Press.
Smithies, Declan. (2016). Reflection On: On Reflection. Analysis, 76 (1), 55–69.
Williamson, Timothy. (2000). Knowledge and Its Limits. Oxford University Press.
Williamson, Timothy. (2007). The Philosophy of Philosophy. John Wiley & Sons.
Wright, Crispin. (2014). Comment on Paul Boghossian, "What is Inference." Philosophical Studies, 169 (1), 27–37.
PART II
The Value of Reasoning
Rules for Reasoning
8
Isolating Correct Reasoning
Alex Worsnip
Let me start with a threefold distinction between some different normative notions as they apply to attitudinal mental states like beliefs and intentions. The first is that of the reasons or justification that one has for an individual attitude. Having a reason for an attitude, I assume, is a matter of there being considerations of some kind that speak in favor of that attitude. Perhaps these reasons have to pass some kind of epistemic filter or in some sense be "available" to the agent to be the kinds of reasons that justify in the operative sense. But at least an important subset of reasons justify attitudes "prospectively," and then when those attitudes are held in a way that is appropriately responsive to those reasons, they are justified, for want of a better term, "retrospectively."¹ The second notion is that of the structural rationality² or coherence of certain combinations of attitudes. Here the idea is that there are certain combinations of attitudinal mental states that it is rationally impermissible to hold jointly. For example, it's structurally irrational to simultaneously believe p and believe not-p, or to simultaneously intend to F, believe that one cannot F unless one Gs, and fail to intend to G. Unlike a claim about justification—say, that believing p is justified—the claim that one may not rationally (believe p and believe not-p) is completely silent on the substantive merits of any one individual attitude. It merely puts a rational constraint on which attitudes can be combined. The third notion is that of correct reasoning, and rules thereof. Here reasoning is being understood—perhaps somewhat narrowly or idiosyncratically—in terms of transitions between attitudes.
For helpful discussion and feedback, I’m grateful to David James Barnett, Paul Boghossian, Cian Dorr, Ram Neta, Jim Pryor, Miriam Schoenfield, Daniel Whiting, members of the epistemology reading group at NYU, and the editors of this volume. ¹ The term “prospective” justification comes from Pryor (2018: 112), and corresponds to what many philosophers call “propositional justification.” The ugly talk of “retrospective” justification is my own. Epistemologists (including Pryor) often label the latter notion “doxastic” justification, but this label inherently resists generalization to non-doxastic attitudes, and my account is supposed to apply to both the doxastic and the practical domains. ² For this terminology, see, e.g., Scanlon (2007) and Fogal (2015).
So correct reasoning involves making a correct transition from one attitude to another. For instance, it's common to say that to move from believing p and believing (if p then q) to believing q is correct reasoning. And it's common to say that moving from intending to F and believing that one cannot F unless one Gs to intending to G is correct reasoning. This chapter is part of a broader project investigating the relationship between these three normative notions, in both the practical and doxastic ("theoretical") cases. My view is that the relationships between these notions are less tight than many have implicitly assumed (when they have even distinguished them at all). This chapter can be seen as continuing my case for this broad view. In previous work (Worsnip 2018a), I argued for pulling apart the first and second notions and against some putative connections between them. Here I'm going to argue that the third notion is likewise to be pulled apart from both the first notion and the second. So, after a few more brief remarks on the notion of rules of correct reasoning (Section 1), I will argue that attempts to understand rules of reasoning in terms of the justification of individual attitudes fail (Section 2). Then, I will go on to further deny that rules of reasoning correspond to either requirements or permissions of structural rationality (Section 3). Out of the ashes of these attempts to reduce or at least identify correct reasoning with one of the other normative notions, I will make some gestures towards an account of correct reasoning as a more sui generis notion (Section 4). I suggest that this sui generis account has some independently interesting results.
1. A Bit More on Correct Reasoning

I don't want to give too specific an account of (rules of) correct reasoning at the outset, lest that prejudge the issues to be discussed. Rather, I am going to assume that there is some intuitive if inchoate notion of (rules of) correct reasoning that we can grasp, and that we are in the course of investigating which (if any) more precise notions in the neighborhood of this intuitive notion should feature in our ultimate theory of normativity. However, we can explicate the notion we're interested in somewhat by paying attention to paradigmatic examples of the kind of rules that it is intuitively correct to reason with. Here are two examples, one theoretical and one practical:

Modus Ponens Rule. From the belief p and the belief if p then q, derive the belief q.

Instrumental Rule. From the intention to Φ and the belief that to Φ you must Ψ, derive the intention to Ψ.

I offer these rules only as rough examples. They may need some modification. I even leave open the view that they should ultimately be rejected (indeed, I'll consider such a view later on). But even as initially plausible candidates to be rules of correct reasoning, they offer some (rough) fix on the notion we are after.
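Purely as compact shorthand (the notation is mine, not the chapter's), the two rules can be displayed as attitude-transition schemas, where B marks belief and I marks intention:

$$\frac{B(p) \qquad B(\text{if } p \text{ then } q)}{B(q)}\ \text{(Modus Ponens Rule)} \qquad \frac{I(\Phi) \qquad B(\text{to } \Phi \text{ you must } \Psi)}{I(\Psi)}\ \text{(Instrumental Rule)}$$

The horizontal line here marks a transition between attitudes, not logical consequence between propositions; and, as emphasized just below, a correct rule in this sense licenses the transition without requiring one to make it.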
One thing that is for sure is that we will not want to understand these rules so that, whenever a rule is correct, one is required to employ such a rule on all the occasions that one has the premise-attitudes that it mentions. For example, it cannot be right that whenever one has the belief p and the belief (if p then q), one is required to come to believe q. One reason for this is that it is also (sometimes) permissible to give up one of one’s original beliefs instead.³ This is particularly clear if we imagine the case so that, in one’s initial state, one believes p and believes if p then q, but also positively believes not-q. In this kind of case, plausibly one should (at least usually) give up one of these three inconsistent beliefs, but there is no general reason why one should, in all such cases, give up one’s belief in not-q (and, indeed, form a belief in q) rather than give up one’s belief in p or in if p then q. There is nothing privileged about reasoning via modus ponens rather than modus tollens. So we should understand the claim that (e.g.) modus ponens is a rule of correct reasoning as coming to something like the claim that when one reasons via modus ponens, one reasons correctly. There may in any particular case be other ways of reasoning correctly, some of which may be incompatible with reasoning via modus ponens. Second, I do not want to presuppose at this stage that all valid reasoning is correct reasoning; I will treat that as a substantive issue to be settled by our best theory of correct reasoning. Validity is a logical notion, concerned with what follows from what; correctness is a normative notion. I will try not to use “valid” and “correct” interchangeably.⁴ A final general point about the general, intuitive notion of correct reasoning is that it focuses specifically on transitions between states. So, in the sense we’re interested in, one can be reasoning correctly even if one starts with unjustified premises, and so ends up with an unjustified conclusion. What is at fault in such a case is not one’s reasoning itself but one’s premises.⁵ There is a distinctive kind of rational achievement in being good at reasoning, and it is different from that of having justified starting points.
2. Correct Reasoning and Justification

For the reason just given, someone who wants to link correct reasoning and justification cannot simply propose that correct reasoning always issues in a justified attitude. Nevertheless, many philosophers take a slightly subtler relationship between correct reasoning and justification for granted: they assume that rules of correct reasoning preserve justification.
³ This point is forcefully made by Harman (1986: 11–12). See also Broome (1999: 405; 2013: 82), Scanlon (2007: 84), and (for the analogous point in the practical case) Greenspan (1975: 272–3), Darwall (1983: 46–7), and Bratman (1987: 24–7). ⁴ Harman (1986: 3–6) thinks that to talk of a valid rule of reasoning, or even to talk of deductive reasoning, is to commit a category error: there are only valid/deductive arguments. I don't go this far: I think it is perfectly sensible to talk of deductive reasoning, and the validity of such reasoning. ⁵ Again, see Harman (1986: 7).
Let us call this the "preservation thesis." As Jonathan Way and Daniel Whiting put it, the idea is this:

If you reason correctly from justified premise-attitudes, you will reach a justified conclusion-attitude. (Way and Whiting 2016: 1876)⁶,⁷
I will begin by taking it that we want the account of rules of correct reasoning to vindicate rules of reasoning that are paradigmatically taken to be correct, such as the modus ponens rule and the instrumental rule. (Later, I'll consider the revisionary strategy of rejecting these rules.) I'll begin with the modus ponens rule, before turning more briefly to the instrumental rule, arguing that in both cases the preservation thesis fails. In the current operative sense of "preserve," to say that the modus ponens rule preserves justification is to say that, for all cases, if one is justified in believing p and justified in believing (if p then q), then one is justified in believing q. The case that I will use to undermine this claim is familiar (though it is typically used for other purposes).⁸ Consider a long chain of deductions via modus ponens:

P₁
If P₁, then P₂
So, P₂
If P₂, then P₃
So, P₃
[...]
If Pₙ₋₁, then Pₙ
So, Pₙ

Let's call the premises in such a chain that are not themselves derived from other premises in the chain the "non-derived" premises. In the above chain, P₁ is a non-derived premise, and so are all the conditionals ((If P₁, then P₂), (If P₂, then P₃), etc.). By contrast, P₂, P₃, etc., are derived in the course of the chain of deductions. If this chain of deductions is long enough, then even though one is justified in believing every non-derived premise in the deduction, one may not be justified in believing Pₙ. This is because each premise, compatibly with its being justified, carries some small risk of being false.⁹ As the chain of deductions continues, these small risks aggregate, until one arrives at conclusions that may (unless they enjoy some other, independent support) have a significant chance of being false.
⁶ Wedgwood (2012: 273) endorses a very similar thesis. ⁷ Are we talking about prospective or retrospective justification here? As they state it, I assume Way and Whiting's principle is about retrospective justification. But one could also state an analogue of the principle for prospective justification: that if one could correctly reason from prospectively justified premise-attitudes to some conclusion-attitude, then the conclusion-attitude is prospectively justified. What I say will be adaptable as an argument against either claim. ⁸ The case derives from the preface paradox, originally introduced by Makinson (1965). See Foley (1993), Christensen (2004), and Sturgeon (2008) for particularly powerful reiterations. ⁹ Some would challenge this modestly fallibilist idea, that being justified in believing p is compatible with some small risk of being mistaken. See Sutton (2007), Littlejohn (2012), and perhaps the latter-day Williamson, though not the Williamson of Knowledge and its Limits (cf., e.g., Williamson 2000: 9). I cannot defend it in the space available here, though I have defended a more robust form of fallibilism elsewhere (Worsnip 2015).
To put the point more explicitly in terms of evidential support: supposing one has excellent but less than infallible evidence for P₁, and excellent but less than infallible evidence for (If P₁, then P₂),¹⁰ then barring further independent evidence for P₂, one's evidence for P₂ will be slightly worse than one's evidence for either of the individual premises from which it is derived. And then, supposing one's evidence for (If P₂, then P₃) is likewise excellent but less than infallible, one's evidence for P₃ will again be slightly worse. And so on, until we reach some number n that is high enough such that one's evidence for Pₙ is too weak to justify one in believing Pₙ. But this is inconsistent with the thesis that the modus ponens rule preserves justification, in the way that we are currently understanding it. We are supposing that every non-derived premise in the chain of deductions is justified. So P₁ and (If P₁, then P₂) are both justified. So, if the modus ponens rule preserves justification, P₂ is also justified. Since (If P₂, then P₃) is another non-derived premise, it too is justified. So if the modus ponens rule preserves justification, P₃ is also justified. And so on, until we reach the result that Pₙ is justified. That contradicts the result from the previous paragraph, that even when all the non-derived premises in the chain are justified, Pₙ can fail to be justified. So the modus ponens rule does not preserve justification in the current sense. I am, of course, crucially working with an "on–off" notion of justification here. But that is necessitated by the way we are currently understanding the claim that correct reasoning preserves justification. Our way of understanding this, taking our lead from Way and Whiting, amounts to the claim that if one reasons correctly from justified premise-attitudes, one will arrive at justified conclusion-attitudes. It is hard to see how to make sense of this claim as employing anything other than an "on–off" notion of justification.¹¹ Perhaps, however, this shows that Way and Whiting's principle is not the best way to interpret the spirit behind the preservation thesis. Suppose we begin instead with a scalar notion of justification. Having done so, one might say, it becomes clear that the problem with the long chain of deductions via modus ponens is still ultimately a problem with the justification that one starts with for the premises employed in this long chain of deductions, rather than its failing to be preserved.
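To make the aggregation vivid with some toy numbers (the figures are mine, purely for illustration): suppose each of the n non-derived premises enjoys evidential probability of at least 1 − ε. Since Pₙ is entailed by those premises jointly, Pₙ is false only if at least one of them is, so the union bound gives

$$\Pr(P_n) \;\geq\; 1 - \sum_{i=1}^{n} \Pr(\text{premise}_i \text{ is false}) \;\geq\; 1 - n\varepsilon.$$

With ε = 0.01 this bound guarantees nothing once n reaches 100; and if, hypothetically, the premises were probabilistically independent, the probability of Pₙ would be around 0.99¹⁰⁰ ≈ 0.37. Highly justified premises are thus compatible with a conclusion that falls below any reasonable threshold for justification.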
¹⁰ And that one's evidence for P₁ and one's evidence for (If P₁, then P₂) are at least somewhat independent of each other. ¹¹ This is compatible, of course, with thinking that in some sense underlying this "on–off" notion of justification is a more scalar notion of justification, such that one's being "on–off" justified in believing p is at least sometimes a matter of meeting some (perhaps vague and/or situation-dependent) threshold of a more gradable or scalar justification for p.
Even if this scalar justification suffices for each of those premises to be justified in some “on–off ” sense, it is still less than perfect, and this is what results in the conclusion of the argument being unjustified (in both a scalar and an “on–off ” sense). One might then understand the preservation thesis as claiming that any deficiency in one’s justification for the conclusion of an instance of correct reasoning can be traced to a deficiency in one’s justification for its (non-derived) premises. However, even this new interpretation of the preservation thesis fails. To see this, consider a variant on the long chain of deductions where each step involves a singlepremise deduction. In contrast to the deduction via modus ponens, which moves from two premises—P₁ and (If P₁, then P₂)—to the conclusion P₂, we are now to suppose that P₁ simply entails P₂; no conditional premise is needed for the inference to be valid. And similarly for every other step. So the long chain of deductions simply looks like this: P₁ So, P₂ So, P₃ [...] So, Pn Unlike the chain of deductions via modus ponens, this whole chain uses only one non-derived premise, P₁. If every step is valid—as we are to suppose—then everything else in the chain follows from P₁ alone. Consequently, if there are deficiencies in one’s justification for Pn that are traceable to the justification of its non-derived premises, they must be deficiencies in one’s justification for P₁. Let us now suppose that one does have strong justification for P₁. Could it still be the case that one’s justification for Pn is poor? Here is one reason to think so:¹² we are not logically infallible, and we sometimes make mistakes when doing deductions. Just as we can be mistaken about the premises of arguments, then, we can also be mistaken about transitions from one step to another. For each step one makes in a deductive argument, then, there is in at least some good sense a small chance that one has made an invalid deductive inference. But these risks of error do still aggregate even in a chain of single-premise deductions, and when such chains are long, they will become significant.¹³ ¹² The line of argument here is directly indebted to that given by Lasonen-Aarnio (2008) and Schechter (2013). See DeRose (1999: 23, fn. 14) for an earlier voicing of a closely related thought. ¹³ These claims, once again, are somewhat controversial. One might claim that, as long as one doesn’t in fact make a logical mistake in deduction, merely having evidence that one has made such a mistake has no impact on one’s justification for believing the conclusion. (See Wedgwood 2012: esp. 290–4; see also remarks in this direction made by Weatherson (ms.) and Lasonen-Aarnio (2014: esp. 322–3).) Some of what I say below will put pressure on such claims. But obviously this is a bigger topic than I can decisively settle here. And I should be up front that the way of thinking I’m endorsing sits badly, for fairly obvious reasons, with probabilism as a theory of the structural requirements on credences. That will be a great cost from some people’s point of view, but not so much from mine.
In fact, I think that there are two quite distinct ways that these risks of error can arise. One goes through a kind of higher-order evidence. If there are enough steps in the chain, and one has reason to think that one is not logically infallible, one should suspect that there's a very good chance one has gone wrong somewhere in the chain. And, at least on most theories of higher-order evidence,¹⁴ this has at least some effect on one's justification for believing the conclusion, Pₙ, at least on the basis of the chain of deductions. If this were the only way in which the risk of error could arise, it might encourage a picture on which each step in the chain of deductions itself transmits justification perfectly, as it were, only for this justification to then be "undermined" by the higher-order (justified) belief that one may have gone wrong somewhere in the deduction. But I think there is another way in which risk arises that calls into question the transmission of justification in each step. This is supplied by the actual risks that, given one's logical fallibility, one always incurs when engaged in the activity of deductive inference. Here the risk is not created by reasons to think that one is logically fallible, as with the higher-order risk, but by one's actual logical fallibility. The thought is that at each step of the deduction, the risk (given one's logical fallibility) that one is not in fact performing a valid deductive inference means that a small amount of justification is "lost," such that the conclusion is (again, barring independent evidential support) slightly less justified than the premise. This will be controversial, so here's a way of motivating it. Suppose that some proposition p entails some other proposition q, but that the entailment is highly non-obvious. Now consider a person who reasons from a belief in p to a belief in q, but without any grasp of this entailment whatsoever. His inference is valid only by fluke: there is in a good sense, then, a strong chance that his inference is invalid, even though it is in fact valid. I hope you share my sense that such a person's justification for q might then be considerably worse than his justification for p.¹⁵ Now, I am not imagining the person who performs a chain of single-premise inferences this way: I am assuming that the entailments in her chain are not especially opaque, and that for each step of her deduction, she has the normal reasons to (implicitly) think she is inferring validly that we typically have when we perform simple deductions. We could say that she grasps the entailments, as long as we do not let the factivity of "grasp" push us in an unwarrantedly infallibilist direction: though her inferences are valid, she is—like all of us—subject to making occasional mistakes, even with simple inferences, and so there is in a perfectly good sense some small chance at each stage that she is inferring invalidly.
¹⁴ Even theories on which the higher-order evidence is not necessarily decisive with respect to what first-order attitude one should take typically allow that such higher-order evidence has some impact. See, e.g., Kelly (2010), Pryor (2013: 99–100), Lasonen-Aarnio (forthcoming). ¹⁵ See also Pryor (2018). Is it merely retrospective justification that he lacks? I don't think so: if the entailment is sufficiently non-obvious, such that he couldn't possibly have grasped it given his situation and cognitive capacities, I think he lacks prospective justification as well.
But now the question is this. If a strong chance that one has inferred invalidly can result in one's conclusion being (much) less justified than one's premise, why couldn't a slight chance that one has inferred invalidly result in one's conclusion being (slightly) less justified than one's premise? In both cases, the diminishing of justification occurs notwithstanding the actual validity of the inference. I think we should conclude that both cases are possible. We have to tread carefully in drawing the lesson for the preservation thesis. As I have just claimed, deductively valid inferences can sometimes fail to fully preserve justification. But it is open to the defender of the preservation thesis to deny that whenever an inference is deductively valid, it is in the relevant sense thereby correct. Indeed, it seems that a defender of the preservation thesis will have to deny that all valid inferences are correct, in order to deal with the simpler cases where the reasoner infers validly completely by fluke.¹⁶ She must then identify some privileged set of deductively valid inferences that constitute the rules of correct (deductive) reasoning, and that always preserve justification. The simple, very general rule to always infer q from p when p entails q will not be in this set, since such entailments can be highly non-obvious and opaque. However, I have been assuming that the modus ponens rule will be in this set. And so now we simply need the point that, just as there can be some small risk of making inferential mistakes in some simple single-premise deduction, likewise there can be a small risk of making inferential mistakes in a two-premise deduction via modus ponens. So, in our original long chain of deductions via modus ponens, the reasoner has some reason to worry—if the chain is long enough—that not every single one of these deductions is in fact a legitimate instance of modus ponens. Even if it happens that each one of them is a legitimate instance of modus ponens, there is still in the relevant sense, from the reasoner's perspective, a risk that it might not have been (in the same way that a belief can be true but in a perfectly good sense carry a risk of being false). From the reasoner's perspective, one of her steps might in fact have been an instance of affirming the consequent, or have involved an equivocation such that it superficially resembles an instance of modus ponens, but in fact invokes a subtly different proposition in the antecedent of the conditional premise than in the unconditional premise. These are the kinds of mistakes we can make, especially when reasoning implicitly and quickly, as we often do.¹⁷ As in the single-premise case, such risks slightly weaken our justification for the conclusions of our reasoning, even when such reasoning is correct. Deficiencies in one's justification for the conclusion of such reasoning that are traceable to these risks are not traceable to a lack of justification for the non-derived premises in one's reasoning. They are incurred in the course of the reasoning itself, notwithstanding its correctness. Thus, even the scalar interpretation of the preservation thesis fails.¹⁸
¹⁶ Cf. Wedgwood (2012: 279).
¹⁷ See, e.g., Kahneman (2011: 45).
It's now time to consider a more radical response to the objections I have voiced so far. I have been assuming that we have some fix on intuitively plausible rules of correct reasoning—the modus ponens rule being one such rule—and that it is the job of a general account to prove its extensional adequacy against this background. But if one is convinced that what it is for a rule of reasoning to be correct just is for it to preserve justification, one might instead try rejecting the idea that the modus ponens rule is, as it stands, a rule of correct reasoning. Perhaps the correct rule will have to be some heavily caveated version of the modus ponens rule.¹⁹ Maybe it tells us to infer via modus ponens when there are no relevant risks of the kind that I have been urging prevent it from always preserving justification. Other caveats may be needed too. I do not think we should be casual about the idea that we can simply caveat the modus ponens rule to take account of the problems I have been discussing, and thereby solve the problems for the preservation thesis.²⁰ First, there is the general worry that the caveats to the modus ponens rule are gerrymandered to save the preservation thesis, and that they are either independently implausible or render the resulting preservation thesis trivial. But even setting this broad methodological point aside, it is very hard to get the caveats to the modus ponens rule right so that they both save the preservation thesis from the above counterexamples and give us a plausible account of the modus ponens rule itself. Here is a first attempt at a caveated rule:

Modus Ponens Rule-Caveated. From the belief p, and the belief if p then q, derive the belief q, unless there are risks sufficient to disrupt the preservation of whatever justification one has for these two beliefs to the belief in q.

This caveated rule faces a dilemma. Either we are to understand "disrupt" as covering any kind of slight disruption to the preservation of justification, or we are to understand it as covering only significant disruptions to the preservation of justification. On the former interpretation, the caveat seems to exclude every (or at least nearly every) instance of modus ponens reasoning from falling under the modus ponens rule. For any inference carries with it some risk of error, and so the preservation of one's justification is always slightly disrupted in inference.
¹⁸ One might now suggest that, having gone scalar about justification, we should also go scalar about belief and work with credences instead of outright beliefs, and with rules of reasoning for credences. Perhaps then we can say (for example) that the correct modus ponens rule should go from high credences in p and (if p then q) to a slightly lower credence in q. (See Wedgwood 2012: esp. 288–9; also Field 2009.) But I don't think that a single rule of this kind that would always preserve justification can be specified, even in principle. This is because how much justification is "lost" in an inference will vary from case to case depending on the particular thinker's epistemic position with respect to the inference. ¹⁹ Remarks by Schechter (Chapter 9, this volume) suggest that this is his response to the problems he himself noted with long chains of deductions. ²⁰ Lasonen-Aarnio (2014: 322–3) also argues against such a caveating of our rules of inference. However, she assumes that this bolsters the conclusion that we are always justified in believing the conclusions of such inferences even when we have evidence that we have inferred wrongly.
This implicitly assumes the preservation thesis, in taking it for granted that if we can avoid caveating our rules of reasoning, parallel conclusions about justification will follow.
On the latter interpretation, the caveat fails to deal with the examples involving long chains of inference. For the whole point of those examples is that at no one stage at which the rule applies is the disruption large. Rather, through many repeated steps, the inferences eventually lead to an attitude which is much less justified than the attitude or attitudes we began with. So on this interpretation, the preservation thesis will still fail even for the caveated modus ponens rule. It seems that the only way to get around this dilemma is to return to an "on–off" notion of justification. The cases where it would be incorrect to infer via modus ponens, on this view, will be those where there are risks such that, although the premises are each on–off justified, the conclusion is not on–off justified. This may be because the premises only just meet the threshold for justification. This sounds more promising. But there is another problem. Remember that a piece of reasoning that starts with unjustified premises can be perfectly correct qua reasoning. This is why the justification-based account of rules of reasoning talks about the preservation of justification, not about whether the conclusion of the reasoning is always actually justified. In itself, the caveated modus ponens rule does not say anything inconsistent with this. But now consider a more complex case. Suppose that the premises of a putative modus ponens inference are (on–off) justified, but only just—such that the conclusion of one's reasoning is not (on–off) justified. Suppose further that one overestimates the (scalar) justification of the premises, thinking that they easily meet the threshold for on–off justification. As such, one takes it that it is perfectly safe to infer the conclusion—and does so. This reasoning is not vindicated as correct by the caveated modus ponens rule, since there are risks sufficient to disrupt the preservation of one's on–off justification from premises to conclusion. But that is an odd result, especially when juxtaposed with the claim that in the simpler case where one reasons via modus ponens from straightforwardly unjustified premises, one is reasoning correctly. If there is nothing wrong with one's reasoning in the simpler case, what could be wrong with one's reasoning in the more complex case? In both cases, one makes a mistake only in beginning with premises that are not well-supported enough to justify the conclusion—not, intuitively, a mistake in one's reasoning itself. The caveated version of the modus ponens principle yields the bizarre result that when one makes a big mistake about the justification of one's premises (considering them justified when they are not), one reasons correctly—but when one makes a smaller mistake about the justification of one's premises (getting it right that they are on–off justified, but considering them to be somewhat more justified than they are), one reasons incorrectly! It may seem like there's a clear fix to the caveated modus ponens rule to take care of this problem. One could say the caveat should concern cases in which one believes there to be risks that disrupt the preservation of justification, rather than cases in which there are such risks.
This avoids the problem raised above, but at the price of once again opening the way for failures of the preservation thesis even given the caveat on the modus ponens rule. For consider a case where there are risks that disrupt the preservation of justification, but one lacks the belief that there are such risks. If these were instances of correct reasoning—as they are according to the current proposal—they would be instances of correct reasoning that do not preserve justification. So this kind of caveat would not save the preservation thesis. Thus, I think the better thing to say is that the original, uncaveated version of modus ponens itself is the rule of correct reasoning, but that risks associated with the preservation of justification can be reasons, from time to time, not to employ it, notwithstanding its correctness. As such, the preservation thesis cannot be sustained. The modus ponens rule provides a counterexample. Very similar strategies could be mimicked for other rules of theoretical reasoning. I want to round off this section by briefly sketching how the problems developed here also extend—less familiarly—to the instrumental rule and thus to rules of practical reasoning. The extension is simple. Consider a long chain of practical reasoning via the instrumental rule, where 'A' stands for an action:

Intend(A₁)
Believe(To A₁, Must A₂)
So, Intend(A₂)
Believe(To A₂, Must A₃)
So, Intend(A₃)
[...]
Intend(Aₙ₋₁)
Believe(To Aₙ₋₁, Must Aₙ)
So, Intend(Aₙ)

Let us grant that both one's intention to A₁, and each of the instrumental beliefs in the chain, is justified. (This amounts to granting that each non-derived premise in the chain of reasoning is justified.) Even when this is so, each instrumental belief comes with a small risk of error. One's justification for an intention to take a means tracks both one's justification for intending the relevant end, and one's justification for believing that the means is necessary for the end. So, since one's belief that to A₁, one must A₂ comes with a small risk of error, one's justification for intending to A₂ is slightly worse than one's justification for intending to A₁. Likewise, one's justification for intending to A₃ is slightly worse than one's justification for intending to A₂. And so on, such that if there are enough steps in the chain, one's justification for intending to Aₙ may be very weak (at least bracketing independent reasons to Aₙ that are not derivative on one's reason to A₁). Put another way, each instrumental belief in the chain introduces an extra risk that it may not be the case that one must Aₙ in order to A₁, and these aggregate so that overall it may be very doubtful whether one must Aₙ in order to A₁.
Then, one's justification for intending to Aₙ—again, insofar as it is derivative on one's justification for intending to A₁—may be very weak. Moreover, one may be (justifiably) doubtful about whether every step in the chain is a correct instance of instrumental reasoning. It could be—from one's own perspective—that some step involved the fallacious, affirming-the-consequent-esque pattern [Intend(Φ), Believe(To Ψ, Must Φ), so Intend(Ψ)]. Or one step may have involved the pattern [Intend(Φ), Believe(To Φ, Can Ψ), so Intend(Ψ)]. And so on. Provided there is a risk of such mistakes, even a scalar interpretation of the preservation thesis will fail for instrumental reasoning just as it failed for deductive reasoning: there will be deficiencies in one's justification for intending to Aₙ that are not traceable to deficiencies in one's justification for any of the premises of the chain of reasoning.
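The arithmetic mirrors the doxastic case. On the same toy assumptions as before (each necessity-belief true with probability at least 1 − ε; the figures are mine, purely for illustration), the composite claim that one must Aₙ in order to A₁ holds only if every one of the n − 1 links in the chain of necessities holds, so

$$\Pr(\text{must } A_n \text{ in order to } A_1) \;\geq\; 1 - (n-1)\varepsilon,$$

a bound that becomes vacuous for long chains; with ε = 0.02 and forty links, independence would put the composite necessity at roughly 0.98⁴⁰ ≈ 0.45, and one's derivative justification for intending Aₙ would be correspondingly weak.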
3. Correct Reasoning and Structural Rationality

What I've said so far has put pressure on attempts to forge a tight link between correct reasoning and justification. But perhaps even if there is no tight link between correct reasoning and justification, there is a tight link between correct reasoning and structural rationality. One proposal of this sort is made (at least on one possible interpretation) by Nadeem Hussain (ms.). Roughly, the idea is that reasoning is correct just when it brings one to satisfy requirements of structural rationality.²¹ However, as it stands this cannot be right. First, the purported condition for correct reasoning is not necessary. As I said in Section 1, the notion of correct reasoning is not such that whenever reasoning is correct, one is always required to undertake such reasoning. Although this claim is not quite the same as the claim that reasoning is correct when it brings one to satisfy structural requirements, the same sorts of cases undermine both claims. For example, consider disjunction introduction. It is surely correct reasoning to move from believing p to believing (p ∨ q). But is there a structural requirement to the effect that believing p without believing (p ∨ q) is always rationally forbidden? It seems not. Suppose you believe that it's raining. If the purported structural requirement held, you'd be irrational if there were even one disjunction of the proposition that it's raining and some other proposition q (quantifying over all possible propositions), such that you failed to believe the disjunction. For example, you would be irrational if you failed to believe the disjunction that either it's raining or V.I. Lenin was an off-spin bowler for Nottinghamshire County Cricket Club. But that is not plausible: you are surely not irrational for lacking this belief. (You might not even have the concepts, such as the concept of an off-spin bowler, that are plausibly required to have it.) Rationality does not require you to fill your mind with trivial, useless extensions of your existing belief state.²² So, disjunction introduction is correct reasoning, without bringing one to satisfy any structural requirement.²³
²¹ See, e.g., Hussain (ms.: 46).
Second, the purported condition for correct reasoning is not sufficient.²⁴ Suppose you intend to attend your court appearance, and believe that to do so you must get your car fixed, but you fail to intend to get your car fixed. You violate the instrumental requirement, and if you eliminate any of these three states, you will ipso facto no longer be violating (this particular instance of) this requirement. But there are routes of reasoning that achieve this, yet do not constitute correct reasoning. For example, suppose your aunt Hilda fixes cars, and you believe that to get your car fixed you must call Hilda. And suppose you do intend to call your aunt Hilda. If you were to reason from these two states (intending to call Hilda and believing that to get your car fixed you must call Hilda) to intending to get your car fixed, your reasoning would not be correct: it would be like a practical version of affirming the consequent.²⁵ Nevertheless, in adopting the intention to get your car fixed, you would come to satisfy a structural requirement that you previously failed to satisfy, since you would no longer be in the combination of states where you intend to attend your court appearance, believe that to do so you must get your car fixed, but fail to intend to get your car fixed.²⁶ The first, and perhaps also the second, of these problems arise for Hussain's view because he tries to tie correct reasoning to structural requirements. One might think that the solution here is to tie correct reasoning not to structural requirements but to structural permissions. Indeed, this tack is taken by John Broome, who claims that "if it is correct to reason to some conclusion, that is because rationality permits you to reach that conclusion" (Broome 2013: 219).²⁷
²² Yet again, this point was heavily stressed by Harman (1986: 5–6, 12), in a slightly different context. See also Broome (2013: 158–9, 246–7). ²³ Hussain recognizes this problem. His reply is that the requirements of rationality supply norms of how you ought to reason if you are going to reason. I do not think this helps, however. First, as it stands the proposal is too vague. Perhaps I am, and ought to be, reasoning about something, but that still doesn't mean that I ought to be reasoning from the belief that it's raining to the belief that either it's raining or V.I. Lenin was an off-spin bowler for Nottinghamshire County Cricket Club. Second, the proposal simply does not make this original problem go away. Suppose I do in fact engage in the reasoning via disjunction introduction, moving from the belief that it's raining to the belief that either it's raining or V.I. Lenin was an off-spin bowler for Nottinghamshire County Cricket Club. My reasoning is correct. But then Hussain's claim that (correct) reasoning involves coming to satisfy rational requirements still entails that there is some structural requirement that I now satisfy and previously did not satisfy. And this does not seem to be so. Pointing out that it may be that I'm not in a case where I ought to reason has not solved the problem. ²⁴ For a structurally similar example to the one I am about to give, see Broome (2013: 246). ²⁵ Compare: I intend to go to the bar; I believe that to drink five bottles of tequila I must go to the bar; so I intend to drink five bottles of tequila. ²⁶ Nor would you have purchased this compliance at the price of creating a violation somewhere else in your belief states. It is not structurally irrational for you to be in a combination of states where you intend to call Hilda, believe that to get your car fixed you must call Hilda, and intend to get your car fixed. What went wrong was how you got to these states, not the combination you ended up with.
Here Broome is still speaking of what we have been calling structural rationality. But crucially, on Broome's view, it is rational permissibility rather than rational requiredness that is most tightly linked to correct reasoning. According to Broome, (correct) reasoning will often bring one into satisfaction of rational requirements, and is one of our main ways of doing this.²⁸ But equally, "in many cases, you commit no offense against rationality by failing to do a piece of reasoning that would have been correct had you done it."²⁹ Broome develops his account by appealing to the notion of a "basing permission." A basing permission specifies that it is rationally permissible to base some particular attitude on some other attitude or attitudes.³⁰ According to Broome, "each permission will determine a rule, and reasoning by correctly following that rule will be correct."³¹ For instance, reasoning by modus ponens is correct because it is always (structurally) rationally permissible to base a belief in q on believing p and believing if p then q. Here is the way Broome writes out the basing permission that corresponds to the modus ponens rule (very slightly adjusted to match my notation):

Modus Ponens Permission. Rationality permits N that N believes q at some time on the basis of believing p at some time and believing (if p then q) at some time.³²

Now, it is not immediately obvious what this kind of construction, with a permission to believe some unspecified variables, means. In interpreting requirements of structural rationality, it's natural to suppose that variables are implicitly universally quantified over all propositions.³³ So, for example, for all propositions p, you are rationally required not to both (believe p and believe not-p). This is easy to make sense of. What about a permission, like the modus ponens permission, however? Broome tells us that "a basing permission is nothing other than the negation of a basing prohibition." A basing prohibition is a requirement that prohibits basing some attitude on some other attitudes. Given this, the modus ponens permission amounts to the following:

¬(Rationality requires of N that N does not believe q at some time on the basis of believing p at some time and believing (if p then q) at some time).

²⁷ [Note added at the time of publication, March 2019:] The final version of the present chapter was submitted in January 2017. I subsequently wrote an extended critical discussion of Broome's book (Worsnip 2018b), one section of which somewhat overlaps my discussion of Broome's view over the next few pages. Although the critical discussion ended up appearing in print first, it was submitted later, and with the benefit of some comments from Broome on my objection to his view that led me to make some important revisions to it. While I still think the core of the objection given here succeeds, interested readers should consult Worsnip 2018b: 83–89 for the definitive version of it. I am grateful to the editors of the present volume, to Oxford University Press, and to the editors of Problema, for permission to include the overlapping material in both pieces. ²⁸ See Broome (2013: 207). ²⁹ Ibid.: 219. ³⁰ Ibid.: 189–90. ³¹ Ibid.: 247; see also 255. ³² Ibid.: 191. ³³ Or attitudes or actions, in other cases.
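Since what follows turns entirely on where the negation sits relative to the implicit quantifiers, it is worth recalling the elementary scope facts in standard notation (this is a reminder of basic quantifier logic, not anything in Broome's text):

$$\neg\,\forall x\,\varphi(x) \;\equiv\; \exists x\,\neg\varphi(x), \qquad\text{whereas}\qquad \forall x\,\neg\varphi(x) \;\equiv\; \neg\,\exists x\,\varphi(x).$$

Over a nonempty domain, the second (wide-scope quantifier) claim entails the first (wide-scope negation) claim, but not conversely; this is why reading (a) below amounts to a weak existential claim, and reading (b) to a strong universal one.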
Unfortunately, the lack of quantifiers makes this claim ambiguous, depending on whether the negation goes in front of the quantifiers or after them:

(a) ¬(∀N ∀p ∀q(Rationality requires of N that N does not believe q at some time on the basis of believing p at some time and believing (if p then q) at some time)).

(b) ∀N ∀p ∀q ¬(Rationality requires of N that N does not believe q at some time on the basis of believing p at some time and believing (if p then q) at some time).

These claims are not equivalent. I am genuinely unsure as to which Broome intends. Plausibly, when Broome says that a basing permission is just the negation of a basing prohibition, the more natural reading is (a). After all, a full statement of a basing prohibition would include its quantifiers. It is (a) that is equivalent to saying that there is no basing prohibition forbidding one from believing q on the basis of believing p and believing (if p then q). On the other hand, if we read the original statement of the modus ponens permission as itself implicitly universally quantified (as Broome’s claims about requirements are), then we get (the logical equivalent of) reading (b). So I am not sure how to read Broome here. I’ll now argue that neither reading manages to get Broome what he wants: reading (a) is too weak to vindicate the proposed connection between structural rationality and correct reasoning, while reading (b) is too strong to be plausible. Begin with (a). (a) just says that, quantifying over all propositions p and q, it is not always forbidden to believe q on the basis of believing p and believing if p then q. This reading of the modus ponens permission is extremely weak. It is equivalent to an existentially quantified reading of the modus ponens permission as originally stated: there are some cases where it is permissible to believe q on the basis of believing p and believing if p then q. But once we fix on this understanding of basing permissions more generally, Broome’s core claim that every basing permission generates a rule of correct reasoning is implausible. For example, consider the following claim:

(1) ¬(∀N ∀p ∀q(Rationality requires of N that N does not believe q at some time on the basis of believing p at some time)).

What this says is that there are some propositions p and q such that it is permissible to believe q on the basis of believing p. This is presumably true. But surely there is no general rule of correct reasoning that says: From the belief p, derive the belief q. Thus, there is no rule of correct reasoning corresponding to this very weak kind of basing permission. One might now say that the commitment should only be that when it is permissible to believe q on the basis of believing p, it is correct to reason from believing p to believing q. But if that is the relationship between structural rationality and
correct reasoning then, likewise, we would say only that when it is permissible to believe q on the basis of believing p and believing (if p then q), it is correct to reason from believing p and believing (if p then q) to believing q. But that is less than what the modus ponens rule says: the modus ponens rule says that quite generally it is correct to reason from believing p and believing (if p then q) to believing q. So this way of developing the view would not vindicate the modus ponens rule (as Broome explicitly promised to do). Consider now claim (b). (b) says, in effect, that it is always rationally permitted to believe q on the basis of believing p and believing (if p then q). This claim, I think, is implausibly strong. Even confining ourselves to purely structural rationality (and so bracketing whether your beliefs in p and (if p then q) are themselves justified), there are cases where, in believing q on the basis of believing p and believing (if p then q), you are not structurally rational. For a first example, suppose that you believe that it is raining, and believe that if it is raining then it is not raining. And suppose that on the basis of these two beliefs, you believe that it is not raining. This combination of states is structurally irrational, since it involves your believing that it is raining and believing that it is not raining; that is, believing contradictory propositions.³⁴ For a second example, return to the long chain of modus ponens deductions that I discussed in Section 2. In this case, I said, there is a slight risk of error associated with each premise in the deduction, and these risks aggregate so that eventually the probability of the conclusion is low. But now let us suppose that the agent herself recognizes these risks of error, and even recognizes how they aggregate. Would she be even structurally rational to base belief in the conclusion of the long chain on its premises? I think not. But if it were always structurally rational to base a belief q on a belief p and a belief (if p then q), then each new conclusion drawn in the chain would be structurally rational, and so ultimately would the final conclusion. I conclude that neither Hussain nor Broome has given us a convincing account of a tight relation between correct reasoning and structural rationality. Of course, there might be other ways of trying to do so. I can think of only one that seems initially promising. One might say that, if one begins with a belief set that does not violate any ³⁴ It might be objected here that the irrationality is not specifically in the basing of the belief that it is not raining on the beliefs that it is raining and that if it is raining, then it is not raining. Rather, it is just in the initial state of believing that it is raining and believing that if it is raining, then it is not raining. I am not sure about this: there does seem to also be something irrational about forming the belief that it is not raining partly on the basis of the belief that it is raining. But even if we concede the point, it does not save the present way of reading the modus ponens permission as quantifying over all propositions.
For Broome makes it clear that he intends the modus ponens permission to be read as shorthand for ‘Rationality permits N that N believes p at some time, N believes (if p then q) at some time, and N believes q at some time on the basis of believing p at some time and believing that (if p then q) at some time’ (see the general form of basing permissions set out on Broome 2013: 190). And, read as quantifying over all propositions, this full statement of the permission is clearly falsified by a case where p and q are contradictories: rationality does not permit this combination of states in such a case, since rationality does not permit believing p and (if p then q) when p and q are contradictories.
structural requirements, and reasons correctly, one will never be left in a position where one does then violate a structural requirement. That is: reasoning correctly can never introduce any new structural irrationality into one’s belief set. However, I think once again that the long chain of deductions case is a counterexample to this claim.³⁵ Suppose that one believes every (non-derived) premise of the long deduction, and assigns a very high but non-one credence to each such premise. Surely there need be no structural irrationality involved in this combination of states so far. Let us now also add that, recognizing the way that the risk aggregates, one has a credence < 0.5 in the conclusion of the long chain of deductions. This does not seem to introduce any structural irrationality into one’s state either.³⁶ But, one can correctly reason, via a series of modus ponens inferences, from the beliefs in the premises to belief in the conclusion of the long chain of deductions. However, it is structurally irrational (I claim) to believe a proposition while also assigning it credence < 0.5 (thus regarding it as more likely to be false than true). So we have a case where correct reasoning has introduced structural irrationality into a previously structurally rational belief state.³⁷
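By way of illustration, suppose (simplifying) that there are 70 non-derived premises, that each is probabilistically independent of the others, and that each is held at credence 0.99. A coherent credence in their conjunction is then

0.99⁷⁰ ≈ 0.495 < 0.5,

which is why a credence below one half in a conclusion that is true only if every premise is true need involve no structural irrationality.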
4. Correct Reasoning as a Sui Generis Notion

One reaction to what I have argued so far would be to give up on the notion of correct reasoning, or at least that of any systematic rules of correct reasoning that will include modus ponens, the instrumental rule, and so on. However, I do think there is an intelligible sense in which when one reasons with modus ponens, one reasons correctly, and so I am sympathetic to the idea that correct reasoning can be understood as a sui generis notion, one that is not reducible either to justification or to structural rationality. In the space remaining I can only make some brief remarks about such a sui generis notion and its virtues. So far, I have been maintaining a fairly loose fix on the notion of correctness, trying to leave open what exactly it amounts to. But now I want to propose that the notion of correctness as it features in ‘correct reasoning’ should be understood in a way
analogous to how it features in ‘correct belief.’ Several philosophers have defended the idea that there is a notion of correctness for belief such that a belief is correct iff it is true.³⁸ Although, when it comes to belief, correctness is coextensive with truth, correctness is a normative concept in a way that truth itself is not. A further way to bring out the contrast between correctness and truth is to see that correctness can apply to intentions (and actions), whereas truth cannot. What unifies the notion of a correct belief and a correct intention is that in both cases the notion of correctness corresponds to the maximally “objective” (or, as I like to call it, the “super-objective”) reading of “ought,” the one that is relative to all the facts and not at all constrained by the agent’s epistemic situation. For belief, the agent super-objectively ought to believe whatever is true. For intention (or action), the agent super-objectively ought to (intend to) do whatever will in fact satisfy the ideal practical norms. So, for example, if the ideal practical norms tell one to maximize utility, the agent “super-objectively” ought to (intend to) do whatever will in fact maximize utility, even if she is in no epistemic position to know what this is. Clearly, it may not always be justified or rational (in any ordinary sense of those terms) to believe or intend what it is correct to believe or intend. There are beliefs which would be true but are not supported by one’s evidence, and there are actions that would, as a matter of fact, satisfy the ideal practical norms, though one has no way of knowing this. Indeed, in both cases, one’s evidence can strongly suggest that the correct belief or intention is in fact incorrect. In these cases, the correct beliefs and intentions would still (of course) be correct, but would not be justified. I suggest that we should understand the correctness of following rules like modus ponens and the instrumental rule in the same way. It is always, in the super-objective sense, correct to reason by modus ponens or by the instrumental rule, but, as I have been urging in the last two sections, it is not always justified or rational to do so. Why take these rules to be correct as rules of reasoning in the same way that true beliefs are correct as beliefs? Such reasoning obviously does not always yield correct beliefs: reasoning by modus ponens can lead one to false belief. Nevertheless, these rules do preserve correctness (though, as I argued in Section 2, they do not always preserve justification), and this is one reason to think that reasoning by them is correct qua reasoning, where this (as I said in Section 1) focuses on the status of the transition between states rather than the status of the states themselves. To spell this out: since correctness for belief is truth, deductive rules of inference like modus ponens—which are truth-preserving—are also correctness-preserving. That is: if one begins with correct premises and reasons by modus ponens (or any other valid rule of inference), one will always arrive at a correct conclusion. Similarly, the instrumental rule is correctness-preserving: if you super-objectively ought to Φ, and it’s true that to Φ you must Ψ, then you super-objectively ought to Ψ. Again, the
³⁸ See, especially, Shah (2003), Wedgwood (2002, 2013), Gibbard (2005).
extension to the practical case shows that the notion of correctness-preservation is not just a trivial way of restating the (non-normative) notion of truth-preservation. Interestingly, this suggests an important parallel between deductive reasoning in the theoretical case and instrumental reasoning in the practical case: both are correctness-preserving. This is not to say that all correct reasoning is correctness-preserving;³⁹ nevertheless, that some rule of reasoning is correctness-preserving is at least a good reason to take it to be correct. This picture makes possible a qualified defense of the much-debated claim that, as it is sometimes put, “logic is normative for thought.”⁴⁰ On my account, this is true in the sense that deductive logic supplies us with rules of reasoning that have a normative status and role as rules of correct reasoning—they are not merely inert descriptive principles. But we can also preserve the intuitive thought that there are sometimes situations when we (in a very good sense) shouldn’t reason with deductive rules—in just the same way that often there is a belief that is (as a matter of fact) correct, but that a responsible epistemic agent shouldn’t hold. Moreover, on this picture, all (deductively) valid reasoning is correct reasoning, since all valid reasoning preserves correctness. In Section 1 I bracketed this question as one to be settled by our substantive theory, and in Section 2 I considered some views on which not all valid reasoning is correct reasoning. Given an attempt to maintain a tight link between correct reasoning and justification, it seems that one is under pressure to claim that not all valid reasoning is correct, since reasoning that is only flukily valid, or where the agent is in no position to grasp the validity of the reasoning in question, does not appear to be justified or to preserve justification. But then this leads one into the rather difficult project of non-arbitrarily demarcating the “simple” or “basic” rules which are rules of correct reasoning from other deductively valid inference-patterns that are not to count as correct.⁴¹ On the present picture, by contrast, we do not need to engage in this project, because calling reasoning correct does not commit us to thinking of its instances as justified or justification-preserving. As I have argued, even when one follows very simple rules of reasoning—such as modus ponens—there are small risks of error that can disrupt the preservation of justification. The difference between such rules and much more complex but valid rules of reasoning, then, is not one of kind but of degree. Reasoning by the more complex rules will be unjustified more often, and involve greater disruptions of the preservation of justification, because these complex
³⁹ Most obviously, inductive and abductive reasoning are not (always) correctness-preserving. One might take this to show that there are two interestingly different kinds of rules for reasoning: those that are strictly correctness-preserving and those that are not. One might also claim that the latter category seems to involve a “looser” kind of reasoning that is less strictly rule-governed. ⁴⁰ For discussion, see, e.g., Harman (1986), Sainsbury (2002), Field (2009), and MacFarlane (ms.). ⁴¹ See, e.g., Wedgwood (2012: 279). For discussion of the difficulties of various different ways of identifying such a privileged set of basic rules, see Schechter (Chapter 9, this volume).
rules tend not to be ones that the agent has even an implicit grasp of when she reasons. This is, however, at least somewhat dependent on the agent in question: some thinkers have greater logical capacities than others, a point that again makes it unattractive to try to demarcate some privileged subset of the valid rules that are also to count as correct, where validity does not entail correctness. What we can say, instead, is that the correlation between correctness and justification will be weaker with complex rules than with simple ones. This leads us to what I suspect many may think is the elephant in the room here. I have tried to pull correct reasoning apart from the justification and rationality of the states that one reasons from and to. And I am also claiming that saying that reasoning is correct does not involve any commitment to saying that the reasoning itself is justified either. But if this is right, and if correctness is best understood as a super-objective normative notion, then shouldn’t we also be able to give some account of the justification of reasoning itself? And perhaps that is what we should really care about. So in giving an account of correct reasoning where correctness is understood in this super-objective way, we have changed the subject. Now, one possibility here—which I have been ignoring until now—is to deny that, properly speaking, justification ever attaches to reasoning itself, as opposed to the states that it begins and ends with. In the theoretical case, the kind of justification involved is supposed to be epistemic justification. But plausibly, reasoning is an act.⁴² Can an act ever be epistemically justified? There is at least a challenge to say how this can be so. That said, I will concede for the sake of argument that we can talk of the justification (as distinguished from correctness) of reasoning. Even given this, however, what I want to deny is that there are rules of justified reasoning in the same way that there are rules of correct reasoning. On the picture I have been arguing for, it can be unjustified to follow pretty well any rule of reasoning, given the right setup and background conditions. So if we are looking for rules of reasoning the following of which will always be justified, we are in for a disappointment. So as long as we are looking for rules of reasoning that have some positive normative status, I maintain, the super-objective notion of correctness is the only notion we have to work with. If that’s right, it’s not changing the subject to fix on this notion of correctness in looking for an intelligible notion of “rules of correct reasoning.” Nevertheless, we can still ask: on what will the justification of the employment of a rule of correct reasoning in some particular case depend? It will at least partly depend, I think, on whether one implicitly grasps the correctness of the rule that one is following (if one is following a rule at all).⁴³ That is the obvious difference between the ordinary, good case of justified reasoning on one hand and the case in ⁴² See, e.g., Broome (2013: 235–42), Pettit (2007). ⁴³ Somewhat similar suggestions are made by Fogal (2015: 60), Pryor (2018), and—for the non-basic rules that he does not regard as always justified to employ—Wedgwood (2012: 279).
which someone performs a valid inference by “fluke,” or when one makes an inference that is too complex to justifiably proceed in a single step, on the other.⁴⁴ I’ve heard it objected at this point that this claim raises some kind of problem analogous to that which the tortoise foists on Achilles in Lewis Carroll’s (1895) famous parable. I do not think that it does. What the tortoise insists on (and creates so much trouble by insisting on) is treating facts about entailments—about some set of premises entailing a conclusion—themselves as premises of the argument, before the argument is to be accepted as valid. Transposed into the language of reasoning, and rules of reasoning, the idea would be that the modus ponens rule itself should have the relevant entailment-fact included as one of the beliefs that one must reason from. In other words, the modus ponens rule should in fact read:

Tortoisey Modus Ponens Rule. From the belief p, the belief if p then q, and the belief (if (p and if p then q), then q),⁴⁵ derive the belief q.

As Carroll shows, once we start down this road, we are led into a regress, since one could just as well demand that we also include the belief (if (p and if p then q and (if (p and if p then q), then q)), then q), and so on ad infinitum. However, I have not said that we should modify the modus ponens rule in this way. Rather, I have said that it is a condition of being justified in employing the modus ponens rule—as it originally stood—that one have some (implicit) grasp of its correctness. This is more than a terminological difference, because it does not generate the same regress that we get when we make the tortoisey modification of the modus ponens rule itself.⁴⁶ If the implicit grasp of a rule’s correctness plays a crucial role in one’s justification in employing it, then the notion of a rule of correct reasoning turns out to do important theoretical and explanatory work even when giving an account of the justification of reasoning.⁴⁷ This reinforces, again, that the account of correct reasoning I have been suggesting is not just a way of changing the subject from what we were interested in. We began with an inchoate notion of correct reasoning—presupposing very little about how correctness was to be understood—and we have been looking for sensible, well-defined, theoretically explanatory notions in its neighborhood. I think we have found one in the way that I have proposed that we understand correct reasoning. ⁴⁴ I also think that when one has misleading evidence that one is following a rule of correct reasoning, but isn’t actually doing so, there’s a perfectly good sense in which one’s reasoning is justified. Here I agree with Fogal (2015: 59). ⁴⁵ This already errs in representing the entailment in question merely as a conditional like the original (if p then q) premise. ⁴⁶ Contrast Harman (1986: 7–8), who arguably leads us into a tortoisey mistake when he claims that when one affirms the consequent, one is only making a mistake in the beliefs that one begins with (namely, a belief about what implies what), rather than in proceeding “in accordance with an incorrect rule of revision.” ⁴⁷ Similarly, Wedgwood (2013) argues that the notion of correct belief plays an important role in helping us to understand justified and rational belief derivatively.
References
Bratman, M. (1987). Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
Broome, J. (1999). “Normative Requirements,” Ratio, 12: 398–419.
Broome, J. (2013). Rationality Through Reasoning. Chichester: Wiley-Blackwell.
Carroll, L. (1895). “What the Tortoise Said to Achilles,” Mind, 4: 278–80.
Christensen, D. (2004). Putting Logic in its Place. Oxford: Oxford University Press.
Darwall, S. (1983). Impartial Reason. Ithaca, NY: Cornell University Press.
DeRose, K. (1999). “Introduction: Responding to Skepticism,” in DeRose and Warfield (eds.), Skepticism: A Contemporary Reader. Oxford: Oxford University Press.
Field, H. (2009). “What is the Normative Role of Logic?” Proceedings of the Aristotelian Society Supplementary Volume, 83: 251–68.
Fogal, D. (2015). Bad Attitudes: Rationality and its Discontents. Doctoral dissertation, New York University.
Foley, R. (1993). Working without a Net. Oxford: Oxford University Press.
Gibbard, A. (2005). “Truth and Correct Belief,” Philosophical Issues, 15: 338–50.
Greenspan, P.S. (1975). “Conditional Oughts and Hypothetical Imperatives,” Journal of Philosophy, 72/10: 259–76.
Harman, G. (1986). Change in View: Principles of Reasoning. Cambridge, MA: MIT Press.
Hussain, N. (ms.). “The Requirements of Rationality,” draft manuscript, Stanford University.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kelly, T. (2010). “Peer Disagreement and Higher Order Evidence,” in Feldman and Warfield (eds.), Disagreement. Oxford: Oxford University Press.
Lasonen-Aarnio, M. (2008). “Single Premise Deduction and Risk,” Philosophical Studies, 141/2: 157–73.
Lasonen-Aarnio, M. (2014). “Higher-order Evidence and the Limits of Defeat,” Philosophy and Phenomenological Research, 88/2: 314–45.
Lasonen-Aarnio, M. (forthcoming). “Enkrasia or Evidentialism? Learning to Love Mismatch,” Philosophical Studies.
Littlejohn, C. (2012). Justification and the Truth Connection. Cambridge: Cambridge University Press.
MacFarlane, J. (ms.). “In What Sense (If Any) Is Logic Normative for Thought?,” draft manuscript, University of California, Berkeley.
Makinson, D.C. (1965). “The Paradox of the Preface,” Analysis, 25/6: 205–7.
Pettit, P. (2007). “Rationality, Reasoning and Group Agency,” dialectica, 61/4: 495–519.
Pryor, J. (2013). “Problems for Credulism,” in Tucker (ed.), Seemings and Justification. Oxford: Oxford University Press.
Pryor, J. (2018). “The Merits of Incoherence,” Analytic Philosophy, 59/1: 112–41.
Sainsbury, R.M. (2002). “What Logic Should We Think With?” Royal Institute of Philosophy Supplement, 51: 1–17.
Scanlon, T.M. (2007). “Structural Irrationality,” in Brennan, Goodin, Jackson, and Smith (eds.), Common Minds: Themes from the Philosophy of Philip Pettit. Oxford: Oxford University Press.
Schechter, J. (2013). “Rational Self-doubt and the Failure of Closure,” Philosophical Studies, 163/2: 428–52.
Shah, N. (2003). “How Truth Governs Belief,” Philosophical Review, 112/4: 447–82.
Sturgeon, S. (2008). “Reason and the Grain of Belief,” Noûs, 42/1: 139–65.
Sutton, J. (2007). Without Justification. Cambridge, MA: MIT Press.
Way, J. and Whiting, D. (2016). “If You Justifiably Believe That You Ought to Φ, You Ought to Φ,” Philosophical Studies, 173/7: 1873–95.
Weatherson, B. (ms.). “Do Judgments Screen Evidence?” draft manuscript, University of Michigan.
Wedgwood, R. (2002). “The Aim of Belief,” Philosophical Perspectives, 16: 267–97.
Wedgwood, R. (2012). “Justified Inference,” Synthese, 189: 273–95.
Wedgwood, R. (2013). “Doxastic Correctness,” Proceedings of the Aristotelian Society Supplementary Volume, 87: 217–34.
Williamson, T. (2000). Knowledge and its Limits. Oxford: Oxford University Press.
Worsnip, A. (2015). “Possibly False Knowledge,” Journal of Philosophy, 112/5: 225–46.
Worsnip, A. (2016). “Belief, Credence, and the Preface Paradox,” Australasian Journal of Philosophy, 94/3: 549–62.
Worsnip, A. (2018a). “The Conflict of Evidence and Coherence,” Philosophy and Phenomenological Research, 96/1: 3–44.
Worsnip, A. (2018b). “Reasons, Rationality, Reasoning: How Much Pulling-Apart?,” Problema, 12: 59–93.
9
Small Steps and Great Leaps in Thought
The Epistemology of Basic Deductive Rules
Joshua Schechter
1. Introduction

On a widespread and plausible picture, reasoning is a rule-governed activity.¹ In our reasoning, we employ various rules of inference. Some of the rules we employ are deductive rules of inference—presumably including versions of Modus Ponens, Reasoning by Cases, Reductio ad Absurdum and the like. Other rules we employ are ampliative rules of inference—presumably including some version of Enumerative Induction or Inference to the Best Explanation. This general picture of reasoning raises an important explanatory question: What explains the fact that we are epistemically justified in employing some rules of inference but not others? Why, for instance, are we justified in employing Inference to the Best Explanation (if indeed we are) but not Inference to the Third Worst Explanation? Earlier versions of this chapter were presented at a workshop on the epistemology of logic at the Arché Centre at the University of St. Andrews, a workshop on epistemology at Queen’s University, Belfast, a graduate seminar led by Maria Lasonen-Aarnio at the University of Michigan, a workshop on human reasoning at the University of Notre Dame, a workshop on knowledge and skepticism at UNAM, Mexico City, a conference on reasoning at the University of Konstanz (where my commentator was Alex Worsnip), a conference on logic and inference at the Institute of Philosophy at the University of London, a conference on pluralism at the Inter-University Centre in Dubrovnik, the Humanities Center at the University of Connecticut, and at departmental colloquia at UCLA, the University of Texas at Austin, MIT, and Virginia Tech. I’m very grateful to the participants at these events for their helpful questions. Thanks to Brendan Balcerak Jackson, Magdalena Balcerak Jackson, Philip Bold, David Christensen, Dylan Dodd, Sinan Dogramaci, Ned Hall, Jonathan Jenkins Ichikawa, Ben Jarvis, Julien Murzi, Ram Neta, David Plunkett, and Alex Worsnip for helpful comments and discussion. I’d also like to thank Paul Boghossian, Paul Horwich, Christopher Peacocke, and Crispin Wright, each of whom greatly influenced my thinking about this topic. This chapter is something like a prequel to Enoch and Schechter (2008). I’d like to thank David Enoch for the collaboration that led to that paper. ¹ This claim is endorsed by Boghossian (2003), Field (2000), Peacocke (2004), Pollock and Cruz (1999), and Wedgwood (2002), among many others.
This question is particularly pressing for those rules of inference that we employ as basic in our thought—the rules that we employ but not on the basis of having any beliefs about the rules or employing any other rules.² The difficulty here is in explaining how it can be that we are justified in employing a rule of inference where this justification does not stem from a justified belief that the rule preserves truth, that the rule preserves justification, or that the rule otherwise has some positive epistemic status. A few years back, David Enoch and I wrote a paper called “How Are Basic Belief-Forming Methods Justified?”³ In it, we argued for an answer to this question. The answer we gave, in broad outline, is that the justification thinkers have for employing a rule of inference as basic stems from its importance to their thought. A rule of inference is epistemically justified when it is pragmatically indispensable to one of the central projects of rationality. I like this view. It’s the best view I’ve been able to come up with. But I’m much more confident of its overall shape than I am about all of the details and refinements that appear in that paper. What I’d like to do here is to provide something like a prequel to that paper. I don’t want to argue for the specific account that appears in my paper with Enoch, but rather for the claim that we should endorse a view in the general ballpark. In explaining the justification of our basic rules of inference, we should appeal (in part) to features like usefulness or conduciveness or indispensability to important or required cognitive projects. In this chapter, I will focus on the case of deductive rules of inference. This is an interesting special case, one that enables me to sharply raise some tricky issues. I will also get at the topic a bit sideways. I won’t be directly focusing on the question of how thinkers are justified in employing certain deductive rules as basic. Rather, I’ll be focusing on a slightly different question. To make this question salient, it helps to tell a (somewhat goofy) story:

A Story

The aliens have landed. They have descended on the capitals of the world and asked to be shown examples of human accomplishments. Since they seem friendly, we’ve decided to comply. We show them tall buildings and grand engineering projects. We explain our great discoveries in physics, chemistry, and the other sciences. We present them with masterworks of literature, art, and music. They are appreciative of all that we show them. Then we turn to mathematics. We show them Andrew Wiles’s proof of Fermat’s Last Theorem.⁴ Here is their response: “Hmmm . . . We see that a lot of work went into this. And it certainly highlights some interesting connections between different areas of mathematics. But it’s difficult to see the point of all this work. After all, there is a simple four-line proof: ² See Field (2000) and Wedgwood (2002) for discussions of basic rules. ³ Enoch and Schechter (2008). ⁴ Wiles (1995).
Suppose x, y, z, and n are positive integers with n > 2.
Under this supposition, xⁿ + yⁿ ≠ zⁿ.
So if x, y, z, and n are positive integers with n > 2 then xⁿ + yⁿ ≠ zⁿ.
Therefore, for no positive integers x, y, z, and n with n > 2 is xⁿ + yⁿ = zⁿ.

So why did you go to all this bother?”⁵ This story can be used to raise several interesting questions: What is it to prove a claim? In what way is coming up with a proof a genuine accomplishment? What is the nature of mathematical understanding? And so on. I’d like to focus on a different issue. A natural reaction to the story is that something is wrong with the aliens. One problematic feature of the aliens is that they don’t appreciate the value of Wiles’s proof. But there is a deeper problem. Let’s suppose that the aliens really do employ the following rule of inference:

(FLT) From the belief that x, y, z, and n are positive integers with n > 2, infer that xⁿ + yⁿ ≠ zⁿ.

For instance, this rule licenses directly inferring from the claim that 13, 16, 17, and 5 are positive integers with 5 > 2 to the claim that 13⁵ + 16⁵ ≠ 17⁵. Let’s suppose that the FLT rule is basic in their thought—or at least, that it is as basic in their thought as simple deductive rules are in ours. In employing the rule, the aliens don’t do any rapid mathematical calculations. They don’t survey all of the natural numbers or quickly prove Fermat’s Last Theorem before moving from the premise to the conclusion. They simply infer from the premise directly to the conclusion. Let’s also suppose that the aliens have a psychology similar to our own, and that they employ the very same rules of inference as we do—with the sole exception being the addition of the rule FLT. The aliens’ reasoning in their “proof” does not yield a justified belief. While the aliens have a true belief that Fermat’s Last Theorem is true, and they have come to believe this claim on the basis of employing necessarily truth-preserving rules of inference that are basic in their thought, they are not justified in so believing. Their belief lacks an important positive normative status. As it is tempting to say, the aliens are making giant leaps in their thinking. In their applications of FLT, their movements in thought are simply too big to yield a justified belief. Now contrast the following rule:

(MP)
From the beliefs that P and that if P then Q, infer that Q.
⁵ I borrow the idea of using Wiles’s proof of Fermat’s Last Theorem from Boghossian (2003), which uses it for a related purpose. For my purposes here, any substantive logical or mathematical result would do. What is particularly nice about Boghossian’s example is that Fermat’s Last Theorem is relatively easy to state but not at all easy to prove. Berry (2013) and Dogramaci (2015) also make use of Boghossian’s example.
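As an aside on the aliens’ sample FLT inference above: the instance is true, but only barely, which brings out just how large a step the one-shot application of FLT takes. The arithmetic:

13⁵ + 16⁵ = 371293 + 1048576 = 1419869, whereas 17⁵ = 1419857, a difference of only 12.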
We employ this rule, or one much like it. In contrast to the aliens, our employment of this rule does seem justified.⁶ Typically, if we are justified in believing P and in believing if P then Q, and we come to infer that Q by employing this rule, the resulting belief is justified. The step from P and if P then Q to Q is small enough to yield a justified belief. The question I’d like to discuss in this chapter, then, is this: What explains the fact that we are justified in employing MP but we would not be justified—and the aliens in my story are not justified—in employing FLT?⁷ The reason this question is vexing is that there is a great deal in common between MP and FLT. They both preserve truth. They both necessarily preserve truth. Indeed, both rules are logically valid. (Or, at least, FLT is logically valid if we add a suitably strong theory of arithmetic as an additional premise.)⁸ MP is basic in our thought. FLT is basic in theirs. We treat MP as obviously correct. They treat FLT as obviously correct. And so on. The difficult question, then, is this: What breaks the symmetry between these two rules?⁹ I should concede that some aspects of my science fictional story may be a bit misleading about this question. Generating or possessing a mathematical proof is not simply a matter of reasoning deductively. But the story helps to throw into sharp relief the difference between rules like MP and rules like FLT, which is the contrast that I’m really interested in. My discussion will proceed as follows. First, I’ll make a few necessary clarifications about the question that I’m asking and the background assumptions that I’m making. Then I’ll canvass several ways of trying to answer the question. In particular, I’ll ⁶ One might claim that the aliens have some justification for employing FLT. But it seems clear that, at the very least, there is a significant difference in degree. So even if one were to claim that the aliens are somewhat justified in employing FLT, there would remain the question of what explains this difference. ⁷ After I wrote the first draft of this chapter, I discovered two papers that raise essentially the same problem—Berry (2013) and Dogramaci (2013). Dogramaci (2013) makes use of a different mathematical example—inferring that the first ten decimal digits of π have more odd digits than even digits—to raise what he calls the “easy/hard question.” I find this example harder to think about than FLT. That is because when we imagine a thinker believing the correct answers to some range of questions about decimal expansions, it is tempting to attribute to the thinker a sub-personal reasoning mechanism that works out the answers. This can affect our judgments about cases. In any case, my thinking about the issues discussed here benefitted from reading those two papers. ⁸ If logical validity is understood in model-theoretic terms, by the categoricity of second-order Peano Arithmetic, second-order Peano Arithmetic will suffice. If logical validity is understood in proof-theoretic terms, it may turn out that we need to use a stronger theory. But here, too, there are good reasons to think that second-order Peano Arithmetic will (more than) suffice. Indeed, Angus MacIntyre has argued that first-order Peano Arithmetic will suffice. Harvey Friedman has conjectured that Exponential Function Arithmetic, which is a relatively weak finitely axiomatizable subtheory of first-order Peano Arithmetic, will suffice. See McLarty (2010) for discussion of the technical situation. 
⁹ The symmetry can be further enhanced by making a small modification to the story. Suppose that the aliens do not employ MP but instead employ the following rule: (MP*) From the beliefs that P, that if P then Q, and that FLT is truth-preserving, infer that Q. (Suppose that they also endorse rules strong enough to get from the rule FLT to the claim that FLT is truth-preserving.) From our point of view, the aliens are taking a large leap in employing the rule FLT. From their point of view, we’re taking a large leap in employing MP. Why are we right and they wrong?
spend a fair bit of space discussing the idea that the crucial disanalogy between MP and FLT is that MP is built into one of our concepts—namely, the concept of the conditional—and FLT is not. I will argue that this account cannot explain the difference in epistemic status between the two rules. Instead, I will argue, we should appeal to usefulness, indispensability, or a closely related property in explaining this difference. Finally, I’ll conclude by making a few remarks about how a view in this ballpark should be developed. The overall structure of my argument is perhaps the least satisfying kind of argument in philosophy: “There is a very difficult problem here. Every view has its problems. The problems for all the other views are devastating. But the problems for my preferred view—while serious—can potentially be answered.” It would be nice to have something better to say in support of my view. But the issues here are difficult and this may be the best we can currently do.
2. Clarifications

Before I discuss candidate accounts, let me first briefly make a few clarifications. The first clarification concerns the rule MP. There are several reasons to think that we do not (and should not) employ a Modus Ponens-like rule that is as simple as the rule MP stated above. As Gilbert Harman has argued, we do not and should not routinely infer what logically follows from our beliefs.¹⁰ One reason is that doing so would clutter our minds with irrelevancies. A second reason is that, when we notice that some implausible conclusion follows from premises we accept, sometimes the thing to do is to give up a premise rather than adopt the conclusion.¹¹ These considerations do not only apply to the logical consequences of our beliefs in general. They also apply to the specific case of Modus Ponens. There are additional reasons to think that a Modus Ponens-like rule we employ is more complicated than the simple rule stated above. One important consideration involves the existence of graded beliefs. I’m sympathetic to the idea that we don’t merely have all-or-nothing beliefs, but also degrees of confidence in various propositions. Our deductive rules of inference presumably take these degrees of confidence somehow into account. A different consideration concerns permission and obligation. In some cases, it seems, we are mandated to draw the relevant deductive inference. In other cases, we are permitted but not required to do so. Our deductive rules of inference presumably somehow register the difference between such cases.
¹⁰ Harman (1986). ¹¹ Harman provides two additional arguments for why we should not always infer the logical consequences of our beliefs. (i) It can be rational to have inconsistent beliefs, such as when confronted by the liar paradox. But it would not be rational in such cases to infer arbitrary claims. (ii) We do not have the computational power to recognize all of the logical consequences of our beliefs, even in principle.
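Returning to the point about graded beliefs: one familiar way of making the interaction precise, at least on a material reading of the conditional and writing cr(·) for a probabilistically coherent thinker’s credences, is the following bound:

if cr(P) ≥ 1 − ε and cr(if P then Q) ≥ 1 − δ, then cr(Q) ≥ 1 − ε − δ.

A single Modus Ponens step can thus cost a little confidence, and a long chain of such steps can cost a great deal.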
A third consideration concerns suppositions. We can draw deductive inferences within the scope of suppositions, so the deductive rules we employ do not only govern the inference of beliefs from beliefs. These complications notwithstanding, it is plausible that we employ a rule of inference that closely resembles the rule MP. We find Modus Ponens inferences extremely natural to make. When we report or rehearse trains of thought, it seems evident that we make use of a Modus Ponens-like rule. In what follows, then, I will assume that we employ a deductive rule of inference that resembles the rule MP stated above. A second clarification concerns the normative notion at issue in the central question of this chapter. The relevant difference between the rules MP and FLT is a normative one. What exactly is this normative notion? One might think that the core phenomenon here concerns knowledge: MP typically preserves knowledge and FLT does not.¹² That is, we typically come to know the conclusion of an MP inference applied to known premises, whereas the aliens don’t even typically come to know the conclusion of an FLT inference applied to a known premise. I agree with the claim that we typically come to know the conclusion of an MP inference applied to known premises and the aliens do not typically come to know the conclusion of an FLT inference applied to a known premise. But that is not the central normative difference. If the premises of an MP inference are justified—whether or not they are known—the conclusion will typically also be justified. The analogous claim is not true for FLT. So a more general and more fundamental difference is that MP typically preserves justification and FLT does not.¹³ There is a still more fundamental difference. Consider a thinker who forms a belief by applying MP to confidently held but unjustified beliefs. The resulting belief is epistemically problematic—it is an unjustified belief. But the thinker has not done anything wrong in drawing the inference. She has only made one mistake in her reasoning. By contrast, suppose one of the aliens applies FLT to a confidently held but unjustified belief. The alien will have made two mistakes in its reasoning—it will have applied an unjustified rule of inference to an unjustified belief. What this suggests is that a still more fundamental contrast concerns the normative status of inferences and rules of inference rather than beliefs: we are justified in employing the rule MP whereas the aliens are not justified in employing the rule FLT. It is worth saying just a bit about how I am using the term “justified.” One of the lessons of contemporary epistemology is that there may be several different notions of justification. The relevant notion for my purposes here is the one that undergirds ¹² The “typically” is needed to deal with the ways in which the closure of knowledge (or justification) may fail under competent deduction. See Schechter (2013b). ¹³ Some epistemologists distinguish between justification and entitlement. See, for instance, Burge (1993), Dretske (2000), and Wright (2004b). These philosophers may claim that we are not strictly speaking justified in employing basic rules but merely have an entitlement to employ them. In this chapter I use the word “justified” broadly, so as to include entitlement.
the intuitions we have about the aliens in the story. The intuitive contrast concerns epistemic responsibility: MP is an epistemically responsible rule (for us or the aliens) to employ. By contrast, FLT is an epistemically irresponsible rule (for us or the aliens) to employ. These intuitions about responsibility are connected to the “size” of an inferential step. MP makes only a small step in reasoning. That is part of why it is a responsible rule for us to employ. By contrast, the FLT rule makes a giant leap. That is why it is an irresponsible rule for the aliens to employ. It is an important part of the setup of the case that the aliens are employing FLT as a basic rule in their thought. If, for instance, the aliens were to employ FLT on the basis of possessing a proof that the rule is truth-preserving, they would not count as epistemically irresponsible. What is irresponsible is employing FLT as a basic rule in thought. The challenge at the heart of this chapter is to explain what breaks the symmetry between valid rules employed as basic. Given these clarifications, the central question of this chapter can be restated as follows: What explains the fact that thinkers (or at least, thinkers broadly like us) are epistemically responsible in employing MP as a basic rule in thought but are not epistemically responsible (or at least, are significantly less responsible) in employing FLT as a basic rule in thought? In answering this question, one needn’t present a justification of MP. We’re not looking for some argument that will make thinkers justified in employing MP when they weren’t already. We’re also not looking for considerations that will convince a skeptic (or an agnostic) about deductive reasoning. Rather, what we’re looking for is an explanation of the epistemic difference between the rules. In particular, we’re looking for a necessary condition on justification (or a necessary part of a sufficient condition or . . . ) that can distinguish between MP and FLT. And this condition had better be one that is normatively relevant. It is worth noting that there is the same kind of phenomenon for non-deductive rules as there is for deductive rules. For instance, a rule of inference that takes a large collection of experimental data as its input and generates a scientific theory as its output in a single step is not one that we are justified in employing as basic. This is so even if the scientific theory that is the output of the rule is the theory that we would ultimately come to upon careful reflection on the experimental data. It is simply too big a leap to go from data to theory in a single step. One needs to work out the theory. The same phenomenon also arises in other cases—for instance, in the generation of our everyday empirical beliefs and our moral judgments about difficult cases.¹⁴ In the end, we’re going to want a unified answer to the question of why thinkers are justified in employing certain rules of inference (and belief-forming methods
¹⁴ Schechter (2017).
more generally) as basic—an answer that applies to deductive and inductive rules as well as to the belief-forming methods that govern perception, the imagination, memory, and so on. In particular, we’re going to want a unified theory of when a step in thought counts as “too large,” and an explanation of why belief-forming methods that involve steps that are too large are unjustified. In what follows, however, I will largely restrict my attention to the case of deductive reasoning.
3. Candidate Views

Those clarifications made, let me now discuss candidate views of the normative difference between MP and FLT. I won’t be discussing specific accounts so much as general approaches. To get a sense of the difficulties involved with finding a plausible view, consider the following three accounts of the justification of rules of inference employed as basic in thought. The first view appeals to truth-conduciveness. This is a natural place to look for a normatively relevant property. After all, in some important sense, truth is the goal (or, at least, a goal) of inquiry.

Reliabilism. Thinkers are pro tanto justified in employing a rule of inference as basic in thought if (and by virtue of the fact that) the rule is conditionally reliable in the sense that it tends to yield truths from truths.¹⁵

If one is interested in knowledge instead of justification, one might appeal to a different truth-related property—for instance, safety. One might say that a rule of inference (employed as basic) is knowledge-preserving if it is safe in the sense that it couldn’t easily lead from truth to falsity.¹⁶ Given my focus on justification rather than knowledge, I’ll focus on reliability rather than safety, but the main problem for Reliabilism will apply to safety-based accounts, too.¹⁷ The second view appeals to a psychological notion, namely, psychological unavoidability:

Psychological Unavoidability. Thinkers are pro tanto justified in employing a rule of inference as basic in thought if (and by virtue of the fact that) reasoning with the rule is psychologically unavoidable for the thinkers.¹⁸
¹⁵ An influential version of Reliabilism appears in Goldman (1979). Reliabilist views have also been endorsed by Alston (1988) and Swain (1981), among many others. ¹⁶ Safety-based accounts of knowledge have been endorsed by Pritchard (2005), Sosa (1999), and Williamson (2000), among others. ¹⁷ Many forms of virtue epistemology understand epistemic virtue in reliabilist terms and so face the very same difficulty. See, for instance, Sosa (2007). There are forms of virtue epistemology that emphasize responsibility rather than reliability. But I am not aware of any proposal within the virtue responsibilist tradition that offers an answer to the guiding question of this chapter. ¹⁸ A version of this view is defended in Dretske (2000).
This view may be motivated by appeal to an epistemic ought-implies-can principle. Given that a thinker cannot avoid employing some particular rule of inference, one might claim that the thinker is not unjustified—that is, epistemically irresponsible—in employing the rule in reasoning. The third view appeals to phenomenology:

Phenomenology. Thinkers are pro tanto justified in employing a rule of inference as basic in thought if (and by virtue of the fact that) applications of the rule are accompanied by the appropriate phenomenology.¹⁹

For instance, one might claim that thinkers are justified in employing a rule as basic if applications of the rule are accompanied by feelings of obviousness, clarity, or rational compulsion. The idea here is that just as we (plausibly) are justified in forming beliefs about our immediate surroundings by taking how things perceptually appear to us as a guide to how things are,²⁰ so too are we justified in reasoning in certain ways by taking feelings of obviousness (or clarity or rational compulsion, or the like) as a guide to how to reason. After all, one might ask, what else do we have to go on but how things seem to us? These three views are very different from one another. And each of them faces difficulties specific to the view in question. For instance, against Reliabilism, there are plausible counterexamples to reliability as either necessary or sufficient for epistemic justification.²¹ More generally, reliability has no intuitive connection with epistemic justification (when understood as epistemic responsibility)—the mere fact that a rule of inference tends to be truth-preserving does not make it responsible to employ.²² A difficulty facing Psychological Unavoidability is that it makes epistemic justification much too cheap—on this view, a thinker can be justified in employing any rule whatsoever, so long as the thinker has the right kind of psychological incapacity. Finally, against Phenomenology, I’m suspicious of the claim that there is any such phenomenology that typically accompanies our reasoning. Most of the time, our reasoning doesn’t come along with an accompanying feeling of obviousness, clarity, or rational compulsion. Sometimes, there is such a feeling—for instance, when one is carefully reflecting on a simple inference—but such feelings are certainly not as common as philosophical folklore would suggest.
¹⁹ See Bengson (2015), Chudnoff (2013), and Huemer (2005) for views on which phenomenology is relevant to the justification of basic beliefs. See Dogramaci (2013) for a view on which phenomenology is relevant to the justification of basic inferential steps. ²⁰ Pryor (2000). ²¹ The New Evil Demon problem is plausibly a counterexample to necessity. See Lehrer and Cohen (1983) and Cohen (1984). BonJour’s case of the reliable clairvoyant and Lehrer’s case of Mr. Truetemp are plausibly counterexamples to sufficiency. See BonJour (1980) and Lehrer (1990). ²² Boghossian (2003, p. 228).
These problems are pressing. But for present purposes we don’t need to go into all of that. There is a straightforward problem that applies to all three views. Namely, none of the proposals can distinguish between MP and FLT. Both rules are perfectly reliable. They do not merely have a tendency to move from truths to truths, but are necessarily truth-preserving. So appealing to reliability cannot be used to distinguish between the two rules. (And mutatis mutandis for safety.) Similarly, we can imagine that our employment of MP and the aliens’ employment of FLT agree as regards their psychological unavoidability—the aliens cannot avoid employing FLT any more than we can avoid employing MP. So appealing to psychological unavoidability cannot be used to distinguish between the two rules. And, finally, we can imagine that our employment of MP and the aliens’ employment of FLT are alike as regards phenomenology. Applications of the two rules can have the same accompanying feelings of obviousness, clarity, and rational compulsion. So appealing to phenomenology cannot be used to distinguish between the two rules, either. These three views do not exhaust the range of options. There are many other accounts of the justification of employing rules as basic in thought. Here, for instance, are four more accounts:

Acquaintance. Thinkers are pro tanto justified in employing a rule of inference as basic in thought if (and by virtue of the fact that) the thinkers are acquainted with (or have cognitive contact with, or are otherwise “in touch with”) the validity of the rule.²³

Evolution. Thinkers are pro tanto justified in employing a rule of inference as basic in thought if (and by virtue of the fact that) the rule is part of a cognitive mechanism that (i) has an evolutionary function that is appropriately epistemic and (ii) is well suited for performing that function.²⁴

Simplicity. The logical consequence relation has fine structure in the sense that some logical entailments count as direct and others count as indirect. Thinkers are pro tanto justified in employing a rule of inference as basic in thought if (and by virtue of the fact that) the rule corresponds to a direct entailment.

Brute Fact. There is no explanation of why thinkers are pro tanto justified in employing certain rules of inference as basic in thought and not others. It is a brute normative fact.²⁵

²³ This view is sometimes discussed under the heading of “rational insight” or “intuition.” However, those terms are also applied to views that appeal to phenomenology instead. (Indeed, the two kinds of views are not always carefully distinguished.) Gödel (1964) can be read as providing a rational insight-based view of our knowledge of set theory. BonJour (1997) proposes such a view of a priori knowledge more generally. ²⁴ See Millikan (1984) for an evolutionary account of knowledge. See Plantinga (1993) for a view of warrant that makes use of the notion of the proper function of a cognitive system. Plantinga, however, explicitly rejects evolutionary accounts of proper function. See Bergmann (2006) for a proper function-based view of justification. ²⁵ Horwich (2005). Dogramaci (2015) and Field (2009) also end up committed to a brute fact view. In part to make the view seem less implausible, Dogramaci puts forward an account of the function of epistemic predicates and Field puts forward an expressivist account of epistemic justification.
I don't have the space to discuss these views in any real detail. But it is worth briefly gesturing at some of the problems that arise for them.

There are a number of serious problems facing Acquaintance.²⁶ It is not clear what acquaintance (or cognitive contact or "being in touch") with facts about validity could come to. It is mysterious how we could come to be in touch with such facts. How is this supposed to work—noetic rays? Even if we can make sense of what cognitive contact is and how we could have it, it is not clear that such contact provides justification. After all, being in causal contact with (e.g.) elementary particles in our environment does not by itself provide any justification for having beliefs about the particles. Finally, it is not clear that this view can explain the normative difference between MP and FLT. If we can be in cognitive contact with the validity of MP, why can't the aliens be in cognitive contact with the validity of FLT?

Against Evolution, we can imagine creatures that are just like us except that they are not the products of evolution. For instance, consider a swamp civilization—an advanced society of creatures physiologically like us, but which was created en masse by a powerful bolt of lightning striking swamp gas roughly a hundred years ago.²⁷ Despite not being the product of an evolutionary process, the swamp creatures seem just as justified as we are in employing MP as basic in their reasoning. Indeed, the creatures seem justified in just the same way as we are, whatever that is. A second issue is that, once again, it is not clear that this view can explain the normative difference between MP and FLT. One can imagine scenarios in which the aliens evolved to employ FLT as part of a cognitive mechanism that has an appropriately epistemic aim.

The Simplicity view faces the problem of explaining the difference between direct and indirect logical entailments. (If the direct entailments are the ones built into the logical concepts, the view ends up collapsing into a version of the conceptual approach discussed below.) But a more serious problem concerns the epistemic part of the proposal. Why should reasoning in accord with direct entailments have a different normative status than reasoning in accord with indirect entailments? Compare the case of physics. Some of the truths of physics are more fundamental than others. Indeed, one of the tasks of microphysics is to look for the fundamental laws—the physical laws that ground the rest. But it is not plausible to claim that a thinker is justified in believing the fundamental laws or in "reasoning in accord" with them merely because they are fundamental. Nomological or metaphysical fundamentality doesn't bestow any special epistemic status on beliefs or rules. So there is reason to think that logical fundamentality doesn't bestow a special epistemic status, either.

²⁶ See Boghossian (2001) for discussion.
²⁷ This is a variant of the case of Swampman, originally introduced in Davidson (1987) for a different purpose.

Finally, against the Brute Fact view, there are three main concerns. First, there are very many rules that we are justified in employing and many rules that we would be
unjustified in employing. It is intuitively implausible that there is no feature that distinguishes between the two classes.

Second, there is a dialectical concern: if we are challenged to explain why we are justified in employing one of our rules of inference, on this view there is absolutely nothing we can say in response. This is unpalatable. To be sure, to be justified in employing a rule one needn't be in a position to have something to say in defense of the rule. And one certainly needn't be in a position to convince a committed opponent. But surely we as theorists should be able to say something in response to the challenge that we ourselves would find reassuring.

Finally, MP and the other deductive rules that we are responsible in employing have a feature in common that is not shared by FLT. In particular, reasoning in accord with these rules involves taking a "small step." By contrast, reasoning in accord with FLT involves taking a "giant leap." This suggests that there is some theory to be had that explains the difference in normative status. It is not just a brute fact.
4. Conceptual Competence

Let me now turn to the main target of this chapter—the idea that we should appeal to the nature of concepts or meanings.²⁸ I find the view concerning mental concepts more attractive than the view concerning linguistic meanings, so I will focus on it. (The same issues will arise for linguistic meanings, mutatis mutandis.)

A natural thought is that a deductive inference counts as a small step if it is built into one of our logical concepts, and not otherwise. MP is a small step because it is built into the concept of the conditional. In contrast, FLT is not built into any concept. That's all well and good, but the question we're really interested in is the normative question about the difference in their epistemic statuses. What explains this difference? The natural suggestion to make here is that any rule built into a concept is justified for a thinker to employ as basic. This view involves two main claims. The first is a claim about the nature of concepts:

(i) There are rules of inference "built into" concepts.²⁹

The second is a claim about epistemic justification:

(ii) If a rule of inference is built into a concept that a thinker possesses, the thinker is pro tanto justified in employing that rule as basic in thought.

²⁸ Versions of this approach can be found in Gentzen (1934/5), Prawitz (1965), Belnap (1962), Dummett (1978; 1991), Tennant (1978), Peacocke (1992; 1993; 2004), Boghossian (2000; 2001; 2003), and the papers collected in Hale and Wright (2001). See Horwich (1997; 2000; 2005), Schechter and Enoch (2006), and Williamson (2003) for objections to concept-based accounts of justification and knowledge.
²⁹ This thesis naturally fits with a conceptual-role semantics (or meta-semantics). See Fodor and Lepore (1991) and Williamson (2003) for objections to that view.
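For concreteness, here is the kind of thing claim (i) has in mind, rendered in standard natural-deduction notation (an illustration of mine, drawing on the Gentzen–Prawitz tradition cited in footnote 28, not a display from the chapter). The rules standardly taken to be built into the conditional are its elimination rule (Modus Ponens) and its introduction rule (conditional proof), where the brackets mark the discharge of the assumption P:

$$\frac{P \to Q \qquad P}{Q}\;(\to\text{-Elim, i.e. MP}) \qquad\qquad \frac{\begin{array}{c}[P]\\ \vdots\\ Q\end{array}}{P \to Q}\;(\to\text{-Intro})$$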
Why think this? It is an extension of the notion of an analytic truth. Consider the claim that all bachelors are unmarried. What explains our justification for believing this claim? A natural idea is that it is somehow built into the concept bachelor that all bachelors are unmarried, and this is what explains our justification for believing the claim. The proposal here is that we extend this line of thought beyond beliefs to rules of inference.

Given this line of thought, the obvious suggestion to make is that MP is built into the concept of the conditional. FLT is not built into any genuine concept. This is what explains the difference in their normative status. And this is what explains why an MP inference counts as a small step and an FLT inference counts as a large step. Problem solved!

Well, not so fast. On claim (i), one might well ask: What does "built into" come to? One suggestion is that a rule is built into a concept just in case employing the rule as basic is part of what's needed to count as thinking thoughts that use (as opposed to mention) the concept. Employing the rule is required to "possess" the concept in some minimal sense.

This suggestion won't quite work. It is very easy to use a concept in thought. Compare the case of names. I can pick up a new name by overhearing a conversation about someone with that name and then come to think thoughts about the person with that name. For instance, if I overhear someone using the name "Jocko," I can come to have thoughts about Jocko—for instance, I can wonder whether Jocko is good at darts. Similarly, I can pick up a new concept just by hearing a word that expresses the concept.³⁰ I can then deploy that concept in thought. For instance, even if all I've heard is the phrase "Lie group," I can still think thoughts involving the concept of a Lie group—for instance, I can wonder what a Lie group is.

The typical response to this problem is to move to a different account of what "built into" comes to and a more stringent account of concept possession. On this second view, possessing a concept requires having sufficient competence with or mastery of the concept.³¹ On this proposal, a rule is built into a concept if employing the rule as basic is part of what's needed to count as having sufficient mastery of the concept.

³⁰ There are some constraints. For instance, presumably one needs to have some sense of the syntactic category of the relevant expression. But such constraints are minimal.
³¹ See Peacocke (1992, ch. 1).
³² Williamson (2003).

I'm not completely clear on what mastery of a concept is supposed to be. And, as Timothy Williamson has argued, there are problems with the claim that MP is built into the concept of the conditional in even this sense.³² To repeat his example, the logician Vann McGee has mastered the concept of the conditional if anyone has. But he does not endorse MP in its full generality. A theorist might master the concept of the conditional (in any reasonable sense of "master") but refrain from employing
MP—and perhaps even lack the disposition to employ MP—on the basis of (perhaps mistaken) theoretical considerations.³³

So there are problems with making sense of the idea that there are rules that are built into concepts, and with the claim that MP is built into the concept of the conditional. (There is also the question of why FLT cannot be built into a genuine concept.) But let me put these issues aside. What I want to focus on is the normative claim, claim (ii). The problem with this claim is that it is false. Thinkers are not justified in employing every rule built into the concepts they possess.³⁴

The most striking examples of this are pejorative concepts and other "thick" normative concepts that involve false claims. Consider, for instance, Michael Dummett's example of the xenophobic concept boche.³⁵ "Boche" was a derogatory term used during World War I by French soldiers to refer to Germans. (I don't use a contemporary racist or xenophobic term for the obvious reasons.) Plausibly, the constitutive rules for boche are something like the following:

From so-and-so is German, infer that so-and-so is a boche.
From so-and-so is a boche, infer that so-and-so is brutish and uncivilized.

Racist and xenophobic concepts like boche plausibly count as genuine concepts. Surely, thinkers have had thoughts involving them. But by employing these rules, thinkers can infer that arbitrary Germans are brutish and uncivilized. Thinkers are not justified in employing such rules merely because they are concept-constituting.

One might quibble over this particular example,³⁶ but there are many others. Consider, for instance, the thick normative concepts involved in Victorian sexual morality or in medieval conceptions of honor. These concepts involve problematic normative claims, claims that thinkers are not justified in believing merely in virtue of possessing the relevant concepts.

For examples that are less normatively loaded, we can look at the concepts involved in failed physical theories. For instance, consider the concept of phlogiston, which includes a commitment to some substance being released during combustion. Similarly, consider the concept of caloric, which includes a commitment to there being a fluid that is responsible for heat. Indeed, we can also look at the concepts involved in successful physical theories—such as the concept of entropy or the concept of rest mass. At least given a conceptual role-based treatment of concept possession, these concepts involve commitments to substantive claims about the world.

³³ See Fodor and Lepore (1991) for related worries.
³⁴ Prior (1960) famously introduced the logical constant "tonk" to try to demonstrate this point. However, it is plausible that "tonk" does not stand for a genuine concept.
³⁵ Dummett (1973, p. 454).
³⁶ Williamson (2009).

I'm not claiming that no thinker has ever been justified in employing the rules built into these concepts. Indeed, I think that we are justified in employing the rules
built into the central concepts of our current physical theories and that some of our predecessors were justified in employing the rules built into the central concepts of past theories. My point is, rather, that possessing these concepts comes with substantive commitments about the world. In such cases, thinkers are not justified in employing the relevant rules as basic in their thought just because they possess the relevant concepts. They may be justified in employing the rules because they have some theoretical reason to think that the rules are acceptable (or on the basis of reasonably trusting relevant authorities). But they have to somehow earn the right to endorse the relevant theories and possess the relevant concepts. They are not justified in employing the rules merely because they possess the concepts in question.

Very generally, it is easy for arbitrary unjustified rules to be built into concepts. So we should not think that thinkers are pro tanto justified in employing as basic any rule that is concept-constituting. The explanation for why MP is justified and FLT is not cannot simply be that MP is concept-constituting and FLT is not.
4.1 Harmony

To handle this problem, a natural thought is to somehow restrict the rules that thinkers are justified in employing in virtue of being concept-constituting. More specifically, we can restrict the rules that are justified to those rules that are built into "good" concepts. It is not clear how to distinguish between good and bad concepts in general. For the specific case of logical concepts, however, there have been several proposals. Such proposals are often discussed using Dummett's term "harmony."³⁷ A good concept is a harmonious concept. On the new proposal, then, we replace claim (ii) with the following normative claim:

If a rule is built into a harmonious concept that a thinker possesses, the thinker is pro tanto justified in employing that rule as basic in thought.

When is a concept (or conceptual role) harmonious? Proposed accounts of harmony have been used for several different purposes. Sometimes the aim is to demarcate the logical—that is, to specify which concepts are logical concepts. Sometimes the aim is to specify which (broadly logical) conceptual roles yield a genuine concept. In the current context, harmony is not being used to do either of those jobs. Rather, it is being used to specify which concepts bestow a positive epistemic status on their constitutive rules. Keeping that in mind, the most promising idea is to understand harmony as (something like) Conservativeness.

³⁷ Dummett (1991). See Steinberger (2011) for discussion of various kinds of harmony.

Conservativeness. Adding the concept and its constitutive rules to one's pre-existing inferential practice does not license any new inferences from premises that
only contain concepts from the pre-existing practice to a conclusion that only contains concepts from the pre-existing practice.³⁸

This constraint can be motivated by a natural picture of the justification of rules. On this picture, certain packages of rules are "epistemically innocent." In some sense, they cannot lead us astray. Conservativeness captures this idea—if a concept is conservative over a pre-existing inferential practice, adding the concept does not disturb the pre-existing practice. The normative status of the rules that help to constitute harmonious concepts derives from the fact that they are epistemically innocent rules, where innocence is cashed out as (something like) Conservativeness.³⁹

This picture fits with the notion of a basic analytic rule. The original idea behind analyticity was that analytic truths are epistemically innocent. They are "relations among ideas" rather than "matters of fact." They are "explicative" rather than "ampliative." They are "merely verbal propositions" rather than "real propositions." And so on. The background thought is that analytic truths impose no substantive requirements on the world. They are trivial, harmless, and innocuous. And that is what explains their normative status. On this view, the normative claim (ii) can be restated as:

If a rule is built into an epistemically innocent concept that a thinker possesses, the thinker is pro tanto justified in employing that rule as basic in thought.
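Put schematically (my gloss on the Belnap-style formulation cited in footnote 38, not a display from the chapter): write $\vdash_{P}$ for derivability in the pre-existing practice and $\vdash_{P+C}$ for derivability once the new concept $C$ and its constitutive rules are added. Conservativeness then requires that, for every set of premises $\Gamma$ and conclusion $\varphi$ couched entirely in the old vocabulary,

$$\Gamma \vdash_{P+C} \varphi \quad\Longrightarrow\quad \Gamma \vdash_{P} \varphi.$$

Prior's "tonk" (see footnote 34) is the canonical failure of this condition: its introduction rule takes one from P to P tonk Q, and its elimination rule from P tonk Q to Q, so adding tonk licenses the old-vocabulary inference from any P to any Q.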
If this is the right picture, the fundamental normative principle here presumably doesn't merely apply to rules that are concept-constituting. There is no motivation for thinking that concept-constitution is normatively relevant either by itself or in combination with epistemic innocence. What is doing the real epistemic work in justifying a rule is its epistemic innocence. So the restriction of the normative claim to concept-constituting rules is ad hoc. Taking this line of thought seriously, then, the fundamental normative principle is something like the following:

If a rule of inference is epistemically innocent (or is part of an epistemically innocent package of rules), the thinker is pro tanto justified in employing that rule as basic in thought, whether or not it is built into a concept.

This is an attractive view. There are a couple of technical concerns that one might have here, at least assuming that epistemic innocence is identified with Conservativeness.

³⁸ See Belnap (1962). Dummett (1991, p. 252) calls this "total harmony."
³⁹ Indeed, even theorists who claim that we don't need to restrict a concept-based account of justification to harmonious concepts will presumably endorse a similar picture. They will presumably claim that the rules built into concepts are automatically epistemically innocent, because it is a constraint on genuine concepts that they are (something like) conservative.

First, one might worry that Conservativeness is too weak a requirement. There can be rules that are individually
conservative over a background inferential practice but that are jointly inconsistent over the background practice. Which of these rules would the relevant thinkers be justified in adding to their practice? This may not be a problem in the end. We might say that thinkers are justified in adding either of the rules, but not both.

Second, one might worry that Conservativeness is too strong a requirement. There are logical concepts (presumably with justified constitutive rules) that are not conservative over the relevant background inferential practice. For instance, classical negation is not conservative over the inferential practice that includes the usual rules for the conditional. Adding a truth predicate or second-order quantifiers to first-order Peano arithmetic does not yield a conservative extension.⁴⁰ Adding third-order quantifiers to second-order logic also does not yield a conservative extension.⁴¹ This may not be a problem in the end, either. We might say that Conservativeness is sufficient for epistemic innocence but not necessary for it.

There is, however, a much bigger problem with the proposal. The rule FLT is epistemically innocent, too. If we add FLT to our background inferential practice, we get a conservative extension. That is, after all, part of what Andrew Wiles showed in proving that Fermat's Last Theorem was true. So this proposal cannot distinguish between MP and FLT. More generally, the proposed view cannot distinguish between small steps and giant leaps in thought. In many cases, a giant leap is simply a shortcut—it is a way of skipping many small steps. And if each of the small steps is conservative, any shortcut will be, too.

Perhaps, then, Conservativeness is the wrong way to understand epistemic innocence. Perhaps a different understanding of epistemic innocence is called for? I don't think that pursuing this line of thought will help, at least if MP and other deductive rules are supposed to count as innocent. No matter how we understand the notion of epistemic innocence, taking a shortcut will presumably count as innocent. If each of a sequence of steps cannot lead us astray, skipping from the beginning to the end in one step cannot lead us astray, either.⁴²

The trouble with this entire line of thought, then, is that FLT is just as innocent as MP. Epistemic innocence cannot be used to distinguish between the two cases.

So what then? It seems to me that there is a better idea to pursue. An advocate of the concept-constitution view should appeal to a different normatively relevant feature—not innocence, but importance. The concept of the conditional is not just some run-of-the-mill concept.

⁴⁰ Peacocke (2004, pp. 18–21).
⁴¹ I owe this observation to Marcus Rossberg.
⁴² It is plausible that we are justified in believing simple analytic truths, such as the truth that bachelors are unmarried, because such claims are in some sense epistemically innocent. Even if this is correct, however, the relevant notion of epistemic innocence is extremely restrictive, and cannot play a role in explaining our justification for employing deductive rules such as MP. One of the morals of this chapter is that MP does not pattern with simple analytic truths.

What is distinctive about the conditional is just how
important it is in our thinking. Certain concepts are important—they are useful or indispensable for central cognitive projects. The conditional is such a concept. Negation is such a concept. And so on. This explains the normative status of their constitutive rules. On this view, we should replace (ii) with the following normative principle:

If a rule is built into an important concept that a thinker possesses, the thinker is pro tanto justified in employing that rule as basic in thought.⁴³

MP is such a rule. FLT is not. This is what explains the difference between the two rules.

This strikes me as an attractive approach. It looks promising as a way of distinguishing between MP and FLT. And importance is a plausible candidate for a normatively relevant feature. Again, we should presumably generalize this idea beyond concept-constituting rules.⁴⁴ The fundamental normative principle in the ballpark does not concern concepthood but importance. It is something like the following:

If a rule of inference is important (or is a necessary part of an important package of rules), the thinker is pro tanto justified in employing that rule as basic in thought, whether or not it is built into a concept.⁴⁵

Notice that this view does not face the problem that the epistemic innocence view faces. FLT is a shortcut—it is a way of skipping many small steps. If each of the small steps is an application of an epistemically innocent rule, the shortcut would itself seem to be epistemically innocent. But the same does not hold true of importance. Even if each of the small steps is an application of an important rule, the shortcut itself need not be important. Being able to take a shortcut may not be important if one can take the long way around. So this is the direction that I suggest that we take.

⁴³ There are accounts in the literature that seem to rely on something like this principle. Boghossian (2003) explains the fact that MP transmits justification by claiming that MP is concept-constituting of the conditional, which is a concept that plays an important role in inquiry by enabling us to formulate concepts that are appropriately hedged. Hale (2002) argues that certain minimal inference rules are justified because they constitute the concepts with which we can investigate doubts about the validity of rules. Wedgwood (2011) claims that basic rules of inference are justified if possessing certain basic cognitive capacities requires that we employ the rules. (Wedgwood includes the possession of concepts among the relevant cognitive capacities.)
⁴⁴ If we make this generalization, there is some hope that we will be able to explain the normative status of our basic ampliative rules. Such rules are important in our thought but they are not built into any of our concepts. Notice, too, that ampliative rules are not epistemically innocent—they can lead us astray. See Schechter and Enoch (2006).
⁴⁵ Besides the accounts mentioned in footnote 43, there are other accounts in the literature that tie justification (or a related status) to something like importance. See, for instance, Reichenbach's (1938; 1949) pragmatic justification of induction, Wright's (2004a; 2004b) accounts of our entitlement to claim knowledge of cornerstone propositions and basic logic, and Shapiro's (2009) account of the epistemic status of mathematical axioms.
5. Developing the Account

Let's take stock. There are several strategies one might explore in trying to explain why we are epistemically responsible in employing MP as basic but the aliens in my story are not epistemically responsible in employing FLT as basic. There are two constraints on such a view. First, the view must appeal to a normatively relevant feature of the rules. Second, the view must be able to explain the normative difference between MP and FLT. As we have seen, satisfying these constraints is no easy feat. All of the approaches I discussed turned out either to be implausible or to violate the two constraints, with the sole exception of the view that thinkers are epistemically responsible in employing as basic those rules that are important or part of an important package of rules.

To determine whether this approach is viable, there are two questions that must be answered. First, are our fundamental deductive rules really so important, and if so, in what ways are they important?⁴⁶ Second, how exactly should the account be further developed? What are the principal challenges for the account, and how should they be answered? In the remainder of this chapter, I'll briefly discuss the second of these questions. The main point of this chapter is to advertise a general approach, not to provide a specific account. But I do want to gesture in the direction of some of the issues that arise. My hope is to convince you that, while there are difficulties, they are much less worrisome than the problems facing other approaches.

I will consider six issues. For each of these issues, I'll propose at least one line of response. The proposals I will make are not the only possible responses. But they are what I currently take to be the most plausible suggestions.

The first issue is this: I've described the "important" rules as useful or indispensable. What are they useful or indispensable for? For rules to gain normative status on the grounds that they are useful or indispensable, they had better be useful or indispensable for something important. They can't just be useful or indispensable for any old project. That is because the normative status of the relevant projects is in some way transmitted to the rules. The normative status of the projects comes first.

In response, I suggest that we understand "useful or indispensable" as being useful or indispensable to a rationally required project—a cognitive project that is rationally required for all thinkers broadly like us to engage in.⁴⁷ Plausible examples of such projects are explaining the world around us, predicting important phenomena, planning for future contingencies, deliberating over how to act, and evaluating our own behavior and patterns of thinking. A thinker like us who is not engaging in one of these projects in at least some small way is rationally defective.

⁴⁶ See Brandom (2000, ch. 1), Evnine (2001), and Schechter (2013a) for relevant discussions.
⁴⁷ On a variant view, one might restrict the relevant cognitive projects to those that are epistemic—e.g., explaining, predicting, and evaluating one's thinking.

This raises the question of what makes a project rationally required. Why are some projects rationally required (for instance, explanation, prediction, and deliberation)
and others not (for instance, creating a large stock of knock-knock jokes, concocting the best recipe for cheese soufflés, or becoming a world-class athlete, musician, or philosopher)? I don't have an answer to this question. Perhaps it is a brute fact. But notice that it is much more plausible that it is a brute fact that we have a rational obligation to explain the world around us than it is that we are epistemically responsible in employing the rule MP as basic in our thought.

The second issue is that there are counterexamples to the view if we construe usefulness or indispensability in purely causal terms. Suppose that someone will thwart my pursuit of an important cognitive project if I don't employ a certain rule. For instance, suppose that a tobacco company will thwart my efforts at explaining the world around me if I don't employ the following rule:

(T) From the belief that such-and-such is a tobacco product, infer that such-and-such is harmless.

Surely, this doesn't bestow any epistemic justification on my employment of the rule T. Alternatively, suppose that someone will substantially aid one of my important cognitive projects if I employ a certain rule. For instance, suppose that a (slightly more benevolent) tobacco company will give me an enormous research grant—enabling me to further my project of explaining the world—if I employ rule T. This also doesn't bestow any epistemic justification on my employment of the rule. Perhaps I ought, in some sense, to employ the rule. Perhaps I don't count as practically irresponsible if I employ it. But I don't count as epistemically responsible in employing the rule.

In response to this concern, we had better provide an account of usefulness or indispensability that is not purely causal. We had better say something like the following: to yield epistemic justification, a rule has to be useful or indispensable to a rationally required project in the sense that by employing the rule the thinker can successfully engage in the project. It is by employing the rule, and not merely as a causal consequence of employing the rule, that the thinker can successfully engage in the project.

The third issue is that there are useful rules of inference that take what intuitively count as large steps in thought. Such rules should not count as epistemically justified in virtue of their usefulness. One illustration of this is the following variant of Modus Ponens:

(MP+) From the beliefs that P and that if P then Q, infer both that Q and that FLT is truth-preserving.

This rule is stronger than MP (assuming the presence of Conjunction Elimination). So it is at least as useful as MP. Why doesn't it end up epistemically justified according to my account? The answer has to be that we should make use of a notion closer to indispensability than to usefulness. MP+ is at least as useful as MP. But MP is indispensable in a way
that MP+ is not. One way to develop this thought is to note that in employing MP+ one takes on additional commitments. MP is a more minimal rule. A good rule is one that makes a minimal set of commitments (or a sufficiently minimal set of commitments) for the job it is supposed to do.

The fourth issue concerns a different kind of variant of Modus Ponens. In particular, there are variants of Modus Ponens that are only useful given powerful auxiliary rules. Consider, for instance, the following rule:

(MP⁻) From the beliefs that P, that if P then Q, and that FLT is truth-preserving, infer that Q.

This rule is weaker than MP. In the presence of this rule, employing FLT (or believing that FLT is truth-preserving) is highly useful and perhaps even indispensable. But we certainly don't want to say that a thinker has justification for employing FLT in virtue of employing MP⁻.

The answer to this worry is presumably that we should not compare rules one at a time. Rather, we should compare entire packages of rules. The package of rules containing MP⁻ and FLT takes on more commitments than does the package of rules that contains MP. That is why the package containing MP is epistemically justified and the package containing both MP⁻ and FLT is not.

The fifth issue concerns yet another kind of variant of Modus Ponens. There are useful variants of Modus Ponens that have ad hoc restrictions. Consider, for instance:

(MP*) From the beliefs that P and that if P then Q, infer that Q—but only if Q does not concern narwhals.

This rule is (nearly) as useful as MP is. If employing MP can enable one to successfully engage in a rationally required project, employing MP* can do so, too. Moreover, this rule is more minimal than MP—it is not committed to Modus Ponens working when reasoning about narwhals. The worry, then, is that MP will not count as indispensable to a rationally required project. I suggested above that MP+ is not indispensable since we could employ the more minimal MP. So why shouldn't we say that MP is not indispensable since we could employ the still more minimal MP*?

The natural response to this worry is to point to the fact that MP* is an ad hoc rule. According to this suggestion, there is a trade-off between minimality and ad hocness. MP is less minimal but more principled. That is why employing MP is epistemically justified. This response raises the challenge of providing a principled account of ad hocness. It also requires providing an explanation of the connection between ad hocness and epistemic responsibility. But this strikes me as a plausible line of thought to develop.

The final issue that I'd like to raise is that it may be that no individual rule or package of rules is strictly speaking indispensable to a rationally required project. Rather, what may be indispensable is to employ one out of some set of alternatives. For instance, in classical propositional logic, we can take conjunction and negation as
primitive. Alternatively, we can take the conditional and negation as primitive. Or we can make use of the Sheffer stroke (see the illustration at the end of this section). But we do not want to say that the existence of these alternatives entails that we are not justified in using any deductive rule as basic.

The natural response to this worry is to say that using any one of the packages of rules would be epistemically responsible. So long as the package of rules is sufficiently minimal and non-ad hoc, thinkers are pro tanto justified in employing the rules in the package as basic.

There are several more issues that one could raise besides. But this provides a flavor of the difficulties that arise in developing the view. These difficulties, while pressing, strike me as much less severe than the difficulties besetting the other approaches.

Putting this all together, here is one way to develop the view:

• Thinkers like us are rationally required to engage in certain cognitive projects irrespective of their goals and desires. Such projects include explaining the world around them, predicting important phenomena, deliberating over what to do, planning for the future, and evaluating their own behavior and patterns of reasoning.

• A thinker is pro tanto epistemically justified in employing as basic each rule of inference in a package of rules if (and by virtue of the fact that) the package of rules is "pragmatically indispensable" for successfully engaging in a rationally required cognitive project, irrespective of whether the thinker is aware of this fact.

• A package of rules is pragmatically indispensable for successfully engaging in a cognitive project just in case it is possible (in the relevant sense) to successfully engage in the project by employing the rules and the package of rules is sufficiently minimal and non-ad hoc.

These clauses presumably need further refinement. And there are several clarifications of key terms that are needed. For instance, what exactly do "successfully engage," "possible (in the relevant sense)," "minimal," and "non-ad hoc" come to?⁴⁸ But I suggest that we adopt a view in the ballpark. It is the most promising way to explain the difference in normative status of MP and FLT.

⁴⁸ See Enoch and Schechter (2008) for some of the needed clarifications. That paper gives a somewhat different account of pragmatic indispensability.
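As an illustration of the alternatives just mentioned (standard definitions supplied here for convenience, not drawn from the chapter): with the Sheffer stroke $\mid$ ("not both") as sole primitive, negation, conjunction, and the conditional are all recoverable, e.g.

$$\neg P \equiv P \mid P, \qquad P \wedge Q \equiv (P \mid Q) \mid (P \mid Q), \qquad P \to Q \equiv P \mid (Q \mid Q).$$

So a thinker could, in principle, pursue the same rationally required projects using any of these packages of primitives.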
6. Conclusion

Recall the central motivating question of this chapter:

What explains the fact that thinkers (or at least, thinkers broadly like us) are epistemically responsible in employing MP as a basic rule in thought but are
not epistemically responsible (or at least, are significantly less responsible) in employing FLT as a basic rule in thought?

The answer I propose is that thinkers are (pro tanto) epistemically responsible in employing rules that are indispensable to rationally required projects. MP is such a rule. FLT is not. This explains the difference in their epistemic status.

In arguing for this view, I leaned heavily on the difference between small steps and big leaps of thought. On my view, then, whether an inferential step is a small step depends on whether the relevant rule is pragmatically indispensable to a rationally required project. This will depend, at least in part, on broad facts about the constitution of the relevant agent and broad facts about the nature of the world. So, perhaps surprisingly, whether a step is small depends on the agent and the world.

Of course, there is much more to say about how exactly to develop this account. But I hope that I've convinced you that this is a promising approach to take.
References

Alston, William. (1988) "An Internalist Externalism," Synthese 74: 265–83.
Belnap, Nuel. (1962) "Tonk, Plonk, and Plink," Analysis 22: 130–4.
Bengson, John. (2015) "The Intellectual Given," Mind 124: 707–60.
Bergmann, Michael. (2006) Justification without Awareness: A Defense of Epistemic Externalism, Oxford: Oxford University Press.
Berry, Sharon. (2013) "Default Reasonableness and the Mathoids," Synthese 190: 3695–713.
Boghossian, Paul. (2000) "Knowledge of Logic," in Boghossian and Peacocke (2000), pp. 229–54.
Boghossian, Paul. (2001) "How are Objective Epistemic Reasons Possible?" Philosophical Studies 106: 1–40.
Boghossian, Paul. (2003) "Blind Reasoning," Proceedings of the Aristotelian Society, Supplementary Volume 77: 225–48.
Boghossian, Paul and Christopher Peacocke, eds. (2000) New Essays on the Apriori, Oxford: Clarendon Press.
BonJour, Laurence. (1980) "Externalist Theories of Empirical Knowledge," Midwest Studies in Philosophy 5(1): 53–73.
BonJour, Laurence. (1997) In Defense of Pure Reason, Cambridge: Cambridge University Press.
Brandom, Robert. (2000) Articulating Reasons: An Introduction to Inferentialism, Cambridge, MA: Harvard University Press.
Burge, Tyler. (1993) "Content Preservation," Philosophical Review 102: 457–88.
Chudnoff, Elijah. (2013) Intuition, Oxford: Oxford University Press.
Cohen, Stewart. (1984) "Justification and Truth," Philosophical Studies 46: 279–95.
Davidson, Donald. (1987) "Knowing One's Own Mind," Proceedings and Addresses of the American Philosophical Association 60: 441–58.
Dogramaci, Sinan. (2013) "Intuitions for Inferences," Philosophical Studies 165: 371–99.
Dogramaci, Sinan. (2015) "Communist Conventions for Deductive Reasoning," Noûs 49: 776–99.
Dretske, Fred. (2000) "Entitlement: Epistemic Rights without Epistemic Duties?" Philosophy and Phenomenological Research 60: 591–606.
Dummett, Michael. (1973) Frege: Philosophy of Language, London: Duckworth. Second edition, 1981.
Dummett, Michael. (1978) "The Justification of Deduction," in his Truth and Other Enigmas, Cambridge, MA: Harvard University Press, pp. 290–318.
Dummett, Michael. (1991) The Logical Basis of Metaphysics, Cambridge, MA: Harvard University Press.
Enoch, David and Joshua Schechter. (2008) "How Are Basic Belief-forming Methods Justified?" Philosophy and Phenomenological Research 76: 547–79.
Evnine, Simon. (2001) "The Universality of Logic: On the Connection Between Rationality and Logical Ability," Mind 110: 334–67.
Field, Hartry. (2000) "Apriority as an Evaluative Notion," in Boghossian and Peacocke (2000), pp. 117–49.
Field, Hartry. (2009) "Epistemology without Metaphysics," Philosophical Studies 143: 249–90.
Fodor, Jerry and Ernie Lepore. (1991) "Why Meaning (Probably) isn't Conceptual Role," Mind and Language 6: 328–34.
Gentzen, Gerhard. (1934/5) "Untersuchungen über das Logische Schließen," Mathematische Zeitschrift 39: 176–210.
Gödel, Kurt. (1964) "What is Cantor's Continuum Problem?" revised and expanded version of a 1947 paper with the same name, in Solomon Feferman et al. (eds.), Collected Works of Kurt Gödel, Volume II, Oxford: Oxford University Press, pp. 254–70.
Goldman, Alvin. (1979) "What is Justified Belief?" in George Pappas (ed.), Justification and Knowledge, Dordrecht: Reidel, pp. 1–23.
Hale, Bob. (2002) "Basic Logical Knowledge," in Anthony O'Hear (ed.), Logic, Thought and Language, Cambridge: Cambridge University Press, pp. 279–304.
Hale, Bob and Crispin Wright. (2001) The Reason's Proper Study, Oxford: Oxford University Press.
Harman, Gilbert. (1986) Change in View: Principles of Reasoning, Cambridge, MA: MIT Press.
Horwich, Paul. (1997) "Implicit Definition, Analytic Truth, and Apriori Knowledge," Noûs 31: 423–40.
Horwich, Paul. (2000) "Stipulation, Meaning and Apriority," in Boghossian and Peacocke (2000), pp. 150–69.
Horwich, Paul. (2005) "Meaning Constitution and Epistemic Rationality," in his Reflections on Meaning, Oxford: Oxford University Press, pp. 134–73.
Huemer, Michael. (2005) Ethical Intuitionism, New York: Palgrave Macmillan.
Lehrer, Keith. (1990) Theory of Knowledge, Boulder: Westview.
Lehrer, Keith and Stewart Cohen. (1983) "Justification, Truth, and Coherence," Synthese 55: 191–207.
McLarty, Colin. (2010) "What Does it Take to Prove Fermat's Last Theorem? Grothendieck and the Logic of Number Theory," Bulletin of Symbolic Logic 16: 359–77.
Millikan, Ruth. (1984) "Naturalist Reflections on Knowledge," Pacific Philosophical Quarterly 65: 315–34.
Peacocke, Christopher. (1992) A Study of Concepts, Cambridge, MA: MIT Press.
Peacocke, Christopher. (1993) "How Are A Priori Truths Possible?" European Journal of Philosophy 1: 175–99.
Peacocke, Christopher. (2004) The Realm of Reason, Oxford: Oxford University Press.
Plantinga, Alvin. (1993) Warrant and Proper Function, Oxford: Oxford University Press.
Pollock, John and Joseph Cruz. (1999) Contemporary Theories of Knowledge, second edition, Lanham: Rowman & Littlefield.
Prawitz, Dag. (1965) Natural Deduction: A Proof-Theoretical Study, Stockholm: Almquist and Wiksell.
Prior, Arthur. (1960) "The Runabout Inference Ticket," Analysis 21: 38–9.
Pritchard, Duncan. (2005) Epistemic Luck, Oxford: Clarendon Press.
Pryor, James. (2000) "The Skeptic and the Dogmatist," Noûs 34: 517–49.
Reichenbach, Hans. (1938) Experience and Prediction, Chicago: University of Chicago Press.
Reichenbach, Hans. (1949) The Theory of Probability, second edition, translated by E.H. Hutten and M. Reichenbach, Berkeley: University of California Press.
Schechter, Joshua. (2013a) "Could Evolution Explain Our Reliability About Logic?" in Tamar Szabó Gendler and John Hawthorne (eds.), Oxford Studies in Epistemology, Volume 4, Oxford: Oxford University Press, pp. 214–39.
Schechter, Joshua. (2013b) "Rational Self-Doubt and the Failure of Closure," Philosophical Studies 163: 429–52.
Schechter, Joshua. (2017) "Difficult Cases and the Epistemic Justification of Moral Belief," Oxford Studies in Metaethics 12: 27–50.
Schechter, Joshua and David Enoch. (2006) "Meaning and Justification: The Case of Modus Ponens," Noûs 40: 687–715.
Shapiro, Stewart. (2009) "We Hold These Truths to Be Self-Evident: But What Do We Mean by That?" Review of Symbolic Logic 2: 175–207.
Sosa, Ernest. (1999) "How Must Knowledge Be Modally Related to What is Known?" Philosophical Topics 26: 373–84.
Sosa, Ernest. (2007) A Virtue Epistemology: Apt Belief and Reflective Knowledge, Volume 1, Oxford: Oxford University Press.
Steinberger, Florian. (2011) "What Harmony Could and Could Not Be," Australasian Journal of Philosophy 89: 617–39.
Swain, Marshall. (1981) Reasons and Knowledge, Ithaca, NY: Cornell University Press.
Tennant, Neil. (1978) Natural Logic, Edinburgh: Edinburgh University Press.
Wedgwood, Ralph. (2002) "Internalism Explained," Philosophy and Phenomenological Research 65: 349–69.
Wedgwood, Ralph. (2011) "Primitively Rational Belief-Forming Processes," in Andrew Reisner and Asbjørn Steglich-Petersen (eds.), Reasons for Belief, Cambridge: Cambridge University Press, pp. 180–200.
Wiles, Andrew. (1995) "Modular Elliptic Curves and Fermat's Last Theorem," Annals of Mathematics 141: 443–551.
Williamson, Timothy. (2000) Knowledge and Its Limits, Oxford: Oxford University Press.
Williamson, Timothy. (2003) "Understanding and Inference," Proceedings of the Aristotelian Society, Supplementary Volume 77: 249–93.
Williamson, Timothy. (2009) "Reference, Inference, and the Semantics of Pejoratives," in Joseph Almog and Paolo Leonardi (eds.), The Philosophy of David Kaplan, Oxford: Oxford University Press, pp. 137–58.
Wright, Crispin. (2004a) "Intuition, Entitlement and the Epistemology of Logical Laws," Dialectica 58: 155–75.
Wright, Crispin. (2004b) "Warrant for Nothing (and Foundations for Free)?" Proceedings of the Aristotelian Society, Supplementary Volume 78: 167–212.
10

With Power Comes Responsibility: Cognitive Capacities and Rational Requirements

Magdalena Balcerak Jackson and Brendan Balcerak Jackson
1. Introduction

Rationality partly concerns what we believe. It is rational to believe things for which we have sufficient reasons, and it is rational to refrain from believing things for which we lack sufficient reasons. But rationality also concerns how we reason. It is rational to follow good rules of inference in our reasoning, for example, and it is irrational to follow bad rules or no rules at all. This remains the case even when the beliefs involved in our reasoning are not ones that it is rational for us to hold, because we lack sufficient reasons for them. Indeed, it remains the case even when the states involved in our reasoning are not beliefs at all, but (for example) mere suppositions we have adopted for the sake of a reductio argument.¹

Despite various disagreements about the nature of rationality, philosophers tend to take it for granted that it binds us all equally as reasoners—that is, that whatever rationality requires of a given subject's reasoning is exactly what it would require of any subject's reasoning. This is not to say that everyone ought to engage in exactly the same processes of reasoning. Rather, it is to say that the various individual reasoning processes of different thinkers are all to be assessed according to the same rules or standards. Perhaps you and I begin with different sets of beliefs, are interested in different questions, and arrive at different conclusions. But both of our reasoning processes ought to respect modus ponens, and ought to avoid the fallacy of affirming the consequent.

In this chapter, however, we argue that there is an important range of cases that are quite unlike this. These are cases where the status of one's reasoning as rationally appropriate or not depends on whether or not one has a certain cognitive capacity and is in a position to exercise it.

¹ Our focus here is on epistemic rationality, and on theoretical rather than practical reasoning. But we believe that the main thesis we advance here concerning the former is quite likely to be true of the latter as well.

Different, equally rational thinkers often differ in
the cognitive capacities they possess, and so the reasoning of a thinker who has a certain cognitive capacity can be rationally appropriate, while the exact same process of reasoning would not be rationally appropriate for another subject who lacks that capacity. As we will see in Section 3, the question of whether a given transition in reasoning is rationally appropriate, in our view, is closely tied to the question of what rational requirements there are for the thinker. So, part of what we aim to defend here is the thesis that the rational requirements for a thinker depend (in part) on what cognitive capacities she possesses. Certain cognitive powers bring with them certain distinctive rational responsibilities.²

The opposing view—that all the requirements of rationality are universal rather than subject-relative—is virtually never endorsed explicitly. But it can be discerned in the role it plays in guiding attempts to account for the rational status of transitions in reasoning in particular cases. According to our view, the following sort of situation is a genuine possibility: it might be that two thinkers A and B share all the same relevant "premise attitudes," e.g. they have all the same relevant beliefs or are making all the same relevant suppositions; and yet it might be rationally appropriate for A to make the transition to some "conclusion attitude," e.g. the belief that p, while it is not rationally appropriate for B to do so. This can happen, in our view, when A's transition is underwritten by a cognitive capacity that B lacks, by virtue of which it is appropriate to evaluate A's reasoning according to a different rational standard than B's reasoning.

The orthodox universalist view cannot recognize this as a genuine possibility, because it denies that there can be intersubjective differences in whether or not a given transition in reasoning counts as rational. Any time we encounter a case that appears to be a case of this sort, the orthodox universalist must try to explain away the appearance. In practice, this leads to psychological cum epistemological conjectures about differences in the premise attitudes of A and B, such that anyone who shared A's premise attitudes would be licensed in reaching the conclusion attitude, but no one with B's premise attitudes would be licensed in doing so.

The disagreement here is not merely abstract. In Section 2 we look at several realistic cases that we think are plausibly cases of exactly the sort under dispute. They are cases in which two similarly situated thinkers have a difference in cognitive capacities that is relevant to some question they are considering, and where we find a corresponding difference in which transitions in reasoning it is rational for them to make. In Section 3 we show how all of these cases can be subsumed under a common account according to which the rational status of a thinker's reasoning is determined by rational requirements that can vary from thinker to thinker. This way of making sense of the cases is at odds with the orthodox universalist view, however, and so defenders of that view must find some alternative way to account for them.

² The current chapter builds on earlier joint work in which we note some of the ways in which distinctive cognitive capacities play a role in helping to account for epistemic rationality and justification in particular cases; see Balcerak Jackson and Balcerak Jackson (2012, 2013).
In Sections 4 and 5 we discuss the two most influential universalist strategies, which we label the cognitivist and perceptualist strategies. These strategies will already be familiar to anyone who has spent some time thinking about any of the sorts of cases we discuss below. But we identify some serious difficulties for both. If the arguments of these sections are correct then we have reason to take seriously the idea that the cases are just what they seem to be—cases in which rationality makes demands of one thinker and not another, because the former is able to exercise some special cognitive capacity that the other does not possess. We conclude in Section 6 by discussing some of the distinctive features of the rational requirements that are grounded in thinkers' cognitive capacities.

In what follows we will be talking a great deal about reasoning, and so we should note at the outset that we adopt a fairly expansive conception of what falls under the label 'reasoning'. We are happy to apply the label to all sorts of cases in which a thinker moves through a sequence of mental states with propositions as their contents, such as beliefs, supposings, perceptions, imaginings, etc. This includes cases in which the thinker makes a transition from an attitude with a certain propositional content to another attitude of the same type, but with a distinct propositional content (for example, a transition from a belief that p to a belief that q). It also includes transitions from one type of attitude to another, whether with the same or with distinct contents (for example, a transition from a visual perceptual experience that p to a belief that p).

This way of using the label is more permissive than the ways used by some advocates of some of the views we discuss below; many advocates of the perceptualist strategy, in particular, would deny that the cases at issue are cases of reasoning at all. But as far as we can tell, nothing of substance hangs on the choice of labels. And in any case, if the view we defend here is correct then the cases will qualify as instances of reasoning even on more demanding conceptions (although we will not try to establish that here).
2. Four Cases

In this section we survey four hypothetical—although not at all far-fetched—cases in which we are naturally inclined to give different appraisals of the reasoning of two thinkers, in a way that corresponds to a difference in their cognitive capacities.
Case 1: Language understanding

Suppose that Angela, a fluent native speaker of German, and Barry, a monolingual English speaker, both hear a third party utter the following German sentence:

(1) Herr Lehmann war in keiner guten Stimmung.
Upon hearing the speaker utter (1), it would be rationally appropriate for Angela, as a competent German speaker, to judge that the speaker’s utterance means that Mr. Lehmann was not in a good mood. This is not to claim that her judgment is
necessarily a rational one. She might have very good independent reasons to doubt that the speaker really did utter a sentence with this meaning; perhaps she knows that the speaker has no idea who Mr. Lehmann is. Or she might have very good reasons to lack confidence in her perception of the utterance; perhaps she knows that she very likely misheard the speaker, or that she is suffering from some rare form of aphasia. Rather, the claim is about Angela's transition in reasoning, from her perception of the sentence uttered to her judgment about its meaning.³ It is an example of the sort of transition that fluent speakers of German, like Angela, make all the time. These transitions are, by and large, rationally appropriate—indeed, it is hard to see how they could not be if language is to serve its function of acquiring and transmitting rational beliefs.

For Barry, of course, things are different. He doesn't understand German at all, and so any particular judgment about what was said would be mere guesswork or speculation. If he did somehow manage to stumble onto the conclusion that the utterance meant that Mr. Lehmann was not in a good mood, we would regard this as a lucky guess, not as a conclusion he has reached in a rationally appropriate way.

This case is likely to provoke a response like the following. Doesn't Angela's competence in German partly consist in knowledge (in some sense) of the meanings of German sentences? And isn't Angela's reasoning rationally appropriate because this knowledge, combined with her awareness of what sentence the speaker has uttered, allows her to arrive at the conclusion that the utterance meant that Mr. Lehmann was not in a good mood? Meanwhile, Barry is not in a position to arrive at the same conclusion rationally, because—as a non-German speaker—he lacks knowledge of the meanings of German sentences.

For now, we just note that this response is an instance of what we called the cognitivist strategy in Section 1. While Angela and Barry both have the belief that the speaker has uttered (1), for the universalist this is not enough, by itself, to make it rational for either of them to conclude that the utterance meant that Mr. Lehmann was not in a good mood. But Angela has further background beliefs that allow her to bridge the rational gap between the belief about which sentence was uttered and the belief about its meaning. Thus, the rational difference between Angela and Barry is traced to a difference in premise attitudes. We return to this strategy in Section 4.
³ There are some subtleties about how exactly to describe the contents of Angela’s perception of the utterance. Does she need to hear a certain phonological contour? Or does she need to hear the utterance as having a certain syntactic structure, or as being the performance of a certain phatic act (in the sense of Austin 1975)? Such subtleties need to be addressed within a full account of the rationality of language understanding. But we will not pause over them here, since our focus is on the features in common among cases of several different sorts, most of which have nothing to do with language understanding. (However, we will return to the question of whether Angela and Barry have the same auditory perception, and to analogous questions about some of the other cases to follow, in Section 5.)
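To make the shape of this proposal vivid, the cognitivist reconstruction of Angela’s reasoning can be set out schematically (the numbering here is ours, added purely for illustration):

(i) Angela accepts that the speaker uttered (1).
(ii) Angela believes—in whatever perhaps tacit sense her competence provides—that (1) means that Mr. Lehmann was not in a good mood.
(iii) So Angela concludes that the utterance meant that Mr. Lehmann was not in a good mood.

Barry satisfies (i) but not (ii), and so, on this proposal, has no rationally appropriate route to (iii).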
Case 2: Face recognition

Most of us have the capacity to recognize familiar people by looking at their faces, but some do not. Prosopagnosia is a rare disorder of face perception (affecting up to 2.5 percent of the population) in which the ability to visually recognize faces is impaired, while other aspects of visual processing as well as intellectual functioning remain intact. Prosopagnosics cope with this disorder in daily life by relying heavily on other cues to identify people, such as voice or clothing.⁴

Suppose that Frank is a person with average face-recognition skills, while his brother Oliver has prosopagnosia. Strolling together through the streets one day, their attention is captured by a black and white photo in a shop window, showing only the central part of a woman’s face. This happens to be a photo of their sister Polly, but one that lacks the usual kinds of cues that Oliver and other prosopagnosics rely on to compensate for their deficit. If Frank is seeing clearly, paying attention, and so on, then it would be rationally appropriate for him to judge that this is a photo of his sister Polly—or, more cautiously, that the face in the photo looks like his sister’s face. As in Case 1, this is a claim about Frank’s reasoning, not about the states that are its starting and ending points. Indeed, suppose that Frank acquires very strong independent evidence that the face in the photo does not look like his sister’s face; suppose that a dozen extremely reliable witnesses tell him it is actually a photo of a man with a beard. In these circumstances, not only should Frank give up his judgment that the face looks like his sister’s, he should also judge that he must have misperceived the facial features in the photo. It would be irrational for him to continue to think that the face in the photo looked exactly like that and yet concede that it did not look like his sister. This is a symptom of the fact that, for Frank, the appearance of the face as having certain features is what rationally licenses the transition to the conclusion that the face looks like his sister’s.⁵

What about Oliver? Perception intuitively provides Oliver with the same information it provides Frank: he sees the same shapes and colors, and he sees them as eyes, nose, mouth, and so on. But given his condition and the lack of external cues, if we were to ask Oliver whether the face in the photo looks like his sister, it is clear that his answer would be no better than a guess. It would not be a rationally appropriate inference.

This case is likely to provoke a different sort of response than Case 1. When Frank views the photo, doesn’t it simply look to him like a photo of his sister? That is, isn’t it part of how his visual experience represents things that the face in the photo is the face of his sister? If so, then the conclusion he reaches is simply a matter of taking his perception at face value. This is a rational thing to do, at least in the absence of

⁴ The study of prosopagnosia has contributed to the view that face recognition constitutes a specific dedicated cognitive system in the human mind/brain.
⁵ Here, as in Case 1 and in the cases below that involve perception, we gloss over subtle questions about how to characterize the exact contents of the thinkers’ perceptual states. (See note 3.)
defeating conditions, and this is why we judge Frank’s transition to be rationally appropriate.⁶ But because of his condition, Oliver does not perceive the face in the photo as his sister’s face. This is why his perception of the photo alone, in the absence of the usual sorts of additional cues that Oliver relies on, does not make it rationally appropriate for him to reach the same conclusion as Frank.

For now, we just note that this is an instance of what we called the perceptualist strategy in Section 1. Like the cognitivist strategy, it seeks to account for the rational difference between the two thinkers in terms of a difference in premise attitudes. But rather than positing additional background beliefs it posits a difference in the representational contents of the perceptual states that the thinkers take as their starting points. We return to this strategy in Section 5.
Case 3: Ultrasound

Hannah is twenty weeks pregnant with her first child. At her second ultrasound exam, Hannah is curious whether she will already be able to find out the gender of her future child. At the exam, she lies on the stretcher and Dr. White moves the sensor over her belly, while they both look at the blurry, shifting black and white picture of the fetus on the monitor. Hannah doesn’t have much experience looking at ultrasound images, but nevertheless, she feels sure it’s a girl: “It looks like a girl to me!” Dr. White is surprised. “You’re right,” she says, “it is a girl.”

Both Hannah and Dr. White are looking at the same image in the same viewing conditions. And on that basis, both come to the same conclusion about the gender of the fetus. But Dr. White is an experienced doctor with highly developed ultrasound-reading skills. Given what she sees in the monitor, the rational thing for her to do is to conclude that the fetus is a girl. Hannah, by contrast, is a complete ultrasound novice; her conclusion that the fetus is a girl has more the character of a hunch or a gut feeling than a rationally appropriate bit of reasoning.

Suppose that further tests reveal that the fetus is actually male. Upon learning this, Dr. White is under rational pressure to judge that she must have missed or misinterpreted something in the ultrasound image. After all, judging that the fetus was female was the right thing for her to do given the way the ultrasound image appeared, and yet this judgment turns out to have been wrong. This is analogous to the point about Frank and the photo of the man with the beard in Case 2. But notice that Hannah is under no such rational pressure. This is because, for Hannah, there is no rational link between the appearance of the ultrasound image and the judgment that the fetus is female.
Case 4: Mind reading

Aisha and her classmate Hassan are watching the other children in their class take a sports exam. Aisha is an average teenager. Hassan has a specific form of autism

⁶ Versions of this response may disagree about whether the rational appropriateness of Frank’s response also depends on him having good reason to think that his visual perceptions (or perceptions of faces in particular) are generally reliable.
spectrum disorder that makes it extremely difficult for him to predict the emotional reactions of others in the way that most people do. He does all right in familiar situations, or when he gets explicit verbal cues, but he lacks the ability to put himself in the shoes of other people.

One by one, the children have to go through an obstacle course set up in the gym. Next up is Selina, widely known to be the most athletic girl in class. Selina eagerly starts on the first obstacle, but halfway through she loses her grip and lands with an awkward thump on the mat. The other children laugh, and Selina quits the exercise. Observing the sequence of events, Aisha immediately judges that Selina feels embarrassed. But Hassan doesn’t know what to think. Does Selina think it was funny, like the other children do? Is she angry? Is she embarrassed?

We can suppose that there are no relevant differences between what Hassan and Aisha see and hear, and that neither of them is more familiar with Selina or better informed about how people behave in general. But Aisha has the capacity to observe the scene, draw on her background knowledge, and—perhaps after ‘putting herself in Selina’s shoes’—arrive at the conclusion that she must have felt embarrassed. This transition is rationally appropriate for Aisha; indeed, it is just an instance of a kind of reasoning that most of us engage in more or less automatically all the time, and via which we typically arrive at rational judgments about other people’s mental states. But Hassan’s capacities in this area are markedly diminished, and for him the judgment that Selina was embarrassed, rather than that she was amused or angry, would be an irrational leap.

The phenomena on display in Cases 1–4 each have their own enormous (and largely non-overlapping) literature. We make no attempt to engage in great detail with any of these specific literatures, although some of it will come up in Sections 4 and 5. Rather, we think it will be more helpful to take a step back: to articulate a unified account that captures what is common to reasoning in all of these areas, and to weigh this account against the other broad strategies that one encounters again and again in the literature on each area.
3. Subject-relative Rational Requirements

The cases in Section 2 are all cases in which a given bit of reasoning is rationally appropriate for one thinker, while what appears to be the same bit of reasoning is not rationally appropriate for another thinker. When Angela hears the speaker utter (1) it is rationally appropriate for her to conclude that the utterance meant that Mr. Lehmann was not in a good mood. But even though Barry also hears the speaker utter (1), it would not be rationally appropriate for him to draw the same conclusion. Likewise, when Dr. White observes the ultrasound image it is rational for her to conclude that the fetus is female; but when Hannah observes the same image it is not.

What accounts for these differences? Our aim in this section is to outline our answer to this question. We do not here argue for our answer directly, although we do try to show that it has some intuitive appeal. Our main support for it comes in
Sections 4 and 5, where we argue that attempts to account for the rational differences in orthodox universalist terms face serious explanatory difficulties.

We need to begin with some framework for evaluating transitions in reasoning more generally. The framework that we find most helpful accounts for the rational appropriateness of transitions in terms of there being rational requirements for the thinker that sanction the transition in question. To illustrate, suppose that Ben believes that Konstanz is in Germany, and also believes that if Konstanz is in Germany then its residents speak German. And suppose that he reasons from these beliefs to the conclusion that the residents of Konstanz speak German. Clearly this bit of reasoning is rationally appropriate. On our view, this is because it is sanctioned by a rational requirement for Ben that we can formulate as follows:

(MP) Rationality requires that: if Ben believes that p and that p→q then he believes that q.

(MP) is a so-called “wide-scope” requirement. It says only that Ben is required to believe that q conditional upon believing that p and that p→q; it does not say that he is outright required to believe that q.⁷ And indeed, he might not be: even though he believes that p and that p→q, his reasons for p or for p→q might be very weak or nonexistent, and he might have much more compelling reasons against q. If so then it would be irrational, all things considered, for him to believe that q. This is compatible with (MP) because (MP) articulates a relational fact about rationality, a fact about the rational connection between the beliefs that p and that p→q, on the one hand, and the belief that q on the other. It is this feature that makes wide-scope rational requirements suitable for appraising transitions in reasoning—which, as we noted in Section 1, can be appraised as appropriate or inappropriate independently of the rational status of the states that they are transitions between.

One might wonder: In what sense does rationality require Ben to believe that q, given that he believes that p and that p→q? Surely Ben would not automatically count as irrational for failing to draw out this particular consequence of his beliefs. Perhaps this just isn’t an interesting or relevant consequence for him at the moment, or perhaps it is more important to focus on other consequences. So, isn’t (MP) much too demanding? Or consider another case. Suppose that Ben believes that p and believes that q, and reasons from these beliefs to the belief that p&q. This transition is rationally appropriate, and so on our view is sanctioned by a requirement like the following:

(CI) Rationality requires that: if Ben believes that p and that q then he believes that p&q.
⁷ See Broome (1999, 2013). One might worry whether the sense in which the requirement to believe that q is conditional upon having the other beliefs is adequately captured by the English “if . . . then” construction employed in (MP). But such worries are tangential to our purposes here.
But if Ben is required to believe all the conjunctions of things he believes, he will be forced into an endless process of acquiring more and more conjunctive beliefs. Surely this can’t be right.

These concerns are reminiscent of—indeed, just are versions of—well-known difficulties that have been raised for attempts to see logic as providing rules or norms for reasoning.⁸ We cannot hope to fully dispel them here. But in our view, what they show is that it is too simplistic to think of requirements like (MP) and (CI) as rules for forming or revising beliefs. They are better thought of as rules for settling questions. It is true that Ben is not automatically required to believe that q just because he believes that p and that p→q. But if he asks himself whether q then rationality certainly does require him to settle the question in favor of q rather than ¬q—given, of course, that he continues to believe that p and that p→q. (If other, more compelling considerations require him to settle the question in favor of ¬q then he can respect the requirement in (MP) by giving up his belief that p or that p→q.) Likewise, Ben is not required to form the belief that p&q just because he believes that p and that q. But Ben might start wondering whether p&q, for whatever reason, and if he tries to settle the question, it had better be in favor of p&q rather than ¬(p&q)—given, once again, that he continues to believe that p and that q.⁹

In our view, then, a bit of reasoning is rationally appropriate when it is sanctioned by a wide-scope rational requirement for the thinker.¹⁰ Wide-scope rational requirements themselves are understood as rules for the thinker for settling questions. This is not to say that a bit of reasoning is rationally appropriate only when it is, in fact, done in an effort to settle a question. Ben might not be entertaining the question whether q at all; perhaps he just forms the belief that q because he happens to notice that it follows from his other beliefs. Still, his reasoning counts as rational because it is an instance of doing what the rules for settling questions tell him to do.

Rational requirements corresponding to basic logical rules, such as (MP) and (CI), are plausible examples of rational requirements that sanction transitions in reasoning. But we do not assume that logical rules exhaust such rational requirements, nor that all requirements are universal. Quite the contrary: our view is that the reasoning by our expert subjects in Section 2 is sanctioned by requirements that go far beyond the ones corresponding to basic logical rules.
⁸ See Harman (1984) and MacFarlane (2004).
⁹ There is a close connection between the view of (wide-scope) rational requirements as rules for settling questions and a contrastivist view of rational requirements (see perhaps Snedegar 2017). This is because the possible answers to a question can be seen as providing the contrast class invoked by the contrastivist. There is also a nice fit between this way of thinking about rational requirements and the view of reasons developed by Pamela Hieronymi, according to which reasons for believing that p are considerations that speak affirmatively to the question whether p (see Hieronymi 2005, 2011).
¹⁰ We won’t try here to spell out in detail what it is for a requirement to sanction a bit of reasoning, since our focus is on the prior question of what sorts of rational requirements there are for thinkers.
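It may help to set out the scope distinction explicitly. Writing ‘B(p)’ as shorthand for ‘Ben believes that p’ (our abbreviation, used only for illustration), the two readings of a modus ponens requirement can be contrasted as follows:

Wide scope: Rationality requires that: (if B(p) and B(p→q), then B(q)).
Narrow scope: If B(p) and B(p→q), then rationality requires that: B(q).

(MP) and (CI) are to be read wide scope. On that reading Ben can comply with (MP) either by coming to believe that q or by giving up his belief that p or his belief that p→q; on the narrow-scope reading only the first route is available.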
The requirements we have in mind are formulated using the notion of acceptance. Acceptance, as we intend it, is a highly general propositional attitude that includes standing belief and occurrent judgment, as well as conscious inclinations to believe and perceptual experiences whose veracity the subject is currently prepared to take for granted. For example, Aisha accepts that Selina is the most athletic girl in class because it is a standing belief of hers that this is so. And Angela accepts that the speaker uttered (1) because she hears the speaker as uttering (1) and she does not doubt that her perception is veridical. Aisha can stop accepting that Selina is the most athletic by abandoning or revising her belief; Angela can stop accepting that the speaker uttered (1) by no longer taking it for granted that her auditory experience is veridical. In some cases, the content of a state of acceptance might be extremely difficult to articulate explicitly. When Dr. White observes the ultrasound image she comes to accept that the image has certain characteristics, but we may not be able to articulate these characteristics with any precision in ordinary language.

Let us return now to our first case, of Angela and Barry and the German utterance. In our view, Angela’s reasoning in this case is rational because it is sanctioned by a rational requirement roughly like the following:

(GA) Rationality requires that: if Angela accepts that the speaker uttered (1) then she accepts that the utterance meant that Mr. Lehmann was not in a good mood.

Like (MP) and (CI), (GA) is a wide-scope requirement. This is as it should be. As we noted in Section 2, it might not be rational, all things considered, for Angela to accept that the utterance meant that Mr. Lehmann was not in a good mood. Whether it is or not depends inter alia on whether she has any reasons to reject this conclusion, and on whether it is rational for her to accept that the speaker uttered (1). The purpose of (GA) is to capture what is, for Angela, a rational relation between accepting that the speaker has uttered a certain sentence and accepting that the speaker has uttered a sentence that has a certain meaning. This relation holds independently of the rational status of the states of acceptance that it relates.

Of course, (GA) is not the only rational requirement for Angela that links specific German sentences to their meanings. There is one for each sentence of German (or at least for each sentence of the fragment of German that Angela has the capacity to understand). Moreover, it is extremely plausible that these requirements are not fundamental, but rather emerge as consequences of more abstract and general rules or requirements for Angela for reasoning in the relevant domain. These include rules about what to do with sentences that have the subject-predicate structure exemplified by (1), with sentences in the past tense, with sentences containing proper names and quantified noun phrases (like “keiner guten Stimmung”), and so on. This is not to say that Angela knows or accepts these rules, any more than she must know or accept (GA) (or (MP), for that matter) in order for it to be a rational requirement for her.
Whether we can expect to find representations of the rules anywhere in her psychology depends on how her capacity to understand German is in fact implemented.

Also like (MP) and (CI), (GA) should be understood as functioning as a rule for settling questions rather than as a rule for forming or revising beliefs, or states of acceptance more generally. It does not command Angela to form the belief that the utterance meant that Mr. Lehmann was not in a good mood, not even conditionally upon accepting that the speaker uttered (1). Rather, it tells her how to settle a certain question should it come up.¹¹ This too is as it should be. It might not currently matter to Angela what the speaker’s utterance means; it might be more useful to focus on other inferences that she can make—e.g. that the speaker has a Bavarian accent, that she interrupted Angela, etc. If so then Angela would in no way be irrational for failing to judge that the utterance meant that Mr. Lehmann was not in a good mood. But if the question comes up—for example, because Barry asks her—then, as a fluent speaker of German, she ought to answer that the speaker’s utterance meant that Mr. Lehmann was not in a good mood, unless she is prepared to revise her opinion about what sentence was uttered. For her to do otherwise would be for her to make a rational mistake.

This is not to say that it is a general rule of rationality for all thinkers that one answer this question in this way. For Barry, and for other non-German speakers, it would not be a rational mistake to suspend judgment. But this is only an obstacle to recognizing (GA) as a genuine rational requirement for Angela when it is conjoined with the assumption that all genuine rational requirements are universal, i.e. that there is no intersubjective variation in rational requirements. In our view, this assumption is to be rejected. There are intersubjective variations in rational requirements, because what rationality requires of one depends partly on the cognitive capacities that one has at one’s disposal. Barry has no capacity to understand German utterances, and for him any conclusion about the meaning of the utterance would be no better than a wild guess. But Angela does have the capacity to understand German utterances, and one of its main duties is precisely to help her bridge gaps like these between premise attitudes and conclusion attitudes, to help her successfully negotiate certain transitions in thought that would otherwise be irrational leaps. This difference in capacities means that there are different rational requirements for how Barry and Angela are to settle questions about the meanings of German utterances. Angela is required to settle them by consulting the special cognitive capacity she has for answering questions of just these sorts, and this is what generates requirements like (GA) that link sentences to their meanings. There is no such requirement for Barry because he has no special cognitive capacity for answering these sorts of questions.
¹¹ As noted above, this is not to say that the transition is rational for Angela only if the question does come up. The transition counts as rational in general because it is sanctioned by the requirements for Angela for settling questions.
This is why the transition in Case 1 is rationally appropriate for Angela but would not be for Barry.

Why do differences in cognitive capacities underwrite differences in rational requirements in this way? Ultimately, one’s full answer to this is likely to depend on one’s foundational views about the nature of rationality. But notice that we are already quite familiar with analogous connections between capacities and normative requirements of other sorts. Most would agree, for example, that a doctor flying on an airplane has some responsibility to assist a fellow passenger who suddenly falls ill, while the other passengers who lack medical training have no such responsibility. And Spider-Man recognizes a special duty to try to stop supervillains plotting to destroy the city, partly in virtue of the fact that he has extraordinary powers that give him the capacity to do so. Though less dramatic, the case of Angela is of an analogous sort. She has a rational responsibility to answer questions about the meanings of German utterances in certain ways, partly in virtue of the fact that she has a cognitive power that lets her do so.¹²

Our account of the other cases in Section 2 follows the same pattern as our account of Case 1. In some cases, the capacity in question is one that is typical or statistically normal for human thinkers. For example, most ordinary thinkers have a capacity to recognize people’s faces on the basis of visually perceptible facial features. This capacity generates rational requirements for ordinary thinkers that link states of acceptance that the person has such-and-such facial features to states of acceptance that the person is (or looks just like) so-and-so. Different thinkers have the capacity to recognize different faces, which is why it can be rational for one person to judge that a perceived face is the face of her brother, say, while for most other subjects this would be an irrational leap. Prosopagnosics, who have a systematic deficit in their capacity for facial recognition, are a more extreme example of this kind of difference, and this is what accounts for the rational difference observed in Case 2. Similarly, typical thinkers have a highly developed capacity to track the mental states of others, and because they have this capacity, rationality requires them to utilize it in answering the questions about others’ mental states that arise in the course of trying to explain and predict their behavior. But rationality makes fewer demands of thinkers whose ‘mind-reading’ capacities are more limited, such as thinkers with autism spectrum disorders like Hassan in Case 4. In other cases, the capacity in question is better thought of as a special kind of expertise. Case 1 is like this; Angela’s capacity to understand German gives her a kind of expertise that is only had by a small subset of neurotypical thinkers. So is Case 3: through experience and training, Dr. White has a specialized capacity to interpret the visually perceptible features of ultrasound
¹² Angela’s responsibility can be mitigated by limitations in her German capacity, such as when she encounters a German expression that she never learned the meaning of. Likewise if something interferes with her ability to exercise her capacity, such as when neuroscientists are subjecting her speech centers to a powerful electromagnetic field.
images that most ordinary thinkers do not have. It is this capacity that underwrites the rationality of her inference about the sex of the fetus, an inference that would be irrational for most of us to make.

Our account leaves it open how, exactly, the cognitive capacity is implemented in any particular case. Perhaps Angela’s German capacity is encapsulated in a special-purpose language module, and perhaps Aisha’s ability to ascribe mental states to others is more distributed. Perhaps Dr. White’s capacity to interpret ultrasounds relies on classical computational mechanisms, perhaps it relies on a suitably trained connectionist network. Our account does not require us to prejudge any of these issues—nor should it, as they are all difficult research questions for cognitive science. On our account, the rational differences in the cases are to be explained, not by making speculations about how the cognitive capacities at issue are implemented, but by drawing out the connection between those capacities and what rationality requires of thinkers who possess them. As we will see in Sections 4 and 5, the orthodox universalist accounts cannot say the same.
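Although we will not argue for a fully general formulation here, the pattern common to these cases can be displayed in a rough schema (the formulation is ours, and only a first approximation). For a thinker S with a cognitive capacity C for answering questions in a domain D:

(CR) Rationality requires that: if S accepts the inputs that C operates on, and a question in D arises for S, then S settles that question as the competent exercise of C directs—or else gives up or revises the relevant input states of acceptance.

(GA) can be seen as an instance of this schema, generated by Angela’s capacity to understand German; for Barry, who has no such capacity, there is no corresponding instance.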
4. Problems for the Cognitivist Strategy

We have already encountered an instance of the cognitivist strategy in response to Case 1, the case of German competence. According to a very familiar story, competence with a natural language partly consists in a certain body of knowledge that includes knowledge of the meanings of sentences. If this is correct, Angela’s reasoning about the utterance of (1) can be seen to be rationally appropriate because her knowledge of meaning in German provides her with “auxiliary premise attitudes” that allow her to make a rational inference from her information about which sentence the speaker utters. Barry lacks these auxiliary premises, and so he is not in a position to rationally draw the same conclusion as Angela. On the orthodox universalist view of rational requirements, the rational difference between Angela and Barry must be traced to a difference in the premise attitudes they reason from. The cognitivist proposal about competence with German provides a specific answer to the question of what those premise attitudes are.

This sketch of the cognitivist proposal for Case 1 is oversimplified, of course, and needs to be refined in various ways. For starters, Angela’s competence with German is systematic and productive, and so the most plausible version of this strategy attributes to Angela knowledge of a finite compositional meaning theory from which the meanings of individual German sentences can be deduced. No such theory is consciously accessible to Angela, and so this knowledge must be said to be tacit or implicit in some sense. This makes it hard to view Angela as having knowledge, strictly speaking, of the theory, or even to view her as believing it in the ordinary sense. (Thus, we find Chomsky (1980) recommending that we abandon the label “tacit knowledge” in favor of “cognizing” for the attitude that competent speakers are said to bear to the grammar of their language.) But it is crucial to the cognitivist account
that Angela has some sort of belief-like premise attitudes towards the claims of the relevant theory, and moreover that those attitudes play a role in her reasoning that renders them relevant for determining whether or not her reasoning lives up to all the requirements that rationality places upon it.¹³

Versions of the cognitivist strategy can be—and often have been—advanced to account for each of the cases discussed in Section 2. Perhaps most familiar is the cognitivist strategy for accounting for cases like Case 4, the mind-reading case. According to the well-known theory theory, Aisha’s capacity to ascribe mental states to others is underwritten by a tacitly known theory of mind which she applies in particular situations to guide her ascriptions of attitudes to others and to predict their behavior.¹⁴ On this view, thinkers with autistic spectrum disorders lack a fully developed theory of mind (or perhaps have trouble accessing it). This proposal allows for an explanation of the rational difference between Aisha and Hassan that is exactly analogous to the cognitivist’s explanation of the rational difference between Angela and Barry. Aisha and Hassan have the same starting beliefs about the sequence of events involving Selina. But Aisha’s theory of mind provides her with additional auxiliary premise attitudes that allow her to rationally bridge the inferential gap to the conclusion that Selina is embarrassed. Since Hassan lacks these additional premise attitudes, it would be irrational for him to reach the same conclusion.

Something similar can be said for Dr. White in Case 3: her ultrasound training and experience allow her to visually identify various abstract features of ultrasound images, and she knows principles that correlate these features with physiological characteristics of the fetus, such as its sex. As in the other cases, much of this knowledge may be tacit: she might not be able to explicitly describe the abstract features of the ultrasound image that she relies on, or state the principles that correlate them to fetal characteristics. Nevertheless, this tacit knowledge provides Dr. White with additional premise attitudes that put her in a position to make rationally appropriate transitions in reasoning that her patient Hannah is unable to make.

Finally, consider Frank’s capacity to recognize faces in Case 2. It would be odd to describe Frank as having tacit knowledge of a “theory of faces” analogous to Angela’s tacit knowledge of a theory of linguistic meaning, or Aisha’s tacit folk psychological theory. But psychologists have explored the idea that Frank might have tacit knowledge of a “face-space”: a multi-dimensional space of abstract facial features, such as features having to do with proportion and facial geometry, which allows Frank to
¹³ For recent accounts along these lines, see Gross (2010), Heck (2006), Lepore (1997), and Longworth (2008). Some theorists hold that linguistic competence involves tacit knowledge, in a sense, but argue that having tacit knowledge ultimately just consists in the possession of some reliable mechanism for mapping linguistic expressions onto meanings (see for example Matthews 2003). Others hold that it just consists in having dispositions to behave in certain ways in light of one’s desires or intentions (Dwyer and Pietroski 1996). Such views do not qualify as instances of the cognitivist strategy in our sense.
¹⁴ For example, see Carruthers (2009), Gopnik and Wellman (1992), and Gopnik and Meltzoff (1997).
organize information about the faces of people with whom he is familiar.¹⁵ If so then perhaps he draws on this information when identifying perceived faces, in order to reason roughly as follows: the face I am currently perceiving is located in region Rᵢ of my face-space; Rᵢ is within the Polly region; so, the face I am currently perceiving is (probably) Polly’s. This picture is compatible with various explanations of what is going on with Frank’s prosopagnosic brother Oliver. Perhaps his condition is caused by a deficit in his ability to construct or maintain a face-space. Or perhaps it is caused by a deficit in his ability to extract the information from perception that would allow him to locate the face currently being perceived within his face-space.¹⁶ In any case, the explanation of the rational difference between Frank and Oliver can proceed along analogous lines to the cognitivist explanations of the other cases.

It is important to realize that the cognitivist strategy—unlike our view developed in Section 3—relies essentially on a certain kind of psychological hypothesis about how the cognitive capacity in question is actually implemented in the thinker. Put crudely, the cognitivist strategy is to see the expert thinker as doing something very much like going through the steps of a good argument—whether a deductively valid argument, as cognitivists about language understanding tend to assume, or a defeasible one, as cognitivists about face recognition would likely say. The extra premises that are needed for the argument to qualify as good are to be provided by the tacit knowledge that the cognitivist posits. In order to do this, however, tacit knowledge needs to be psychologically robust. For example, some theorists hold that linguistic competence involves tacit knowledge of language, but go on to say that this ultimately just consists in the possession of some reliable mechanism for mapping linguistic expressions onto meanings.¹⁷ Tacit knowledge in this sense merely labels the transition in attitude from linguistic expression to meaning. It does nothing to explain why the transition is rationally appropriate.¹⁸ In order to do its explanatory work, then, the cognitivist strategy needs to see the cognitive capacity as being implemented by mental states and processes that are, in some hard-to-specify sense, sufficiently like paradigmatic cases of actually reasoning through the steps of a good argument. But whether any of the capacities in play in the cases in Section 2 actually work like this is an open question from the point of view of empirical research in cognitive science.

This initial concern in fact points toward a deeper problem for the strategy. Consider Aisha’s reasoning in Case 4. According to the cognitivist account, there is a set of propositions p₁, . . . , pₙ about Selina, and a folk psychological theory

¹⁵ For example, see O’Toole (2011).
¹⁶ There may be different explanations for distinct types of prosopagnosia with different etiologies; see Behrmann et al. (2011).
¹⁷ See, for example, Matthews (2003).
¹⁸ Others hold that having tacit knowledge is just a matter of having dispositions to behave in certain ways in light of one’s desires or intentions (see, for example, Dänzer 2016 and Dwyer and Pietroski 1996). But tacit knowledge in this sense presupposes the availability of some other explanation of the rationality of the transitions in question, since the attitudes it recommends attributing are just whatever attitudes would be needed to rationalize the thinker’s behavior in light of her desires.
consisting of principles P₁, . . . , Pₘ, such that it would be rationally appropriate for anyone who accepted all of p₁, . . . , pₙ and P₁, . . . , Pₘ to arrive at the conclusion that Selina was embarrassed.

(CM) Rationality requires that: if one accepts that p₁, . . . , pₙ and one accepts that P₁, . . . , Pₘ then one accepts that Selina was embarrassed.

In Case 4 it is stipulated that Aisha and Hassan both accept that p₁, . . . , pₙ, so that both satisfy the first conjunct of the antecedent of (CM). According to the cognitivist, Aisha also satisfies the second conjunct of the antecedent of (CM) by virtue of her tacit knowledge of a folk psychological theory; this is why her reasoning in the case is rationally appropriate. In other words, the cognitivist is committed to counting tacit knowledge of P₁, . . . , Pₘ in the same way as other attitudes of acceptance (such as belief) when we assess Aisha’s reasoning to determine whether or not it is sanctioned by (CM).

However, this commitment leads to incorrect predictions about the rational status of Aisha’s reasoning in a wide range of cases. For example, focus on some psychological principle Pᵢ that is alleged to be included in Aisha’s tacit knowledge, and suppose that we convince her to accept that if Pᵢ is true then some further proposition r is true. (Perhaps we provide her with overwhelmingly persuasive expert testimony that if Pᵢ is true then humans must have evolved from early gorillas rather than hominid apes.) Since ex hypothesi Aisha counts as accepting that Pᵢ by virtue of her tacit knowledge, once she comes to accept that if Pᵢ then r it should be rationally appropriate for her to conclude that r; her reasoning should be sanctioned by (MP) above. But this is clearly wrong: if Aisha were to conclude that r, this would be no better than a wild guess. (If we ask Aisha whether humans evolved from early gorillas rather than hominid apes, she is clearly not under any rational pressure to answer yes.) It is easy to see how to generate further incorrect predictions along these lines: for any genuine rational requirement R for Aisha, it should be possible to construct a hypothetical case in which R would sanction some bit of reasoning, so long as we take Aisha’s tacit knowledge of P₁, . . . , Pₘ into consideration, but where intuitively her reasoning should not qualify as rationally appropriate.

One might worry that this objection does not give sufficient weight to the cognitivist’s qualification of the knowledge posited as being tacit. Isn’t it characteristic of tacit knowledge that p that the thinker might not be able to raise her knowledge to conscious awareness and reflect on it? The thinker might not even possess the concepts that would be needed to form an ordinary conscious belief that p. Most importantly, isn’t it characteristic of tacit knowledge that a thinker’s tacit knowledge that p is inferentially isolated from the rest of her beliefs and other attitudes?¹⁹ If so then it is no good objecting that it would be irrational for Aisha to conclude that r in the case just described. This is precisely what we would expect if the knowledge in question were merely tacit.
¹⁹ See Evans (1981).
The cognitivist cannot have it both ways, however. To respond to the objection by insisting that tacit knowledge is inferentially isolated is, in effect, to concede that merely tacit knowledge that Pᵢ should not count towards determining whether her reasoning to the conclusion that r is rationally appropriate. This is to concede that Aisha’s tacit knowledge should be ignored when measuring her reasoning against the rational requirements. But in order to explain Case 4, the cognitivist needs to insist that Aisha’s tacit knowledge should not be ignored. Either Aisha’s tacit knowledge is relevant for assessing her reasoning, or it is not. But it cannot be both.

The cognitivist could try to insist that Aisha’s tacit knowledge is only relevant for assessing her reasoning in a certain range of cases, such as Case 4 and other cases in which her folk psychological capacity seems to play an important role in her reasoning. But this seems like an entirely ad hoc suggestion. It proposes a special class of premise attitudes that are ordinarily invisible to rational assessment, but that become visible precisely when they need to be in order to get the right results. Such a suggestion is motivated more by the need to preserve the orthodox universalist picture of rationality than by any prospect of giving a satisfying account of the cases.

Before turning to the perceptualist strategy, we should emphasize that we do not take the problems raised in this section to speak against (or in favor of) tacit knowledge views taken as empirical hypotheses about how the cognitive capacities in question are implemented in thinkers like us. Perhaps competence in German involves tacit knowledge of a compositional semantics for German. And perhaps the capacity to recognize faces involves mental representations of relative similarity along a number of abstract dimensions, and perhaps thinkers recognize faces by means of formal manipulations of these representations. Our contention has been that hypotheses like these, even if they are correct, are not adequate to explain the conditions under which the transitions in reasoning we have been considering are rationally appropriate. It is much more promising to give an explanation of the sort we sketched in Section 3, in terms of rational requirements to exercise one’s cognitive capacities for answering questions in specific domains, however those capacities turn out to be implemented.
5. Problems for the Perceptualist Strategy

We already briefly encountered an instance of the perceptualist strategy in response to Case 2, the case of face recognition. What explains the difference in rationality between Frank and Oliver in that case? Many will find it tempting to say that Frank’s capacity to recognize faces is partly constituted by (or at least brings with it) an ability to simply see faces as the faces of specific people he knows. When Frank looks at the photo, he does not only see shapes and colors, or eyes, nose and mouth in a certain orientation. His visual experience also represents it to him as a photo of Polly’s face. Judging that it is a photo of Polly’s face is then merely a matter of deciding to endorse what his visual experience is telling him. By contrast, since Oliver does not have a
normally functioning facial recognition capacity, his visual experience does not represent the photo as a photo of Polly’s face.

This is a very different sort of explanation than the cognitivist explanation sketched in Section 4. But like the cognitivist strategy, it too seeks to explain the case in terms of a difference in premise attitudes. Frank and Oliver have perceptual experiences (and corresponding states of acceptance) with different contents, and so it is not rationally appropriate for them to draw the same conclusions.²⁰

The perceptualist strategy can be—and in most cases has been—extended to each of the other sorts of cases discussed in Section 2. The most influential alternatives to cognitivist accounts of language understanding, for example, are versions of the perceptualist strategy.²¹ According to such accounts, when Angela hears the speaker utter (1), it is part of the content of her auditory experience (or of a conscious, perception-like experience that accompanies it) that the speaker said that Mr. Lehmann was not in a good mood. It is rationally appropriate for Angela to judge that this is what the speaker said because, in the circumstances described, it is rationally appropriate for her to take her experience at face value. The most influential recent alternatives to cognitivist accounts of mind-reading cases like Case 4 are so-called direct perception approaches, which are also versions of the perceptualist strategy.²² According to these approaches, when Aisha observes the sequence of events involving Selina she simply perceives her as being embarrassed. Autism spectrum disorders like Hassan’s, on this kind of approach, are understood as involving or leading to difficulties in the ability to have these sorts of perceptual experiences. It is not hard to see how to develop a closely analogous story for Case 3, the ultrasound case. Perhaps when Dr. White looks at the ultrasound image, she simply perceives the fetus as female, even though an ordinary observer like Hannah would not perceive the fetus as female (or as male). For Dr. White to conclude that the fetus is female, then, is just for her to take her perception at face value.²³
²⁰ As indicated in Section 1, many perceptualists would not want to describe Frank as engaging in a process of reasoning at all; he simply accepts that things are as they perceptually seem to be to him. Given our very broad usage of the label, however, Frank does count as engaging in reasoning: he undergoes a transition from one attitude—a visual perceptual state with a certain content—to another attitude—a judgment or belief with the same content. It does not matter for our purposes whether or not we use the label “reasoning” to describe Frank’s thinking in a case like this.
²¹ See, for example, Azzouni (2013), Brogaard (2018), Fricker (2003), and Hunter (1998).
²² See, for example, Carruthers (2015), Gallagher (2008), Gallagher and Zahavi (2008), Green (2010), Krueger (2012), Lavelle (2012), McNeill (2012, 2015), Reddy (2008), and Smith (2010, 2015). A more traditional opponent of the theory theory is the so-called simulation theory, according to which Aisha is able to simulate the chain of events from Selina’s perspective—to ‘put herself in Selina’s shoes’—and then infer that Selina is embarrassed via introspection on her own experience (see, for example, Goldman 2006, Gordon 1996, and Heal 1996). We set the simulation theory aside because it does not easily extend to the other sorts of cases we are interested in here.
²³ Such a proposal goes beyond the weaker claim that Dr. White’s ability to detect sex via ultrasound is acquired via visual learning. Paradigmatic cases of visual learning are cases in which the perceiver develops an ability to consciously detect more visual features of the stimulus, or to consciously detect more fine-grained differences among visual features. But the sex of the fetus is not a visual feature of the ultrasound
It is important to note that the perceptualist strategy, like the cognitivist strategy, essentially relies on a particular sort of hypothesis about how the cognitive capacities in question are actually implemented in thinkers like us. The perceptualist’s guiding idea is to account for the rational status of the expert reasoner’s judgment in each case on the model of perceptual judgment more generally. If we are to take this guiding idea literally then we need to be prepared to attribute to the expert reasoner conscious perceptual experiences whose contents go far beyond familiar low-level properties such as being red or being round, to include high-level properties such as meaning that Mr. Lehmann was not in a good mood, being embarrassed, and being a female fetus. However, it is a matter of intense debate among philosophers and cognitive scientists whether perceptual experience actually can represent such high-level properties.²⁴ The perceptualist strategy depends on a particular outcome of this debate.

Moreover, the most plausible—although still extremely controversial—account of how such high-level properties might be able to make their way into the contents of perceptual experience is via cognitive penetration, whereby the contents of the thinker’s perceptual experiences are somehow influenced by her beliefs, memories, and other cognitive states.²⁵ But if one’s perceptual experience represents some high-level property being F due to the influence of background cognitive states—including perhaps unfounded beliefs or irrational biases—it is no longer clear why (and in what circumstances) it is still rationally appropriate to take perceptual experiences as of something’s being F at face value. If Aisha has an unjustified belief that Selina is angry at her, for example, and this makes it visually appear to Aisha that Selina is angry, then it is not at all obvious that it is rationally appropriate for Aisha to trust her perception and take it to confirm that Selina really is angry.²⁶

The perceptualist can avoid commitment to high-level contents of perception by retreating from claims about perceptual experience, strictly speaking, to claims about a broader class of perception-like conscious seemings.²⁷ For example, instead of saying that Aisha literally visually (or otherwise) perceives Selina as being embarrassed, the perceptualist can say that when Aisha observes the sequence of events it consciously seems to her that Selina is embarrassed. Similarly, it might be implausible that Angela literally hears the utterance as meaning that Mr. Lehmann was not in a good mood, but it is more plausible that when she hears the utterance it consciously
image. (See Watanabe and Sasaki 2015 for a useful discussion of visual learning, and Chudnoff 2018 for a discussion of its epistemic significance.)
²⁴ For optimism about this see Siewert (1998) and Siegel (2010); for pessimism see Brogaard (2013) and Byrne’s contribution in Byrne and Siegel (2016).
²⁵ For vigorous debate about a recent set of empirical and methodological challenges to cognitive penetration, see Firestone and Scholl (2015).
²⁶ This example is discussed in Siegel (2012).
²⁷ We borrow the term from Huemer (2007).
seems to her as though this is what it means.²⁸ However, while this move might insulate the perceptualist from worries about high-level perceptual contents, it does nothing to help with worries about correctly capturing the rational status of the thinkers’ judgments. Conscious seemings can have all kinds of etiologies; if it merely consciously seems to Aisha that Selina is angry at her, why should this be any rational basis at all for concluding that Selina really is angry at her? The perceptualist is thus faced with a balancing act analogous to the one for the cognitivist discussed in Section 4, to temper its commitments about the actual psychological processes involved in the thinker’s exercise of her cognitive capacities without compromising its ability to account for the rational status of her reasoning.

Our most fundamental objection to the perceptualist strategy, however, is that it fails to adequately capture the inferential character of the experts’ thinking in the cases described in Section 2. The perceptualist strategy aims to subsume all of the cases under a single universal, non-subject-relative rational requirement like the following:

(PC) Rationality requires that: if one perceives (or it consciously seems to one) that p then one accepts that p.

According to the perceptualist, it is this rational requirement that sanctions Angela’s reasoning in Case 1: she perceives the utterance as meaning that Mr. Lehmann was not in a good mood, and the transition to the conclusion that this is what the utterance means is rationally appropriate for her, as per (PC). But as we noted when discussing the case, to say that Angela’s transition is rationally appropriate is not to say that the belief she arrives at is rational for her to hold. She might have very strong independent reasons to believe that the speaker could not have uttered a sentence with this meaning. (Perhaps she knows that the speaker has no idea who Mr. Lehmann is, or that the speaker could not possibly have any interest in this topic.) What is it rational for Angela to do in this kind of situation? This depends on further features of the case, of course, but one thing that it will in many cases be rational for her to conclude is that she was mistaken in having taken the speaker to utter sentence (1). (Perhaps the speaker uttered “Nehmann” rather than “Lehmann,” or “einer” rather than “keiner.”) Notice that this is exactly what we would expect if Angela’s reasoning is sanctioned by a requirement like (GA) from Section 3 (repeated below), as it is on our view.

(GA) Rationality requires that: if Angela accepts that the speaker uttered (1) then she accepts that the utterance meant that Mr. Lehmann was not in a good mood.

Angela can obey the requirement in (GA) by coming to accept that the utterance meant that Mr. Lehmann was not in a good mood. But as we noted above, she can also obey it by giving up her state of accepting that the speaker uttered (1).

²⁸ Fricker (2003) calls conscious seemings like these quasi-perceptions of meaning; see also Hunter (1998).
Both transitions are sanctioned equally by (GA), and—as we have just seen—both transitions would be rationally appropriate. By contrast, there is only one way that Angela can obey (PC), namely, by coming to accept that the utterance meant that Mr. Lehmann was not in a good mood. She cannot obey (PC) by giving up her perception (or conscious seeming) of the utterance, because her perception is not under her reflective control.²⁹ But even if she could, this would not explain why it is rationally appropriate for Angela to revise her attitude about which sentence was uttered. (PC) cannot sanction any revisions in Angela’s attitudes about which sentence was uttered; it is simply not concerned with any such attitudes.

Analogous problems can be raised for the perceptualist account of the other cases discussed in Section 2. For example, Aisha might have very good independent reason to doubt that Selina was embarrassed, and this can make it rationally appropriate for her to revise one or more of her beliefs about Selina and the chain of events that she had previously held. (Perhaps the other kids weren’t laughing nearly as much as she thought, or perhaps Selina was not as invested in her athletic reputation as Aisha had been assuming.) Or perhaps an amniotic DNA screening shows conclusively that Hannah’s fetus is male, so that Dr. White rationally concludes that her perception of the ultrasound image as having such-and-such features must have been mistaken. (PC) is powerless to explain the rational appropriateness of the thinker’s transition in cases like these.³⁰

One might worry that our objection ignores the defeasible nature of (PC), which is surely what the perceptualist intends. After all, rationality surely does not require that one take one’s current perception that p to settle the question whether p even in the face of good reason to suspect that one’s perception is malfunctioning or likely to be incorrect. And from the perspective of the perceptualist, isn’t that exactly what is happening in the kinds of cases just considered? However, our objection is not that (PC) sanctions transitions that it shouldn’t—an objection that could be rebutted by pointing out that the sanction (PC) provides is defeasible, and plausibly defeated in the cases at issue. Rather, our objection is that (PC) fails to sanction transitions that any adequate account should. Our objection relies on the observation that expert thinkers can draw rational connections between properties such as being an utterance of (1) or having such-and-such facial geometry, on the one hand, and properties like meaning that Mr. Lehmann was not in a good mood or being the face of Polly on the

²⁹ Kolodny (2005) argues that any genuine wide-scope rational requirement must be one that the thinker can obey in more than one way, as Angela can in the case of (GA) but cannot in the case of (PC). Also, Balcerak Jackson (2016) argues that the notorious “bootstrapping” worries for dogmatist accounts of perceptual justification arise out of commitment to wide-scope requirements like (PC). All of this suggests that the perceptualist should replace (PC) with a narrow-scope requirement like the following: If one perceives that p then: rationality requires that one accept that p. A narrow-scope requirement like this clearly does nothing to explain why it can be rationally appropriate for Angela to conclude that she misheard the utterance in the case described.
³⁰ This problem for the perceptualist strategy as applied to language understanding is developed in more detail in Balcerak Jackson (2017).
other. These rational connections are revealed by transitions from attitudes concerning the former to attitudes concerning the latter, but they are revealed no less by transitions in the other direction. Our account recognizes and explains these rational connections, as does the cognitivist account of Section 4—albeit in a way that we have argued is ultimately deeply problematic. But the perceptualist account fails to recognize them at all.
6. Conclusion

It is striking how persistent the tendency is, across the areas we have been discussing, to see questions about rationality in terms of a choice between cognitivist and perceptualist approaches. One sign of this is that dissatisfaction with one alternative is often treated as motivation for the other. For example, perceptualist explanations of language comprehension have become increasingly widely endorsed among epistemologists of language who are dissatisfied with explanations in terms of tacit knowledge of a theory of meaning. And worries about the high-level perceptual contents apparently called for by the perceptualist explanation of judgments such as Dr. White's ultrasound reading tend to motivate cognitivist views according to which such judgments are better construed as post-perceptual inferences grounded in the expert's (perhaps tacit) knowledge.

In our view, however, both strategies lead us in the wrong direction. The cognitivist strategy leads us in the direction of looking for special kinds of attitudes that can play the role of bridge premises; but as we saw in Section 4, the prospects for actually using such attitudes to explain rationality are dim. The perceptualist strategy leads us in the direction of trying to force the cases into the model of ordinary perceptual justification; but as we saw in Section 5, this ignores the fact that reasoning in the areas in question is a matter of tracing rational connections among propositions.

Perhaps, then, it is time to think seriously about what an alternative to both strategies would look like. The account developed in Section 3 is a promising avenue to explore. On that account, some of the rational requirements for a thinker arise because of the specific cognitive capacities she possesses. When she has some specialized capacity whose competent exercise in the circumstances will help her to settle some question under consideration, rationality requires her to exercise it competently and follow its dictates. It is rational requirements like these that sanction the thinker's reasoning in the cases examined in Section 2, even in the absence of any tacit bridge premise attitudes and even in the absence of any direct perceptual (or perception-like) access to the propositions in question.

This picture leaves many questions open, perhaps the most pressing being the question of exactly which sorts of cognitive capacities generate substantive rational requirements, and why. We speculate that the answer to these questions ultimately lies in a better understanding of epistemic normativity—that is, a better understanding of the facts that determine how it is appropriate for one to conduct oneself, epistemically
speaking, in various circumstances. But these are questions for future work. For now, we conclude with three observations about rational requirements that can be drawn from the present discussion.

First, it is extremely plausible that the substantive requirements generated by cognitive capacities are process rather than state requirements.³¹ What rationality requires of the expert is that she competently exercise her cognitive capacity in making transitions in thought in a certain domain. To do so is to go through a process that takes one from accepting some input propositions p₁, . . . , pₙ to an act of accepting whatever proposition one's cognitive capacity yields as the appropriate conclusion to draw from p₁, . . . , pₙ in the circumstances—or else, if one is unwilling to do so, to an act of abandoning or revising one's attitudes towards p₁, . . . , pₙ. A thinker must go through the process to satisfy this requirement; it is not enough merely to see to it, in one way or another, that one's overall state of mind conforms to a certain structural description. Thus if the account developed here is on the right track, there are at least some genuine rational requirements on how we are to reason, and not just requirements on what our overall state of mind is to be like at any given time.

Second, the rational requirements we have been discussing are in tension with a deflationary conception of rational requirements in general as merely helping to articulate what it is for a subject to count as rational. This might be the correct attitude to have towards formal coherence requirements like (MP) and (CI); perhaps we are rationally required to conform to basic principles of deductive inference simply because doing so is part of what it takes to count as rational rather than irrational. But reasoning as Angela does about the utterance of (1), as Aisha does about Selena, or as Dr. White does about the ultrasound, goes well beyond what it takes merely to count as rational. Subjects who lack their capacities are in no way irrational for failing to reason as they do.

Third, what are rational requirements, if not mere conditions in the definition of what counts as rational? One suggestion is that they should be seen as articulating ideals. Perhaps rational requirements are principles to which the ideal reasoner would perfectly conform. But this description is a poor fit for the substantive requirements generated by cognitive capacities. Even an ideal reasoner might not be a speaker of German. And there are no particular people that an ideal reasoner, as such, can be expected to be able to identify on the basis of their facial features. It would be bizarre to suggest that one should seek to improve one's ultrasound-reading skills insofar as one strives to approximate the ideal reasoner. It is more plausible to think that the rational requirements for a particular thinker are principles to which an ideally rational version of herself, with the capacities she actually possesses, would perfectly conform. If so, then it must be recognized that the pursuit of ideal rationality might lead each of us in a different direction.
³¹ The distinction is from Kolodny (2005).
References
Austin, J.L. 1975. How To Do Things with Words. Clarendon Press.
Azzouni, Jody. 2013. Semantic Perception: How the Illusion of a Common Language Arises and Persists. Oxford University Press.
Balcerak Jackson, Brendan. 2017. Against the perceptual model of utterance comprehension. Philosophical Studies, forthcoming.
Balcerak Jackson, Magdalena. 2016. Perceptual fundamentalism and a priori bootstrapping. Philosophical Studies v. 173, pp. 2087–103.
Balcerak Jackson, Magdalena and Balcerak Jackson, Brendan. 2012. Understanding and philosophical methodology. Philosophical Studies v. 161, pp. 185–205.
Balcerak Jackson, Magdalena and Balcerak Jackson, Brendan. 2013. Reasoning as a source of justification. Philosophical Studies v. 164, pp. 113–26.
Behrmann, Marlene et al. 2011. Impairments in face perception. In A. Calder et al. (eds.), The Oxford Handbook of Face Perception. Oxford University Press.
Brogaard, Berit. 2013. Do we perceive natural kind properties? Philosophical Studies v. 162, pp. 35–42.
Brogaard, Berit. 2018. In defense of hearing meanings. Synthese v. 195, pp. 2967–83.
Broome, John. 1999. Normative requirements. Ratio v. 12, pp. 398–419.
Broome, John. 2013. Rationality Through Reasoning. Wiley Blackwell.
Byrne, Alex and Siegel, Susanna. 2016. Rich or thin. In Bence Nanay (ed.), Current Controversies in the Philosophy of Perception. Routledge.
Carruthers, Peter. 2009. How we know our own minds: the relationship between mindreading and metacognition. Behavioural and Brain Sciences v. 32, pp. 121–82.
Carruthers, Peter. 2015. Perceiving mental states. Consciousness and Cognition v. 36, pp. 498–507.
Chomsky, Noam. 1980. Rules and Representations. New York: Columbia University Press.
Chudnoff, Elijah. 2018. The epistemic significance of perceptual learning. Inquiry: An Interdisciplinary Journal of Philosophy v. 61, pp. 520–42.
Dänzer, Lars. 2016. Sentence Understanding: Knowledge of Meaning and the Rational-intentional Explanation of Linguistic Communication. Mentis Publishing.
Dwyer, Susan and Pietroski, Paul. 1996. Believing in language. Philosophy of Science v. 63, pp. 338–73.
Evans, Gareth. 1981. Semantic theory and tacit knowledge. In S. Holtzman and C. Leich (eds.), Wittgenstein: To Follow a Rule. Routledge & Kegan Paul.
Firestone, Chaz and Scholl, Brian J. 2015. Cognition does not affect perception: evaluating the evidence for "top-down" effects. Behavioural and Brain Sciences v. 39, e229.
Fricker, Elizabeth. 2003. Understanding and knowledge of what is said. In A. Barber (ed.), Epistemology of Language. Oxford: Oxford University Press.
Gallagher, Shaun. 2008. Direct perception in the intersubjective context. Consciousness and Cognition v. 17, pp. 535–43.
Gallagher, Shaun and Zahavi, Dan. 2008. The Phenomenological Mind. Oxford: Routledge.
Goldman, Alvin. 2006. Simulating Minds. Oxford: Oxford University Press.
Gopnik, Allison and Meltzoff, Andrew. 1997. Words, Thoughts, and Theories. Cambridge: MIT Press.
Gopnik, Allison and Wellman, Henry. 1992. Why the child's theory of mind really is a theory. Mind and Language v. 7, pp. 145–71.
Gordon, Robert. 1996. "Radical" simulationism. In P. Carruthers and P. Smith (eds.), Theories of Theories of Mind. Cambridge: Cambridge University Press.
Green, Mitchell. 2010. II—Perceiving emotions. Aristotelian Society Supplementary Volume 84, pp. 45–61.
Gross, Steven. 2010. Knowledge of meaning, conscious and unconscious. Baltic International Yearbook of Cognition, Logic, and Communication v. 5, pp. 1–44.
Harman, Gilbert. 1984. Logic and reasoning. Synthese v. 60, pp. 107–27.
Heal, Jane. 1996. Simulation, theory, and content. In P. Carruthers and P. Smith (eds.), Theories of Theories of Mind. Cambridge: Cambridge University Press.
Heck, Richard. 2006. Reason and language. In C. Macdonald and G. Macdonald (eds.), McDowell and His Critics. Oxford: Blackwell Publications.
Hieronymi, Pamela. 2005. The wrong kind of reason. Journal of Philosophy v. 102, pp. 437–57.
Hieronymi, Pamela. 2011. Reasons for action. Proceedings of the Aristotelian Society v. 111, pp. 407–27.
Huemer, Michael. 2007. Compassionate phenomenal conservatism. Philosophy and Phenomenological Research v. 74, pp. 30–55.
Hunter, David. 1998. Understanding and belief. Philosophy and Phenomenological Research v. 58, pp. 559–80.
Kolodny, Nico. 2005. Why be rational? Mind v. 114, pp. 509–63.
Krueger, Joel. 2012. Seeing mind in action. Phenomenology and the Cognitive Sciences v. 11, pp. 149–73.
Lavelle, J.S. 2012. Theory-theory and the direct perception of mental states. Review of Philosophy and Psychology v. 3, pp. 213–30.
Lepore, Ernest. 1997. Conditions on understanding language. Proceedings of the Aristotelian Society v. 97, pp. 41–60.
Longworth, Guy. 2008. Linguistic understanding and knowledge. Noûs v. 42, pp. 50–79.
MacFarlane, John. 2004. In what sense (if any) is logic normative for thought? Unpublished manuscript.
Matthews, Robert. 2003. Does linguistic competence require knowledge of language? In A. Barber (ed.), The Epistemology of Language. Oxford University Press.
McNeill, William E.S. 2012. On seeing that someone is angry. European Journal of Philosophy v. 20, pp. 575–97.
McNeill, William E.S. 2015. Seeing what you want. Consciousness and Cognition v. 36, pp. 554–64.
O'Toole, Alice. 2011. Cognitive and computational approaches to face recognition. In A. Calder et al. (eds.), The Oxford Handbook of Face Perception. Oxford University Press.
Reddy, Vasudevi. 2008. How Infants Know Minds. Harvard University Press.
Siegel, Susanna. 2010. The Contents of Visual Experience. Oxford University Press.
Siegel, Susanna. 2012. Cognitive penetrability and perceptual justification. Noûs v. 46, pp. 201–22.
Siewert, Charles. 1998. The Significance of Consciousness. Princeton University Press.
Smith, Joel. 2010. Seeing other people. Philosophy and Phenomenological Research v. 81, pp. 731–48.
Smith, Joel. 2015. The phenomenology of face-to-face mindreading. Philosophy and Phenomenological Research v. 90, pp. 274–93.
Snedegar, Justin. 2017. Contrastive Reasons. Oxford University Press.
Watanabe, Takeo and Sasaki, Yuka. 2015. Perceptual learning: towards a comprehensive theory. Annual Review of Psychology v. 66, pp. 197–221.
Reasoning and Reasons
11 When Rational Reasoners Reason Differently
Michael G. Titelbaum and Matthew Kopec
Different people reason differently, which means that sometimes they reach different conclusions from the same evidence. We maintain that this is not only natural, but rational. In this chapter we explore the epistemology of that state of affairs. First we will canvass arguments for and against the claim that rational methods of reasoning must always reach the same conclusions from the same evidence. Then we will consider whether the acknowledgment that people have divergent rational reasoning methods should undermine one's confidence in one's own reasoning. Finally we will explore how agents who employ distinct yet equally rational methods of reasoning should respond to interactions with the products of each other's reasoning. We find that the epistemology of multiple reasoning methods has been misunderstood by a number of authors writing on epistemic permissiveness and peer disagreement.
1. Denying Uniqueness

We claim that there are multiple, extensionally non-equivalent, perfectly rational methods of reasoning. Nowadays the opponents of this view rally behind what has come to be called "the Uniqueness Thesis."

Thanks to Jochen Briesen, Rachael Briggs, Stewart Cohen, Juan Comesaña, Maria Lasonen-Aarnio, Sarah Moss, Clinton Packman, Baron Reed, Darrell Rowbottom, Elliott Sober, Peter Vranas, Roger White, and a number of anonymous referees for comments on earlier versions of this material; to audiences at the Australian National University, the final Bellingham Summer Philosophy Conference, the Konstanz Reasoning Conference, the University of Bristol, the University of Michigan—Ann Arbor, Northwestern University's Sawyer Seminar in Social Epistemology, the University of Colorado—Boulder, and the 2010 meeting of the American Philosophical Association's Pacific Division; and to the participants in Titelbaum's spring 2011 seminar at the University of Wisconsin—Madison on the Objectivity of Reasons. Titelbaum's work on this chapter was supported by a Vilas Associates Award from the University of Wisconsin—Madison and a Visiting Fellowship from the Australian National University. Kopec's work on this chapter was supported by an Australian Research Council DECRA Grant, DE180101119, for his project entitled "Making More Effective Groups."
To understand our disagreement with them, and the arguments they make for their side, it will help to analyze exactly what the Uniqueness Thesis says. When Richard Feldman introduced the Uniqueness Thesis in his (2007), he defined it as follows:

This is the idea that a body of evidence justifies at most one proposition out of a competing set of propositions (e.g., one theory out of a bunch of exclusive alternatives) and that it justifies at most one attitude toward any particular proposition. (p. 205)
Describing himself as "following Feldman," Roger White (2005)¹ argued for the Uniqueness Thesis, but defined it this way:

Given one's total evidence, there is a unique rational doxastic attitude that one can take to any proposition. (p. 445)
Those two theses do not say the same thing. In fact, Feldman's thesis says two distinct things (it's a conjunction), and White's thesis says something that is identical to neither of Feldman's conjuncts. The first thing Feldman says relates evidence to propositions, talking about which propositions are justified by a body of evidence. The second thing Feldman says relates evidence to attitudes. White's thesis then relates evidence to rational attitudes taken by people. So we really have three theses here:

Propositional Uniqueness. Given any body of evidence and proposition, the evidence all-things-considered justifies either the proposition, its negation, or neither.

Attitudinal Uniqueness. Given any body of evidence and proposition, the evidence all-things-considered justifies at most one of the following attitudes toward the proposition: belief, disbelief, or suspension.

Personal Uniqueness. Given any body of evidence and proposition, there is at most one doxastic attitude that any agent with that total evidence is rationally permitted to take toward the proposition.

Propositional Uniqueness is not identical to the first conjunct of Feldman's Uniqueness Thesis, but is entailed by that conjunct. Attitudinal Uniqueness is Feldman's second conjunct. Personal Uniqueness is White's Uniqueness Thesis.² We have framed

¹ Though Feldman's article was officially published in 2007, a draft had been circulating for a number of years before that. This explains how White could be "following Feldman" despite the fact that White's publication date came first.
² For reasons to prefer "at most one" formulations of Uniqueness, see Kopec and Titelbaum (2016, pp. 190–1). Notice also that even if some attitude towards a proposition is rationally permissible for an agent, it might also be rationally permissible for that agent to adopt no doxastic attitude toward that proposition, for instance because she has never entertained it. Personal Uniqueness concerns only how many attitudes are permissible for an agent to adopt toward a proposition once she assigns it some attitude. To streamline argumentation we will set aside this complication and assume that all agents under discussion have assigned attitudes to all relevant propositions.
the three theses in qualitative terms, assuming that the attitudes under study are belief, disbelief, and suspension of judgment. Analogous theses exist for other types of doxastic attitudes, such as quantitative degrees of belief. (Propositional: given any body of evidence and proposition, the evidence confirms the proposition to a specific degree; Attitudinal: the evidence justifies at most one specific degree of belief in the proposition; etc.) In what follows we'll jump between these different framings of the theses depending on whether we're discussing full beliefs or credences.

The three theses are arranged in the order in which many epistemologists argue from one to another. Feldman, for instance, seems to think that a body of evidence justifies belief in a proposition only if it justifies that proposition (and justifies disbelief only if it justifies the negation, etc.). So he moves from Propositional Uniqueness to Attitudinal Uniqueness. Feldman then assumes that rationality requires an agent to adopt the attitude supported by that agent's total evidence, which takes him to something like Personal Uniqueness (though he doesn't include this conclusion as a conjunct of his official Uniqueness Thesis). In general, each thesis does seem necessary for the ones that come after. It's difficult to maintain that a unique attitude is rationally required of any agent with a particular body of evidence (Personal Uniqueness) without tracing that requirement back to a unique relation between the evidence and that attitude (Attitudinal Uniqueness). It is then difficult to establish a unique relation between the evidence and attitude without relying on some unique relationship between the evidence and the proposition toward which that attitude is taken (Propositional Uniqueness).

When first exposed to the Uniqueness debate, many philosophers intuitively reject the thesis on the grounds that it's too cognitively demanding—especially in its degree-valued formulations. Perhaps Attitudinal Uniqueness is true and for any body of evidence there is a unique credence that evidence supports. But can we really expect agents to perfectly discern that credence, down to arbitrarily many decimal places? While the relation of evidence to attitudes may be precise, agents should be granted a bit of leeway in approximating rational attitudes. If evidence E justifies a credence in H of exactly 0.7, an agent could be rational while assigning H a credence anywhere in that vicinity. This position denies Personal Uniqueness while leaving Attitudinal and Propositional intact. (It therefore shows that while Attitudinal Uniqueness may be necessary for Personal, it is not sufficient.) Similarly, one could outline a position that denies Personal and Attitudinal Uniqueness while leaving Propositional intact. But we wish to deny Uniqueness on a much deeper level—we deny Propositional Uniqueness (in both its qualitative and quantitative forms), and thereby deny all the forms of Uniqueness above. We do this because we don't believe there are evidential support facts of the sort Propositional Uniqueness implies.

How can one deny the existence of facts about evidential support? It's important to see exactly what sort of facts we're denying. Propositional Uniqueness asserts the existence of a two-place function defined over all pairs of propositions. Assuming any
body of evidence can be represented as a conjunctive proposition, Propositional Uniqueness asserts the existence of a function that takes any ordered pair of evidence proposition and hypothesis proposition and returns what we might call a "justificatory status."³ (In the qualitative formulation that status is either justification of the proposition, anti-justification, or neither. In the degreed formulation the status is a numerical degree of support.) We're happy to admit that there may be some pairs of evidence and hypothesis that determine a justificatory status all on their own. At least one of the authors thinks this occurs in deductive cases: if the evidence entails the hypothesis, then it all-things-considered justifies that proposition; and if the evidence refutes the hypothesis then it justifies its negation. But deductive cases are a very special case among arbitrarily selected pairs of propositions. For many other evidence/hypothesis pairs, support facts obtain only relative to a third relatum; absent the specification of that third relatum, there simply is no matter of fact about whether the evidence justifies the hypothesis.⁴

The third relatum in question is a method of reasoning. Methods of reasoning are ways of analyzing evidence to draw conclusions about hypotheses. (We will also sometimes refer to them using White's (2005) and Schoenfield's (2014) terminology of "epistemic standards.")⁵ Some methods of reasoning, while distinct, are extensionally equivalent: given the same evidential inputs they will always yield the same outputs. For instance, you and I might both be perfect at addition, yet apply different algorithms in our heads to calculate sums. Yet many methods of reasoning are extensionally nonequivalent. We claim that a body of evidence supports a particular hypothesis only relative to a rational reasoning method that concludes that hypothesis from that evidence. And since there are multiple, extensionally nonequivalent rational reasoning methods, there isn't always a univocal fact of the matter about whether some evidence supports a particular hypothesis.

A version of this view is familiar to formal epistemologists: Subjective Bayesianism denies Propositional Uniqueness in exactly the manner we have been describing. In general, Bayesians hold that any rational agent can be represented as adhering to a particular "hypothetical prior" function cr_h. The agent's credences at a given time can be obtained by conditionalizing her hypothetical prior on her total evidence at that time. A body of total evidence E supports a hypothesis H for the agent just in case cr_h(H | E) > cr_h(H). Notice that facts about evidential support are therefore relative to the hypothetical prior of the agent in question. We can think of an agent's hypothetical prior as representing her epistemic standards—antecedent to the influence of any contingent evidence, the hypothetical prior encodes how an
³ If evidence is factive, then the conjunction representing an agent’s total evidence must always be logically consistent, so the function under discussion need not be defined for ordered pairs containing inconsistent evidence propositions. ⁴ Compare the discussion at Kelly (2014, pp. 308ff.). ⁵ Which may in turn be related to Lewis’s (1971) “inductive methods.”
agent would respond to any package of evidence she might encounter, and which bodies of evidence she would take to support which hypotheses.

Some Bayesians—we'll call them "Objective Bayesians"—believe there is a unique rational hypothetical prior.⁶ In that case, whether a body of evidence supports a hypothesis is simply a matter of what that hypothetical prior says about the pair. So while evidential support is relative to that hypothetical prior, we need not treat it as an additional input to the evidential support function, since it will always have a constant value (so to speak). If there is only one rational hypothetical prior, Propositional Uniqueness is true.

Yet many Bayesians ("Subjective Bayesians") believe multiple hypothetical priors are rationally acceptable. Two rational individuals could apply different hypothetical priors—representing extensionally nonequivalent epistemic standards—so that the same body of evidence supports a hypothesis for one of them but countersupports it for the other. For many proposition pairs, there simply are no two-place justification facts of the sort Propositional Uniqueness asserts.

Why would one take the seemingly radical step of denying Propositional Uniqueness and admitting multiple perfectly rational, extensionally nonequivalent reasoning processes? Each author of this chapter has his own reasons.

Kopec (2018), roughly speaking, views epistemic rationality as a subspecies of goal-oriented practical rationality. Among an agent's practical goals are various epistemic goals; it's then epistemically rational for the agent to hold those attitudes that constitute the most effective means of pursuing her epistemic goals. Different agents are permitted to have different epistemic goals, so rational agents may vary in the conclusions they draw from identical bodies of evidence.

Titelbaum (2010) argues that if there is a unique evidential support relation that extends beyond deductive cases, it must treat some predicates differently from others (think of "green" and "grue"). For agents to determine which bodies of evidence support which hypotheses, they must be able to differentiate the preferred predicates. If predicate preference must be determined from empirical facts, it will be impossible for agents to make that determination, since they must know which predicates are preferred before they can determine what the empirical evidence supports. So one is left with either an extreme externalism on which agents cannot determine what their evidence supports, or an extreme apriorism on which preferred predicates, natural properties, or some such can be discerned a priori. Titelbaum would rather deny Propositional Uniqueness than adopt either of those other extreme positions.⁷
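Since this hypothetical-prior machinery will recur below, it may help to see the relativity of support in a minimal worked form. The illustration is ours, and the numbers are chosen purely for concreteness: two probabilistically coherent hypothetical priors can disagree about whether a single body of evidence supports a hypothesis.

\[ cr_{h_1}(H) = 0.5, \quad cr_{h_1}(H \mid E) = 0.7, \quad \text{so} \quad cr_{h_1}(H \mid E) > cr_{h_1}(H); \]
\[ cr_{h_2}(H) = 0.5, \quad cr_{h_2}(H \mid E) = 0.3, \quad \text{so} \quad cr_{h_2}(H \mid E) < cr_{h_2}(H). \]

Relative to the first prior, E supports H; relative to the second, E countersupports it. Both priors can satisfy the probability axioms and agree on every deductive case (each assigns cr_h(H | E) = 1 whenever E entails H), so nothing in the Bayesian formalism by itself adjudicates between them.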
⁶ The "Subjective/Objective Bayesian" terminology has been used in a variety of ways in the Bayesian literature, and we don't want to wade into that history here. For purposes of this chapter one can treat our use of these terms as stipulative. A classic example of an Objective Bayesian position in our sense is Carnap's early theory of confirmation in his (1950). Meacham (2014) uses the term "Impermissive Bayesianism" for what we are calling "Objective Bayesianism."
⁷ Titelbaum (2010) argues against the existence of a three-place evidential support relation "evidence E favors hypothesis H₁ over hypothesis H₂." This allows the argument to address contrastivist views which deny the existence of two-place evidential support relations (E justifies H) but accept such three-place relations. Since contrastivism will not be at issue in this chapter, we will focus on arguments for and against a two-place relation.
2. Against Permissivism

Roger White calls any position that denies the Uniqueness Thesis "permissivist." We will now review some of the arguments against permissivism.⁸
2.1. Consensus

Many anti-permissivist arguments are motivated by concerns about rational consensus. Feldman, White, and others have been very concerned with cases of interpersonal disagreement—cases in which agents disagree with their peers about some important matter despite possessing the same (relevant, total) evidence with respect to it. Feldman writes about two detectives on the same criminal case, White about members of a jury. There seems to be a deep concern that if permissivism is correct some such confrontations may be ultimately unresolvable. Academics—like other professional seekers of information and understanding—spend a great deal of time disagreeing with each other, citing evidence in an attempt to bring others along to their own point of view. If permissivism is true, there may be cases in which each of two disagreeing agents will say that she's responding to the available evidence in a perfectly acceptable manner, and each agent will be correct. This raises the specter of in-principle unresolvable disagreements, and may make us wonder why we put so much effort into convincing our peers.

This concern is related to a long-standing worry about Subjective Bayesianism. Philosophers of science have worried that if Subjective Bayesianism is correct—if rational scientific inquirers may reason differently from the same experimental results—we will be hard-pressed to account for consensus among working scientists about which experimental results support which hypotheses. Moreover, when disagreements arise as to the proper interpretation of results, no resolution may be available, as each party's reasoning may be perfectly rational. Subjective Bayesianism (and permissivism in general) seems to undermine a desirable objectivity in science.⁹

When authors worry about consensus in science (and in reasoning more generally), it's often unclear which of a number of issues they are worrying about. First, they may be concerned to explain either descriptive or normative facts. Under the former heading, one wonders how to explain existing consensus in science about which theories are best supported by extant evidence. Unless scientists are by-and-large competently tracking an evidential support relation constant for all of them,
⁸ For a more comprehensive survey of arguments and motivations that have driven epistemologists to Uniqueness, see Titelbaum and Kopec (ms).
⁹ Kelly (2008) characterizes this notion of objectivity explicitly in terms of agreement: "Objective inquiry is evidence-driven inquiry, which makes for intersubjective agreement among inquirers."
there seems no way to explain the large amount of scientific agreement we observe.¹⁰ Notice that we can draw a further distinction here about precisely what data are to be explained. Are we meant to explain the fact that different groups of scientists, after inter-group consultation, come to agree on which hypotheses are supported? Or must we explain the fact that different groups of scientists, without consulting, independently favor the same hypotheses on the basis of similar bodies of evidence? Call these phenomena "descriptive agreement after consultation" and "descriptive agreement in isolation."

An immediate response to these descriptive concerns is to deny that consensus is all that common among working scientists (thereby denying the putative phenomena to be explained). If there is objectivity to science, it is revealed not by actual scientists' opinions, but instead by our presumption that they would reach consensus under ideal conditions.¹¹ This is a normative consensus concern—the notion that inquirers should draw the same conclusions from the same bodies of evidence. For instance, the great Subjective Bayesian L.J. Savage¹² writes of his opponents,

It is often argued by holders of necessary and objectivistic views alike that that ill-defined activity known as science or scientific method consists largely, if not exclusively, in finding out what is probably true, by criteria on which all reasonable men agree. The theory of probability relevant to science, they therefore argue, ought to be a codification of universally acceptable criteria. Holders of necessary views say that, just as there is no room for dispute as to whether one proposition is logically implied by others, there can be no dispute as to the extent to which one proposition is partially implied by others that are thought of as evidence bearing on it. (1954, p. 67)
Again, we can distinguish a norm that inquirers should agree in isolation from a norm that they should agree after mutual consultation. Later in this chapter we will demonstrate how consensus after consultation (both descriptive and normative) can be achieved on a permissivist position. This will show that consensus-after-consultation concerns provide no compelling argument for Uniqueness. That leaves the concern for normative consensus in isolation. But it's highly controversial that scientists working in isolation on the same evidence are rationally required to draw the same conclusion—the thesis would be severely contested by most historians and philosophers of science working in the wake of Kuhn (1970). More to the point dialectically, the claim that reasoners working individually would,
if rational, draw the same conclusions from the same evidence is tantamount to Personal Uniqueness. So it can hardly be used as a premise to argue for Uniqueness.
2.2. Justificatory arbitrariness

When confronted by the suggestion that rational agents with the same evidence might disagree because they have different epistemic standards (what he calls "starting points"), Richard Feldman writes,
It’s interesting that Feldman poses this as a challenge about agreement after consultation (“Once people have engaged in a full discussion . . . ”). Presumably, though, a more general point is being dramatized by the dialectical staging. If Uniqueness is true, exactly one method of reasoning is rationally correct, so there is no choice among methods for a rational agent to make. But if multiple methods are permissible it seems an agent must maintain the standard she does for some reason—the kind of reason she could cite in a confrontation with individuals employing different methods. The agent seems to need a reason not only to apply her own methods, but to prefer them to the other rational options.¹³ A permissivist may reply by denying that such reasons are required. On this line, an agent’s epistemic standards constitute the point of view from which she evaluates reasons and evidence. That point of view cannot have—and does not need— evidential support.¹⁴ Alternatively, the permissivist may grant that reasons are required for applying one (rationally permissible) epistemic standard rather than another, but permit such reasons to be non-evidential.¹⁵ This approach nicely fits views on which an agent’s methods of reasoning may depend on epistemic or practical goals. To get to Uniqueness, one needs not only the position that conflicting epistemic standards must be adjudicated on the basis of reasons, but also that such reasons must be evidential. After all, Uniqueness maintains that rational conclusions supervene on evidence; under Personal Uniqueness, rational agents with the same evidence will always draw the same conclusions. Thus Personal Uniqueness embodies a particularly strong
¹³ By using words like “choice” and “maintain” we don’t mean to suggest anything voluntaristic—an agent need not have chosen to adopt or maintain her epistemic standards at any particular point. An agent may possess a particular attribute (such as a moral code) for which she has reasons and for which she is justified despite never having explicitly chosen to adopt it. ¹⁴ Compare Schoenfield (2014, §2.2). ¹⁵ cf. Podgorksi (2016, p. 1928).
form of evidentialism.¹⁶ A theorist already committed to such evidentialism will have reason to endorse Uniqueness, but again we’ve found a premise that is too close to the conclusion to provide an independent argument. And absent a commitment to strong evidentialism, it’s difficult to see why an agent can’t justify her choice of epistemic standards on non-evidential grounds.¹⁷
2.3. Causal arbitrariness

Feldman's concern above was a concern for justificatory arbitrariness—a concern that once an agent recognizes her method of reasoning is just one among the rationally permitted many, she will be unable to maintain it without a specific kind of reason. Strictly speaking this is an attack on acknowledged permissive cases, not permissive cases in general. An acknowledged permissive case is one in which not only are multiple rational methods available, but the agent also recognizes that fact. Epistemologists such as Stewart Cohen (2013) and Nathaniel Sharadin (2015) have suggested that while unacknowledged permissive cases are possible, acknowledged permissive cases are not.¹⁸ While it may be true that multiple methods of reasoning are rational in a particular case, recognizing this multiplicity may be corrosive to our epistemic practices.

We've just seen that if the corrosion is supposed to come from justificatory arbitrariness—the lack of reasons for maintaining one standard rather than another—the permissivist has responses available. But another kind of arbitrariness may be of concern: we may worry that permissivism allows epistemically arbitrary causal factors to influence a rational agent's beliefs. Katia Vavova (2018) nicely articulates the concern about arbitrary causal influences on belief:

The fact that you were raised in this community rather than that one is neither here nor there when it comes to what you ought to believe about God, morality, or presidential candidates. Yet factors like upbringing inevitably guide our convictions on these and other, less charged, topics. The effect is not always straightforward—perhaps you wouldn't be so liberal if you hadn't been raised in a liberal household, or perhaps you wouldn't be such a staunch atheist if your parents hadn't been so profoundly religious—but it is disturbing either way. . . . It's tempting to think that we should believe what we do because of evidence and arguments—not because of where we were born, how we were raised, or what school we happened to attend.
¹⁶ See Kelly (2008), Ballantyne and Coffman (2012), Ballantyne and Coffman (2011), and Kopec and Titelbaum (2016) for precise discussion of the logical relations between Uniqueness and various forms of evidentialism. ¹⁷ A Uniqueness defender may allow the unique rational method of reasoning to be justified on the basis of both evidence and a priori considerations (with the assumption that such considerations do not vary across agents). Whether the a priori is itself evidential is a sticky issue. But either way, we wind up with the supervenience of rational conclusions on evidence and a strong evidentialist position. ¹⁸ Though see Ballantyne and Coffman (2012) for an argument that this position is unsustainable.
If that is right, however, and if such influences really are pervasive, then we are irrational in much of what we believe. (2018, pp. 134–5)¹⁹
If Uniqueness is true, then all rational agents have the same (or at least extensionally equivalent) epistemic standards, so it doesn't much matter how they got them. But if conflicting epistemic standards are rationally permissible, which standards are possessed by a given rational agent will almost certainly be influenced by epistemically arbitrary causal factors. Once the rational agent recognizes this influence, it seems to undermine the rationality of the beliefs recommended by those standards.

The trouble with this as an objection to permissivism is that even if Uniqueness is true, epistemically arbitrary causal factors still influence a rational agent's beliefs. According to Uniqueness a rational agent's beliefs supervene on her evidence. But arbitrary factors (such as the ones Vavova lists above) can influence what body of evidence an agent possesses. Uniqueness defenders don't see this as a challenge to the view that rationality requires beliefs to be responsive to evidence.²⁰ White, for instance, is highly sanguine about the chance events by which we come to have particular packages of evidence:

If I hadn't studied philosophy I would not believe that Hume was born in 1711. I would, if not disbelieve it, give little credence to that particular year being his birth date. And in fact I just learnt this fact by randomly flipping open one of many books on my shelf and reading where my finger landed. I was lucky indeed to be right on this matter! Of course there is nothing unsettling about this. There is nothing problematic about being lucky in obtaining evidence for one's belief. (2010, p. 597, emphasis in original)
And yet White is very concerned about the arbitrary events by which rational agents would come to have one epistemic standard rather than another if permissivism were true. Why the asymmetry? We can develop a proposal for how White sees the asymmetry by noting a point he makes repeatedly in a number of his writings. Here it’s important to understand that White believes in following one’s evidence for a very different reason than Feldman does. In their co-authored work on evidentialism, Conee and Feldman write of their evidentialist thesis EJ, “We do not offer EJ as an analysis. Rather it serves to indicate the kind of notion of justification that we take to be characteristically epistemic—a notion that makes justification turn entirely on evidence. . . . We believe that EJ identifies the basic concept of epistemic justification” (2004, pp. 83–4). At least when it comes to epistemic justification, Feldman takes the link between justification and evidence to hold on something like a conceptual level.
¹⁹ White (2010), Elga (ms), Schoenfield (2014), and Schechter (ms) also discuss the significance of agents’ epistemic standards being causally influenced by epistemically arbitrary factors. ²⁰ Ballantyne (2015) is very concerned about arbitrariness in the packages of evidence we receive, but he is not concerned to argue for Uniqueness.
White, on the other hand, holds evidence significant for rationality and justification because of a particular feature evidence possesses: truth-conduciveness. White writes, “In inquiry my first concern is to arrive at a true conclusion regarding the defendant’s guilt. And it is not clear why I should be so concerned with having my beliefs appropriately based unless this is conducive to the goal of getting things right” (2014, p. 316, emphasis in original). To remain neutral among various positions about what’s epistemically important, we have been using the term “epistemically arbitrary” without precisely defining it.²¹ But it’s fairly clear that, for White, causal processes are objectionably “arbitrary” when they have no tendency to pick out from among the standards available those that are more truth-conducive.²² Now not every epistemologist agrees with White that rationality is so focused on truth. But the position is fairly common, and adopting it is not obviously identical to adopting the Uniqueness Thesis, so we will grant it arguendo to see where it leads.²³
2.4. Evidence and truth

In that 2014 article White writes, "If there is evidence available strongly supporting one verdict, then it is highly probable that it supports the correct verdict" (p. 315); "In a non-permissive case where the evidence directs us to a particular conclusion, following the evidence is a reliable means of pursuing the truth" (p. 315); and "Common wisdom has it that examining the evidence and forming rational beliefs on the basis of this evidence is a good means, indeed the best means, to forming true beliefs and avoiding error" (p. 322). We could sum up these sentiments with the slogan "Most evidence isn't misleading." On the other hand, "In a permissive case . . . if either conclusion can be rationally held it would be natural to expect around a 50–50 split of opinions. In this case only about half of the inquirers will be correct in their conclusions" (p. 315). (This is why White repeatedly suggests that in a permissive case, applying a rationally permitted reasoning method would be no more likely to yield a true belief than flipping a fair coin.)

So perhaps this is the key disanalogy: it's not distressing that an agent's particular batch of evidence was selected for her on the basis of arbitrary factors, because most batches of evidence rationally lead us to the truth. It is, however, distressing that if permissivism is true an agent's epistemic standards were selected for her on the basis of arbitrary factors, because she's got no better chance of reaching the truth by enacting those standards than if she had flipped a fair coin.
Epistemologists often say—both in print and in conversation—that most evidence isn't misleading.²⁴ It is unclear to us not only why one should believe this slogan, but even what it is supposed to mean. Start with the fact that for a permissivist, there will in many cases be no such thing as what a body of evidence supports on its own, so a fortiori there will be no facts about whether what the evidence supports is true. In a permissive case it's the pairing of a body of evidence and a method of reasoning that indicates conclusions, and it's that pairing that can be assessed for accuracy. But let's see if we can support the slogan from a Uniqueness point of view, on which there are always facts about what conclusions a body of evidence supports on its own.

The next question to ask is whether evidence is factive. If the point of asserting that most evidence isn't misleading is to advise an agent seeking truth to base her beliefs on rational conclusions from what she takes to be her evidence, then it's unclear whether we can assume all evidence is factive. After all, in evaluating that advice we might want to take into account that most of the agents applying it will be doing so on the basis of bodies of (what they take to be) evidence that include falsehoods. Nevertheless, let's further grant the factivity of evidence so as to make the best case for the slogan we can. If evidence is factive, then at least evidence that entails a conclusion isn't misleading with respect to that conclusion. (Anything entailed by a truth is true!) Yet if White's goal in endorsing the slogan is to make evidence-following on a Uniqueness regime look more reliable than applying one's standards on a permissive view, entailing evidence isn't going to help him make that case. Any plausible permissivist view will require every rationally permissible epistemic standard to get the deductive cases right (at least if evidence is factive). For example, every hypothetical prior permitted by Subjective Bayesianism handles those cases correctly.

So now imagine Uniqueness is true, grant the factivity of evidence, and focus on non-deductive cases. What would we be asserting if we said that in most of those cases evidence is not misleading, and how might we support such a claim? First, the slogan involves a "most" claim, but suggests no particular measure over the infinite number of potential non-deductive evidential situations. Second, even once we've granted Uniqueness, any claim that evidence is non-misleading must still be relative—relative to the hypothesis we're wondering whether that evidence is misleading about. A given agent's body of total (factive) evidence is probably misleading with respect to some hypotheses and non-misleading with respect to others. The slogan defender must therefore hold that for most non-entailing evidence/hypothesis pairs, the evidence supports the truth about that hypothesis. Presumably
²⁴ Just to select an example that happens to appear in the same volume as White's later Uniqueness piece (and with no intention to pick on this author in particular), Comesaña (2014, p. 240) baldly asserts, "If everything [that] tells in favor of H is true, then most likely H is true" (where "everything" refers to an agent's total evidence).
to avoid worries about counting the infinite space of such pairs, the sloganeer will back off to some claim about bodies of evidence actually possessed by real humans and hypotheses actually entertained by them. But even within this limited domain our evidence is often misleading in a systematic and widespread fashion. It’s very plausible that, even when interpreted in perfectly rational fashion, humankind’s total evidence concerning the physical behavior of the smallest bits of matter was hugely misleading for most of human history. (And it’s probably the case that the bodies of evidence possessed by the majority of living humans are still misleading with respect to that domain.) The best position for the defender of the slogan that most evidence isn’t misleading is to maintain that with respect to everyday, useful hypotheses that come up in the ordinary course of life, most people possess bodies of evidence that generally aren’t misleading. This fact helps explain why we tend to have true beliefs in that domain and are able to navigate the world as successfully as we do. The trouble is, the permissivist can give a similar defense of the claim that with respect to everyday, useful hypotheses that come up in the ordinary course of life, most people possess reasoning methods that (when applied to the bodies of total evidence they tend to have) generate beliefs that tend to be true. Not only is this claim explanatory in its own right; it may also be explainable by natural and cultural selection. These days whole areas of cognitive science tease out how humans are wired to process bodies of evidence they typically receive and explain why such coded heuristics might have helped us get things right in the environments in which we evolved. For instance, Bayesian vision scientists hypothesize that the human visual system employs “priors” that process retinal stimuli on the assumption that lighting sources come from above.²⁵ This tends to be a fairly reliable assumption, and it’s obvious why we might have evolved to make it. In maintaining that typical reasoning methods are typically reliable,²⁶ the permissivist need not think that one unique method of reasoning is the most reliable (relative to typical bodies of evidence and typical hypotheses) and therefore rationally singled out. It’s very plausible to maintain (especially given the counting difficulties involved) that a number of reasoning methods do roughly equally well across typical evidence and hypothesis pairings, with some methods doing better on some occasions and some methods doing better on others.²⁷ ²⁵ For citations see Adams et al. (2004). (Thanks to Farid Masrour for help with this reference.) ²⁶ In fact, the permissivist need only maintain that typical rationally-permissible reasoning methods are typically reliable. ²⁷ We mention here only to reject as irrelevant the hypothetical prior that is guaranteed to have the highest reliability possible. Consider a hypothetical prior that, relative to any factive body of evidence, assigns credence 1 to every proposition that’s true in the actual world, and credence 0 to every proposition that’s actually false. While such a prior could certainly be defined—and God could even write out its values—it doesn’t represent a method of reasoning available in any meaningful sense to non-omniscient folk. Thus its existence doesn’t call into question the rational permissibility of reasoning methods we might actually employ that are admittedly less accurate.
We began this discussion because White wanted to treat arbitrary-standards and arbitrary-evidence cases asymmetrically. Arbitrary evidence was not worrisome because most evidence points toward the truth, so even if your evidence is selected arbitrarily you're likely to get accurate results. On the other hand, White suggested that if multiple standards are rationally permissible "only about half of the inquirers will be correct in their conclusions." Yet to the extent we can make sense of the claim that most evidence isn't misleading, it looks equally plausible to say that most standards aren't misleading.²⁸

Failing to consider—and then fully understand—the possibility that most permissible standards are truth-conducive is, perhaps, the most significant error made by participants on both sides of the Uniqueness debate. From the supposition that at least one rational reasoning method yields belief in a particular hypothesis and at least one yields belief in its negation (on the basis of the same body of evidence), many authors conclude that arbitrarily adopting a rational method gives the agent a fifty–fifty chance of believing the truth.²⁹ This is like learning that a bin contains at least one red and at least one green jellybean, then concluding that randomly selecting a bean must yield an equal chance of each color. Whether that's true depends not only on the randomness of the selection process, but also on the overall contents of the bin. If epistemically arbitrary causal factors select standards for you from a set most of whose members are reliable, the fact that your standards were arbitrarily selected from that set is no reason to question their reliability.

Here's where we stand dialectically: either the thesis that most evidence isn't misleading can be established (on some plausible interpretation), or it cannot. If it cannot, then epistemically arbitrary causal influences are a rampant problem for all truth-centric approaches to rationality, whether they subscribe to Uniqueness or not. If the thesis can be established, then we ought to be able to establish on similar grounds that most rational reasoning methods aren't misleading.³⁰ Once more, epistemically
²⁸ We’ve been treating the slogan that most evidence isn’t misleading as asserting a contingent, empirical truth. Yet there are views of evidence/rationality/justification on which the slogan can be defended a priori. These include reliabilist theories of justification, and some semantic responses to skepticism (e.g. Putnam 1981 and Chalmers 2007, esp. §7). Suffice it to say that if these approaches provide arguments for the slogan that most evidence isn’t misleading, they will also provide arguments for the position that most rationally permissible methods of reasoning aren’t misleading either. ²⁹ Like White, Schechter (ms, p. 7) assumes that if an agent’s epistemic standards were selected in an arbitrary fashion, that agent is unlikely to have reliable standards. A similar assumption is made by Premise P2a of Ballantyne (2012), which deems a belief irrational if there’s even one nearby possible world in which the agent reached the opposite conclusion based on the same evidence and cognitive capacities. Schoenfield (2014)—who argues for permissivism!—concedes to White that permissivism will undermine the truthconduciveness of epistemic standards in the mind of anyone who doesn’t already subscribe to one of those standards. And in a similar vein, Dogramaci and Horowitz (2016, p. 139) write, “under permissivism . . . rational reasoners cannot be ensured to be as reliable as they can be if uniqueness is true.” ³⁰ One might wonder who’s supposed to be doing all this establishing—must the agent know that most reasoning methods are reliable in order for it to be rational for her to apply one? (Cf. White’s discussion of “sticky pills” at 2005, p. 449) This question comes up for the reliability of evidence as much as it comes up for the reliability of methods of reasoning. But more importantly, this is a standard question in
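The jellybean point lends itself to a quick toy calculation. The following sketch (our own illustration in Python; the proportions and reliability figures are made-up assumptions about a hypothetical population of methods) shows why an arbitrary selection from a predominantly reliable set leaves an agent far better off than chance:

# The "jellybean bin" point in miniature. Suppose 8 of 10 rationally
# permissible methods are 90% reliable and 2 are only 40% reliable
# (made-up numbers), and causal happenstance hands you one uniformly
# at random. Your chance of ending up with a true belief:
reliabilities = [0.9] * 8 + [0.4] * 2
p_true_belief = sum(reliabilities) / len(reliabilities)
print(p_true_belief)  # about 0.8, nowhere near the alleged fifty-fifty

Knowing only that the bin contains at least one method of each kind tells you almost nothing; what matters is the bin's overall composition.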
3. The Reasoning Room
In Section 2 we offered some (admittedly fairly armchair) reasons to believe that in most everyday situations, most of the methods of reasoning employed by rational people will be generally truth-conducive, even if some of those methods are extensionally non-equivalent. But even if that's not true in most everyday situations, it is certainly true in some situations. For instance, though the scientific groups working with the IPCC (Intergovernmental Panel on Climate Change) use different methods of analysis, and often arrive at different predictions for the future of the climate, to the extent we can discern these things (using cross-validation and the like) it seems that each of them is generally reliable. So there's practical significance in asking questions about the epistemology of such groups, such as: Should an agent's awareness that her reasoning methods are just one of a number of rationally permitted, equally reliable methods undermine the conclusions of her reasoning? Should such an agent alter her opinions if she encounters another rational agent who's drawn opposite conclusions?
A number of epistemologists have made strong claims about how these questions should be answered in all permissive situations. We want to show that those answers misdiagnose permissive situations containing divergent but widely reliable reasoning methods. To make our case, we will focus on a highly artificial, highly regimented reasoning situation. Like many philosophical examples, this situation allows us to make efficient progress by reducing the number of unknowns and messy moving parts. Nevertheless, we believe the core epistemic features of the situation are shared with many real-life examples, and so allow us to draw important lessons for reasoning in real life.³²

³² See, for instance, the case study in Hicks (2015) of debates over yields of genetically modified crops. Hicks ultimately attributes the controversy to differing "epistemological standards" among the interlocutors, who "have radically different ideas . . . about what kinds of research should be carried out in order to support or undermine a claim" (p. 2).
Here’s the situation: You are standing in a room with nine other people. Over time the group will be given a sequence of hypotheses to evaluate. Each person in the room currently possesses the same total evidence relevant to those hypotheses. But each person has a different method of reasoning about that evidence. When you are given a hypothesis, you will apply your methods to reason about it in light of your evidence, and your reasoning will suggest either that the evidence supports belief in the hypothesis, or that the evidence supports belief in its negation. Each other person in the room will also engage in reasoning that will yield exactly one of these two results. This group has a well-established track record, and its judgments always fall in a very particular pattern: For each hypothesis, nine people reach the same conclusion about which belief the evidence supports, while the remaining person concludes the opposite. Moreover, the majority opinion is always accurate, in the sense that whatever belief the majority takes to be supported always turns out to be true. Despite this precise coordination, it’s unpredictable who will be the odd person out for any given hypothesis. The identity of the outlier jumps around the room, so that in the long run each agent is odd-person-out exactly 10 percent of the time. This means that each person in the room takes the evidence to support a belief that turns out to be true 90 percent of the time. We submit that in the Reasoning Room, it is rationally permissible for you to form the belief your reasoning method suggests is supported by the evidence. The same goes for each other agent in the room. And since at least one of those agents disagrees with you about what belief the evidence supports, this means that at least one agent is rationally permitted to adopt a belief that disagrees with yours. So we are interpreting this example as a permissive case.³³ (Later we’ll discuss how Uniqueness defenders might reinterpret the example.) Interpreted that way, the Reasoning Room is a case in which you and another agent have extensionally nonequivalent, rationally permissible methods of reasoning about a particular kind of evidence, yet each of those methods is truth-conducive in the long run. Interpreted permissively, the Reasoning Room puts the lie to a number of claims that have been made about the epistemology of permissivism. For instance, at one point White writes, ³³ It’s important to note that our position here is stronger than what Podgorksi (2016) calls “dynamic permissivism.” For Podgorski, permissivism is true because different agents are permitted to consider (i.e. reason about) different proper subsets of their evidence. Since these distinct subsets may point in different directions, Podgorski thinks there can be cases in which rational agents with the same total evidence reason to contradictory conclusions. We read the Reasoning Room as permissive in a much stronger sense: we take it that the agents in the room may rationally draw conflicting conclusions from their total evidence.
Supposing [permissivism] is so, is there any advantage, from the point of view of pursuing the truth, in carefully weighing the evidence to draw a conclusion, rather than just taking a belief-inducing pill? Surely I have no better chance of forming a true belief either way. If [permissivism] is correct carefully weighing the evidence in an impeccably rational manner will not determine what I end up believing; for by hypothesis, the evidence does not determine a unique rational conclusion. So whatever I do end up believing upon rational deliberation will depend, if not on blind chance, on some arbitrary factor having no bearing on the matter in question. (2005, p. 448)
If you behave in the Reasoning Room the way we have described, which belief you adopt may depend on an arbitrary factor with no bearing on the matter in question. Nine of you in the room will adopt one belief while the last adopts the opposite; all of you were assessing the same evidence; whatever caused you to have divergent methods of reasoning was not a function of the evidence. It’s also true that in this example the evidence does not determine a unique rational conclusion (because at least two rational people in the room reached opposite conclusions from that evidence). Yet it doesn’t follow that weighing the evidence in a rational manner has not determined what you ended up believing. After all, if you had made a reasoning mistake and misapplied your methods to that same evidence, you would’ve wound up believing something else. And it certainly does not follow that there is no advantage “from the point of view of pursuing the truth” to weighing the evidence over randomly taking a belief-inducing pill. Weighing the evidence according to your standards gives you a 90 percent chance of believing the truth, while taking a belief-inducing pill would give you only a 50 percent chance. White’s comparing reasoning in a permissive case to pill-popping is another way for him to suggest that any epistemically arbitrary choice among rival epistemic standards must leave the agent with a low probability of accurate belief. But we noted earlier that an arbitrary or chancy selection among a number of options, most of which are reliable, yields a high probability of believing truths. In the Reasoning Room, carefully weighing the evidence after arbitrarily selecting one of the available epistemic standards would leave you no better off with respect to the truth than popping a pill that gave you a 90 percent chance of accurate belief. If truth-conduciveness is our sole consideration, that’s not a very good objection to permissivism.³⁴ White does consider the possibility that epistemic standards could be reliable without being rationally unique. He writes:
³⁴ Of course, we could always up the number of agents in the Reasoning Room to bring the long-run reliability score as arbitrarily close to 100 percent as we’d like. Upping the numbers might also make some readers more comfortable with our conclusion that it’s rationally permissible for you to adopt the belief your reasoning says the evidence supports.
It might be suggested that rationally evaluating the evidence is a fairly reliable means of coming to the correct conclusion as to whether P, even if that evidence does not determine that a particular conclusion is rational. But it is very hard to see how it could.³⁵ Even if it is granted that a rational person needn’t suspend judgment in such a situation, just how rational evaluation of the evidence could reliably lead us to the truth in such a case is entirely mysterious. It would have to be by virtue of some property of the evidence whose reliable link to the truth is inaccessible to the inquirer. For if an inquirer is aware that the evidence has feature F, which is reliably linked to the truth of P, then surely it would be unreasonable to believe ∼P. It is hard to imagine what such a truth-conducive feature could be, let alone how it could act on an inquirer’s mind directing him to the truth. (2005, p. 448)
We maintain that in the Reasoning Room, the evidence (alone) does not determine that a particular conclusion is rational. That’s because we view the Reasoning Room as a permissive case, and in permissive cases evidence favors hypotheses only relative to particular methods of reasoning. Yet it is not mysterious in this case how rational evaluation of the evidence reliably leads the agents involved to the truth, and rational evaluation does not do so by virtue of some property whose reliable link to the truth is inaccessible to the inquirer. Recall the IPCC groups. Each of them applies a particular analysis technique to available climate data, checking whether those data have particular features, then using those features to make a prediction. Or think about the ten individual agents in the room, applying their idiosyncratic methods of reasoning. Perhaps one of them evaluates the hypothesis H by virtue of how it trades off simplicity with fit to the evidence. Or perhaps another agent leans toward H on the basis of a particular statistical significance test. As she applies that test, the relevant features of the evidence and hypothesis are perfectly accessible, and it’s not mysterious how such a test could reliably point her toward the truth (even if other tests might point her in a different direction). The Reasoning Room also allows the permissivist to address the distinction between permissive cases and acknowledged permissive cases. In the example it is rationally permissible to adopt the belief that your reasoning suggests is supported by the evidence. At the same time, you are absolutely certain there is at least one person in the room whose reasoning pointed her in the opposite direction. Following her reasoning is just as rationally permissible for her as following your reasoning is for you. So not only do we have two agents in the room who have rationally drawn opposite conclusions from the same evidence; each of them is aware of the existence of a person (indeed, a very nearby person!) with rational beliefs different from her own. Nevertheless, it remains rationally permissible for each agent to maintain her
³⁵ In one of the elided sentences White once more baldly asserts, “Evidence can be misleading—i.e. point us to the wrong conclusion—but this is not common.”
own opinions.³⁶ Contra Cohen and Sharadin, it's possible to have not only permissive cases but acknowledged permissive cases.
Now something different would happen if the two agents we were just discussing actually met and began to exchange views. Suppose your reasoning method suggests that your total evidence supports belief in H. So you form a belief in H. You then randomly select another occupant of the room, and ask her what she concluded. Suppose she tells you that as recommended by her reasoning method, she believes ∼H. We submit that it would then be rational to suspend judgment as to the truth of H. Here's an intuitive explanation why. Given what you know about the distribution of opinions in the room, you should expect before interacting with your colleague that she will agree with you about the hypothesis. Before interacting you believe H, so you believe eight out of the other nine people in the room also believe H, so you expect a randomly selected peer to agree with you. When you find that she believes ∼H instead, this is a surprising result, which leads you to take much more seriously the possibility that you are the only H-believer in the room. So it would be reasonable for you to suspend judgment on H.
For those who'd like a more precise argument, we offer a credal version of the Reasoning Room. Suppose the setup of the room is that for each hypothesis delivered, your reasoning will suggest either that the evidence supports a credence of 0.9 in the hypothesis or a credence of 0.1. You then (rationally permissibly) adopt the credence your reasoning says the evidence supports.³⁷ Suppose, for instance, that you assign credence 0.9 to H. You then randomly select another occupant of the room, and find that her reasoning led her to a 0.1 credence in H. At that point, some basic Bayesian reasoning will lead you to a credence of 0.5 in H.³⁸ This is the credal analog of suspending judgment.³⁹
³⁶ Cf. Podgorski (2016, p. 1931). Notice also that if we added to the Reasoning Room that the ten reasoning methods were somehow arbitrarily shuffled and assigned to the agents at random, we would have an explicit case in which being aware that your standards are arbitrarily assigned does not defeat the attitudes endorsed by those standards.
³⁷ The credal case allows us to say more about why it's a good idea to adopt the attitude your reasoning says the evidence supports. In Bayesian terms, this policy has the advantage of being perfectly "calibrated" in the long run. Moreover, if we measure accuracy by a proper scoring rule, it's the policy that maximizes long-run expected accuracy.
³⁸ This is a straightforward consequence of Bayes' Theorem. Writing D for the datum that your randomly selected peer assigns credence 0.1 to H:
\[
cr(H \mid D) = \frac{cr(D \mid H)\,cr(H)}{cr(D \mid H)\,cr(H) + cr(D \mid {\sim}H)\,cr({\sim}H)} = \frac{\frac{1}{9}\cdot\frac{9}{10}}{\frac{1}{9}\cdot\frac{9}{10} + 1\cdot\frac{1}{10}} = \frac{1/10}{2/10} = \frac{1}{2}
\]
³⁹ What if your randomly selected peer turns out to have the same credence as you in H? Learning of her credence should increase your confidence in the hypothesis above 0.9. (In fact, your credence should go all the way to 1!) This is an instance of an effect noted independently by Casey Hart and by Easwaran et al. (2016). (The latter call the effect “synergy.”)
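To see the arithmetic of the credal Reasoning Room at a glance, here is a short sketch in Python (our own illustration; the simulation design and variable names are assumptions we introduce, not part of the example as stated in the text):

import random

# Toy check of the credal Reasoning Room: ten agents, one uniformly
# random outlier per round, and the majority verdict is always true.
# "You" are agent 0; you are in the majority whenever you are not
# the outlier, and the majority is always right.
random.seed(0)
ROUNDS = 100_000
in_majority = sum(random.randrange(10) != 0 for _ in range(ROUNDS))
print("Long-run accuracy:", in_majority / ROUNDS)  # approximately 0.9

# Bayesian update after sampling one of your nine peers at random.
# M = "you are in the majority" (prior 9/10); D = "the sampled peer
# disagrees with you": P(D|M) = 1/9, P(D|not-M) = 1.
prior_m = 9 / 10
p_d_m, p_d_notm = 1 / 9, 1.0
post_disagree = (p_d_m * prior_m) / (p_d_m * prior_m + p_d_notm * (1 - prior_m))
print("Credence in H after disagreement:", post_disagree)  # 0.5

# If the sampled peer instead agrees: P(agree|not-M) = 0, so the
# posterior climbs to 1 (the "synergy" effect of footnote 39).
p_a_m, p_a_notm = 8 / 9, 0.0
post_agree = (p_a_m * prior_m) / (p_a_m * prior_m + p_a_notm * (1 - prior_m))
print("Credence in H after agreement:", post_agree)  # 1.0

The 90 percent figure also makes White's pill comparison vivid: following your method is a 0.9-reliability policy, while the pill is a 0.5-reliability one.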
The Reasoning Room therefore refutes Thomas Kelly's claim that if permissivism is true, there can be no reason for an agent to change her attitudes upon encountering a peer who disagrees.⁴⁰ Kelly argues for this claim by describing a case in which I assign a credence of 0.7 to a hypothesis on the basis of my evidence, while admitting it would be equally reasonable to assign a slightly lower credence to that hypothesis on the basis of the same evidence. You, meanwhile, assign a credence slightly lower than 0.7 to the hypothesis on the basis of that evidence, while admitting it would be equally reasonable to assign exactly 0.7. We then meet and exchange views. Responding to the suggestion that after the exchange we should adjust our credences towards each other's, Kelly writes,
That seems wrong. After all, ex hypothesi, the opinion that I hold about [the hypothesis] is within the range of perfectly reasonable opinion, as is the opinion that you hold. Moreover, both of us have recognized this all along. Why then would we be rationally required to change? (2010, p. 119)
The Reasoning Room provides a straightforward answer to Kelly’s rhetorical question.⁴¹ In the credal version of the example you initially assign one credence while being perfectly aware that at least one individual in the same room (entirely rationally) makes the diametrically opposite assignment. Upon randomly selecting an individual from the room and finding out that she made that opposite assignment, it’s rational for you to split the difference between her initial credence and yours. This does not require denying that either her initial assignment or yours was rational given the evidence each of you had at that time. It merely requires admitting that in light of your new total evidence (which includes information about the attitudes of your randomly selected peer), the probability of H is 1/2. This change is motivated not by finding any rational fault in one’s previous attitude, but instead by coming to have evidence that makes a new attitude look more accurate, or truth-conducive.⁴²
⁴⁰ A similar suggestion seems to be made by Feldman at (2007, pp. 204–5). White (2010, n.7) also writes, "If we really think there are [epistemically permissive] cases then even meeting an actual disagreeing peer seems to pose no challenge to one's belief." We will focus on Kelly because he goes on to provide an argument for his claim. (Thanks to Ballantyne and Coffman 2012 for the additional citations.)
⁴¹ As Christensen (2016) notes, there are two importantly different kinds of peer disagreement cases. The peer disagreement literature often proceeds under the assumption of Uniqueness, and so assumes that when individuals with the same evidence disagree it must be because one of them has made a mistake in applying the correct epistemic standards to that shared evidence. (This is why peer disagreement cases are often analyzed alongside cognitive malfunction cases.) But in permissive cases there can also be disagreement between agents who have applied their standards correctly to the same evidence, yet happen to have differing epistemic standards. This is the type of case Kelly considers, and the type of case we will be discussing. (For what it's worth, Titelbaum's (2015) argument against conciliating in peer disagreement cases applies only to the other type of disagreement, in which the disagreeing parties share epistemic standards.)
⁴² Ballantyne and Coffman (2012) argue against Kelly that it can make sense to split the difference upon encountering a disagreeing peer in a permissive case if neither of the parties initially realized that the case was a permissive one. Christensen (2009) argues that splitting the difference may be sensible when an agent doubts she has applied her own epistemic standards rationally. The Reasoning Room establishes the stronger thesis that splitting the difference can be rational even in antecedently acknowledged permissive cases where both parties know no rational error has occurred. Notice also that splitting the difference in the Reasoning Room doesn't involve rejecting one's old epistemic standards and somehow adopting new ones. Instead, you have a constant set of epistemic standards throughout the example that recommend one attitude towards H before any interaction has occurred, then a different attitude if particular evidence about that interaction comes to light. The epistemic standards one applies in isolation, while yielding different results than someone else's standards in isolation, may nevertheless direct one to reach agreement with that someone after consultation.
The Reasoning Room also refutes a claim made by Stewart Cohen (among others).⁴³ Cohen writes,
Note that I do not need to encounter a peer at a different credence for there to be accuracy pressure on my credence. Simply recognizing a rational credence different from my own is enough to undermine the rationality of my credence. . . . In such a case, the same pressure exists to revise in the direction of the other credence. (2013, p. 103, emphases added)

⁴³ See, for instance, Kelly (2005, §5).
In the credal Reasoning Room you are certain before interacting that another rational agent assigns a different credence than your own. This exerts no pressure on you to change your credence of 0.9. Yet actually encountering that rational peer pressures you to drop your credence to 0.5. There can be a significant difference between knowing one is in a permissive case and actually uncovering a particular individual with whom one disagrees. It's one thing to know that at least one person in a room disagrees with you. It's another thing to randomly select a peer and find that she disagrees. Such an encounter suggests that disagreement might be not just present, but representative, in which case your opinions should change.
We have just seen that if our interpretation of the Reasoning Room is correct, the example accomplishes a number of important things: it refutes a number of charges made against permissivism by White and others, it establishes the possibility of acknowledged permissive cases, and it shows that conciliating in the face of peer disagreement can be compatible with permissivism. Our interpretation assumes that the Reasoning Room is a permissive case, which runs counter to the Uniqueness Thesis. So how might a Uniqueness defender respond to the example? There are a couple of options.
First, the Uniqueness theorist might agree that the ten agents in the Reasoning Room all apply different methods of reasoning to the same evidence. In that case (the Uniqueness theorist will say), at most one of those methods is the uniquely correct reasoning method, and when the agents differ in their attitudes toward H at least one of them is irrational in doing so. While this response is available to the Uniqueness defender, it is not particularly interesting at this stage of the dialectic. The point of the Reasoning Room is to demonstrate that if one adopts a permissivist reading of the example, then various conclusions often imputed to permissivism need not follow. Simply denying permissivism as it applies to the example misses the point.
But there's a second, more interesting response available. The Uniqueness theorist might argue that the agents in the Reasoning Room reach different conclusions about
H not simply because they have different reasoning methods, but because they are responding to different bodies of total evidence. When you are given hypothesis H to consider, reason through your evidence, and judge that it supports belief in H, your total evidence comes to include the fact that you have reasoned from the original evidence to H. This fact is not possessed by the other agents in the room, so your total evidence differs from theirs. Most importantly, your total evidence differs from that of an agent who has reasoned from the original evidence to ∼H. (Meanwhile that agent possesses evidence you lack about the judgment rendered by her own reasoning.)⁴⁴
Unlike the first Uniqueness defender, this Uniqueness theorist grants that the varying attitudes adopted towards H by the agents in the Reasoning Room are rationally permissible. But those differing attitudes are permissible because they are assigned relative to different bodies of total evidence. So the distinction between Uniqueness and permissivism plays no role in the Reasoning Room, and the example demonstrates nothing about the commitments of permissivism.
Again, we have to be careful about the dialectic here. The permissivist offers the Reasoning Room as a case in which the agents' differing reasoning methods lead them to different conclusions, while the Uniqueness theorist attributes the different conclusions to differences in total evidence. Depending on one's definition of "evidence," one could squabble about whether facts concerning one's own reasoning may count as evidence. But we prefer to avoid such definitional squabbles by noting that the really important question is whether facts about one's own reasoning are part of one's relevant total evidence. Out of all of an agent's evidence, only what's relevant to a hypothesis may rationally influence her attitudes, and that relation is determined by the agent's epistemic standards. To deny that the Reasoning Room illustrates permissivist commitments, the Uniqueness theorist must establish in a manner acceptable to permissivists that all rationally permissible epistemic standards treat facts about one's own reasoning concerning a hypothesis H as evidence relevant to H.⁴⁵ That strikes us as a tall order.
In fact, matters are even worse for the Uniqueness defender. Because it seems to us that if one is going to take a restrictive view of what's rationally permissible in the Reasoning Room, one ought to reach the conclusion that each agent's evidence about her own reasoning is not relevant to determining her attitude toward H. To see why, let's very carefully review who has what evidence at what times in the example.

⁴⁴ Though he doesn't endorse it, Feldman discusses the proposal that an agent's "strong sense or intuition or 'insight' that the arguments, on balance, support her view" counts as evidence for that agent (Feldman 2007, p. 207). He attributes a similar idea to Rosen (2001, p. 88).
⁴⁵ Just to be crystal clear why this conclusion is required: suppose there are at least two distinct rationally permissible epistemic standards that treat facts about H-reasoning as irrelevant to H. Then we could build a Reasoning Room case in which agents with those two standards reach different conclusions about H, and the differences would not be attributable to the differences in their total evidence generated by their awareness of their own reasoning.
(Perhaps the Uniqueness theorist could triumph by arguing that even if there are many permissible standards, only one of them treats facts about H-reasoning as irrelevant to H. But that seems an awfully implausible position.)
Initially, before the hypothesis is provided and any reasoning is performed, everyone in the room shares a common body of total evidence we'll call E. You then receive the hypothesis H, reason about it, and judge that E supports belief in H. At that point your total evidence is E′: the conjunction of E with the fact that you have judged E to support H.⁴⁶ In the meantime, at least one of your peers in the room has taken E, reasoned about it, and concluded that E supports belief in ∼H. So her total evidence is E∗: the conjunction of E with the fact that she has judged E to support ∼H.⁴⁷
If Uniqueness is true, there must be a fact of the matter about whether E supports belief in H or ∼H. Let's suppose (without loss of generality) that in fact, E supports belief in ∼H. In other words, your reasoning has led you to a false judgment about what E supports. In order for the Uniqueness supporter to accept as rational the attitudes we've suggested for each agent at each stage of the example, the Uniqueness supporter will have to say that although E supports belief in ∼H, your belief in H after engaging in your reasoning is rational because E′ supports belief in H. In other words, while E points to belief in ∼H, your falsely judging the opposite, then adding a fact about the content of that judgment to your total evidence, makes it rational for you to believe H.
This is a truly bad idea. Our Uniqueness theorist has now embraced a curious theory of evidential bootstrapping, on which an agent, by falsely judging that her total evidence supports some conclusion, can thereby make it the case that her (new) evidence does indeed support that conclusion. While this is bad enough, consider further your attitude, after performing your reasoning, toward the proposition that E supports belief in H. What attitude toward this second-order proposition is supported by E′? If E′ supports belief in this proposition, then we have a false proposition made rational to believe by the fact that you have judged it to be true. On the other hand, if E′ does not support belief in the second-order proposition,⁴⁸ then you continue to rationally believe H on the basis of a judgment that your current evidence does not endorse.⁴⁹

⁴⁶ It's significant here that as we envision the Reasoning Room scenario (in both its belief and credence versions), your initial determination about H is made entirely on the basis of first-order evidence E. The facts in the example about the track records of the individuals involved (including yourself ) are there only to help you recognize that you're in a permissive case, and to drive your reaction to the discovery of a disagreeing peer. If we wanted we could purify this issue by structuring the example so that you gain the track-record information only after forming a judgment about how your first-order evidence bears on H.
⁴⁷ One might worry that this reading assumes a great deal of introspection on your part: that whenever you judge a body of evidence to support a hypothesis, you at the same time notice that you have done so, and the fact that you have done so is added to your evidence. The Uniqueness theorist's reading of the example could be defanged by suggesting that this sort of introspective awareness isn't always present, and by stipulating that the Reasoning Room is one case in which it isn't. But it seems to us that the Uniqueness theorist's reading is already a bad idea independently of this consideration, so we won't further pursue the introspection line here.
⁴⁸ As Titelbaum (2015) argues, it cannot.
⁴⁹ Here's another reason why this reading is a bad idea. We usually think that if something is an important piece of evidence for a conclusion, that evidence can be explicitly cited in favor of the conclusion. In the case at hand a crucial piece of evidence for H is the fact that you have judged E to support H (after all, without that fact in the body of total evidence, your evidence didn't support H). Yet would anyone ever cite, as part of their evidence for a hypothesis, the fact that they themselves judged their evidence to support it?
None of these positions is absolutely indefensible, but all of them seem tremendously awkward. Moreover, a Uniqueness defender need not accept them in order to maintain the Uniqueness Thesis. The Uniqueness defender bites these bullets only if she insists that were there any permissive cases, the Reasoning Room would not be one of them (so that the Reasoning Room cannot be used to assess the commitments of permissivism). Whether it's worth it for the Uniqueness theorist to make this move depends on what motivates her to believe in Uniqueness. For example, a Uniqueness defender driven by concerns about objectivity and/or consensus will not want the proposed reading of the Reasoning Room. Suppose we maintain Uniqueness for the Reasoning Room by counting facts about an agent's reasoning on a hypothesis as evidence relevant to that hypothesis. Then why not apply the same reading to rational scientific inquirers operating in isolation on the same body of empirical data? The moment one scientist has a thought about the significance of those data not shared by the other inquirers, her evidence will diverge from theirs and allow her to (rationally) reach different conclusions. The Uniqueness defender's motivating thought that rational scientists confronted with the same data should draw the same conclusions will fall by the wayside.⁵⁰

⁵⁰ Since White seems very much motivated by consensus concerns, he should be uncomfortable with this Uniqueness-consistent reading of the Reasoning Room. White also endorses the principle that "a belief can always rationally survive learning the epistemic value of one's evidence" (2005, p. 450). Yet it does not seem under this Uniqueness reading of the Reasoning Room that your belief in H when your evidence is E′ survives learning the true epistemic value of the evidence E.
4. Conclusion
The foregoing discussion has revealed a great deal about the epistemology of divergent reasoning methods. While we cannot conclusively establish that real-life reasoning methods are generally reliable, we have seen that cases in which extensionally non-equivalent rational methods are reliable provide important counterexamples to many charges that have been made against permissivism. Such cases also show that arbitrary causal influences on methods of reasoning need not be undermining, and may help explain why rational inquirers come to agreement after consultation.
Might there be other reasons for an agent to worry about the possibility that reasoning methods distinct from her own might yield opposing rational conclusions? We will close by raising one more idea that we've sensed floating through the Uniqueness literature. Concerns about objectivity often mask concerns for authority. Permissivism (especially in acknowledged permissive cases) requires the agent to maintain a sort of equanimity about the variety of rationally permissible methods of reasoning. Yet
while recognizing that her own methods are but one rationally permissible option among many, the agent is nevertheless supposed to treat those methods as authoritative—normative for her own case. Permissivism seems to create a tension between respecting other methods as equally valid and ceding the necessary authority to one's own.⁵¹
It's important not to commit a level confusion here. Agents adopt doxastic attitudes towards propositions—propositions that often concern objective facts in the world, beyond any ability of the agent to affect their truth-value. But the attitude adopted (belief or disbelief, high or low credence) is a subjective feature of the agent, not part of the attitude's propositional content. It does not automatically follow from the objectivity of what's believed that there is any objectivity to the norms for belief.
Still, our beliefs and credences play a serious role in our cognitive lives; beliefs in particular embody how we take the world to be. White and Kelly both consider whether permissivism requires "a departure from very natural ways of thinking about evidence and rationality."⁵² It may be that in order to reason, and in order to properly embrace the conclusions of reasoning, we must take that reasoning to have a kind of authority that is possible only if it is uniquely correct.⁵³ There's a deep-seated tension in permissivism between rational respect and normative authority; perhaps that tension supplies the best motivation for the Uniqueness Thesis.

⁵¹ Thanks to Paul Boghossian for discussion on this point.
⁵² The phrase is from White (2014, p. 315); Kelly discusses it at his (2014, p. 309).
⁵³ Compare David Enoch's argument that realism about normative facts is indispensable for rational deliberation (2011, ch. 3). Something like the tension we're pointing to may also be at play in Robert Mark Simpson's "arbitrariness objection" to permissivism (Simpson 2017) and Jonathan Weisberg's "Instability Problem" (Weisberg ta).
References
Adams, W. J., E. W. Graf, and M. O. Ernst (2004). Experience can change the "light-from-above" prior. Nature Neuroscience 7, 1057–8.
Ballantyne, N. (2012). The problem of historical variability. In D. Machuca (ed.), Disagreement and Skepticism, Routledge Studies in Contemporary Philosophy, pp. 239–59. Routledge.
Ballantyne, N. (2015). The significance of unpossessed evidence. The Philosophical Quarterly 65, 315–35.
Ballantyne, N. and E. Coffman (2011). Uniqueness, evidence, and rationality. Philosophers' Imprint 11, 1–13.
Ballantyne, N. and E. Coffman (2012). Conciliationism and uniqueness. Australasian Journal of Philosophy 90, 657–70.
Carnap, R. (1950). Logical Foundations of Probability. University of Chicago Press.
Chalmers, D. (2007). The Matrix as metaphysics. In T. Gendler, S. Siegel, and S. Cahn (eds.), The Elements of Philosophy. McGraw-Hill.
Christensen, D. (2009). Disagreement as evidence: the epistemology of controversy. Philosophy Compass 4, 756–67.
Christensen, D. (2016). Conciliation, uniqueness, and rational toxicity. Noûs 50, 584–603.
Cohen, S. (2013). A defense of the (almost) equal weight view. In J. Lackey and D. Christensen (eds.), The Epistemology of Disagreement: New Essays, pp. 98–120. Oxford University Press.
Comesaña, J. (2014). Reply to Pryor. In M. Steup, J. Turri, and E. Sosa (eds.), Contemporary Debates in Epistemology, 2nd ed., pp. 239–43. Wiley Blackwell.
Conee, E. and R. Feldman (2004). Evidentialism. Oxford University Press.
de Finetti, B. (1972). Probability, Induction, and Statistics: The Art of Guessing. John Wiley & Sons.
Dogramaci, S. and S. Horowitz (2016). An argument for uniqueness about evidential support. Philosophical Issues 26, 130–47.
Earman, J. (1992). Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. MIT Press.
Easwaran, K., L. Fenton-Glynn, C. Hitchcock, and J. D. Velasco (2016). Updating on the credences of others: disagreement, agreement, and synergy. Philosophers' Imprint 16, 1–39.
Elga, A. (ms). Lucky to be rational. Unpublished paper presented at the Bellingham Summer Philosophy Conference on June 6, 2008.
Enoch, D. (2011). Taking Morality Seriously: A Defense of Robust Realism. Oxford University Press.
Feldman, R. (2007). Reasonable religious disagreements. In L. M. Antony (ed.), Philosophers without Gods: Meditations on Atheism and the Secular Life. Oxford University Press.
Hicks, D. J. (2015). Epistemological depth in a GM crops controversy. Studies in History and Philosophy of Biological and Biomedical Sciences 50, 1–12.
Kelly, T. (2005). The epistemic significance of disagreement. Oxford Studies in Epistemology 1, 167–96.
Kelly, T. (2008). Evidence. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, fall 2008 ed.
Kelly, T. (2010). Peer disagreement and higher-order evidence. In R. Feldman and T. A. Warfield (eds.), Disagreement, pp. 111–74. Oxford University Press.
Kelly, T. (2014). Evidence can be permissive. In M. Steup, J. Turri, and E. Sosa (eds.), Contemporary Debates in Epistemology, 2nd ed., pp. 298–312. Wiley Blackwell.
Kopec, M. (2018). A pluralistic account of epistemic rationality. Synthese 195, 3571–96.
Kopec, M. and M. G. Titelbaum (2016). The uniqueness thesis. Philosophy Compass 11, 189–200.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions, 2nd ed. University of Chicago Press.
Lewis, D. (1971). Immodest inductive methods. Philosophy of Science 38, 54–63.
Meacham, C. J. G. (2014). Impermissive Bayesianism. Erkenntnis 79, 1185–217.
Podgorski, A. (2016). Dynamic permissivism. Philosophical Studies 173, 1923–39.
Putnam, H. (1981). Reason, Truth, and History. Cambridge University Press.
Rosen, G. (2001). Nominalism, naturalism, philosophical relativism. Philosophical Perspectives 15, 69–91.
Savage, L. J. (1954). The Foundations of Statistics. Wiley.
Schechter, J. (ms). Luck, rationality, and explanation: a reply to Elga's "lucky to be rational." Unpublished manuscript.
Schoenfield, M. (2014). Permission to believe: why permissivism is true and what it tells us about irrelevant influences on belief. Noûs 48, 193–218.
Sharadin, N. (2015). A partial defense of permissivism. Ratio 28 (2), 57–71.
Simpson, R. M. (2017). Permissivism and the arbitrariness objection. Episteme 14, 519–38.
Titelbaum, M. G. (2010). Not enough there there: evidence, reasons, and language independence. Philosophical Perspectives 24, 477–528.
Titelbaum, M. G. (2015). Rationality's fixed point (or: in defense of right reason). In T. S. Gendler and J. Hawthorne (eds.), Oxford Studies in Epistemology, Volume 5, pp. 253–94. Oxford University Press.
Titelbaum, M. G. and M. Kopec (ms). Plausible permissivism. Unpublished manuscript.
Vavova, K. (2018). Irrelevant influences. Philosophy and Phenomenological Research 96, 134–52.
Weisberg, J. (ta). Could've thought otherwise. Philosophers' Imprint. Forthcoming.
White, R. (2005). Epistemic permissiveness. Philosophical Perspectives 19, 445–59.
White, R. (2010). You just believe that because. . . . Philosophical Perspectives 24, 573–615.
White, R. (2014). Evidence cannot be permissive. In M. Steup, J. Turri, and E. Sosa (eds.), Contemporary Debates in Epistemology, 2nd ed., pp. 312–23. Wiley Blackwell.
Williams, B. (1986). Ethics and the Limits of Philosophy. Harvard University Press.
Wright, C. (1992). Truth and Objectivity. Harvard University Press.
Wright, C. (2004). Warrant for nothing (and foundations for free)? Supplement to the Proceedings of the Aristotelian Society 78, 167–212.
12
The Epistemic Innocence of Optimistically Biased Beliefs
Lisa Bortolotti, Magdalena Antrobus, and Ema Sullivan-Bissett
1. Realism and Wellbeing
Is realism conducive or inimical to psychological wellbeing?¹ One way to answer this question is to look at competing conceptions of what is involved in depression. According to what we are going to call the traditional view, realism and psychological wellbeing go together and if realism is compromised, so is wellbeing. This view has implications for the goals of psychological therapy: wellbeing is enhanced when realism is restored.
The ability to perceive reality as it 'really' is is fundamental to effective functioning. It is considered one of the two preconditions to the development of the healthy personality. (Jourard and Landsman 1980, p. 75)
The traditional view of the relationship between realism and wellbeing is supported by the observation that people with depression are both unrealistic and unwell, because negative biases in their thinking processes are responsible for their depressive symptoms including low mood (Beck 1967), and their false beliefs about lack of control over negative events generate a state of helplessness (Seligman 1974).
According to what we are going to call the trade-off view, realism and wellbeing do not always go together and, in at least some contexts, an agent's wellbeing may require unrealistic optimism. The trade-off view emerges as an explicit challenge to the traditional view, and it is based on the fact that the optimism bias is widespread among people who are psychologically well and cannot be found among people who experience certain forms of psychological distress.

The authors acknowledge the support of the European Research Council under the Consolidator grant agreement number 616358 for a project called Pragmatic and Epistemic Role of Factually Erroneous Cognitions and Thoughts (PERFECT). Lisa Bortolotti also acknowledges the support of the Hope and Optimism funding initiative for a project called Costs and Benefits of Optimism. The authors are grateful to the editors and the referees for helpful and constructive comments on a previous version of the chapter. The chapter also benefited from comments by Sophie Stammers, Andrea Polonioli, and Anneli Jefferson.
¹ In this chapter by 'realism' we mean an accurate representation of the world as it is, involving neither optimistic nor pessimistic distortion. 'Psychological wellbeing' is what Carol Ryff (1989) describes as encompassing self-acceptance, positive relations with others, autonomy, environmental mastery, purpose in life, and personal growth. So, psychological wellbeing includes not merely feeling well but also good functioning.
For instance, people in good psychological health are unreasonably or unrealistically optimistic when they form beliefs about their skills and talents, when they assess their capacity to control external events, and when they predict their future (see, for instance, Brown 1986; Dunning et al. 1989; Helgeson and Taylor 1993; Sedikides 1993). But people who are affected by low mood do not share such an inflated conception of their skills and talents, do not overestimate their capacity to control external events, and predict their future more realistically than people without low mood (see, for instance, Alloy and Abramson 1979; Abramson et al. 1981; Dobson and Pusch 1995; Presson and Benassi 2003; Msetfi et al. 2005). Such findings also have implications for the goals of psychological therapy: wellbeing is improved when the right kind of distortion (such as a doxastic bias leading to optimism) is introduced or reinstated. More important to our purposes here, one implication of the trade-off view as applied to depression is that one cannot both be psychologically well and a realist. Something has to give.
Increasingly, we must view the psychologically healthy person not as someone who sees things as they are but as someone who sees things as he or she would like them to be. Effective functioning in everyday life appears to depend upon interrelated positive illusions, systematic small distortions of reality that make things appear better than they are. (Taylor 1989, p. 228)
Research on the optimism bias suggests an important divergence from classic approaches to understanding mind and behaviour. It highlights the possibility that the mind has evolved learning mechanisms to mis-predict future occurrences, as in some cases they lead to better outcomes than do unbiased beliefs. (Sharot 2011, p. 945)
There are good reasons to challenge both the traditional view and the trade-off view. The traditional view cannot accommodate the 'depressive realism' effect, which proves to be a robust phenomenon across a number of contingency tasks. Depressive symptoms such as low moods support rather than hinder realism, making people more, rather than less, epistemically rational. The trade-off view can easily explain the 'depressive realism' effect, but is silent about other forms of psychological distress. Moreover, it needs to account for the fact that some optimistically biased beliefs have significant psychological costs as well as benefits (see Bortolotti and Antrobus 2015 for a brief review). One interpretation of the conflicting empirical results on the effects of optimistically biased beliefs is that the correlation between optimistically biased beliefs and wellbeing works when the beliefs reflect moderate optimism but ceases to work when they reflect radical optimism. Radical optimism can give rise to illusions of invulnerability leading to risky behaviours, generate distress and relational issues, and prevent people from anticipating setbacks and preparing for negative outcomes (Sweeney et al. 2006; Schacter and Addis 2007).
In addition to this, people with low mood are more realistic than people who self-enhance in some circumstances, but those suffering from major depressive disorder (that is, those who experience severe depressive symptoms for an extended period of time and whose low mood interferes with their daily life) are not realists but pessimists, suggesting that wellbeing and realism come apart only to an extent (see, for instance, Lennox et al. 1990; Ackerman and DeRubeis 1991; Dobson and Pusch 1995; McKendree-Smith and Scogin 2000; Fu et al. 2005; Carson et al. 2010; Moore and Fresco 2012; Baker et al. 2012).
In this chapter we revisit the relationship between realism and psychological wellbeing. We move the debate forward by proposing a new way to look at optimistically biased beliefs and their effects. In particular, we suggest that, although their adoption and maintenance are the result of biased reasoning, optimistically biased beliefs can confer on agents epistemic benefits that are significant and distinctive. Such epistemic benefits derive from their positive psychological effects. In Section 2, we introduce the notion of epistemic innocence. In Section 3, we describe the phenomenon of optimistically biased beliefs in the context of the literature of positive illusions and self-enhancement, and ask how optimistically biased beliefs are adopted and maintained. In Section 4, we review the main psychological benefits of optimistically biased beliefs as described in the empirical literature. In Section 5, we argue that optimistically biased beliefs have significant epistemic as well as psychological benefits. In Section 6, we argue that without optimistically biased beliefs the benefits described in Section 4 could not be attained. In Section 7, we discuss the implications of the arguments in Sections 5 and 6 for the epistemic innocence of optimistically biased beliefs.
2. Epistemic Innocence
Epistemic rationality concerns the relationship between a belief and the evidence for it. We call a belief epistemically irrational when it is not well-supported by the evidence or is not responsive to counterevidence. The notion of epistemic innocence has been developed to describe the status of beliefs that are epistemically irrational, but also have significant epistemic benefits that could not be attained by other means. Epistemic innocence has already been discussed in relation to delusional beliefs (Bortolotti 2015; Bortolotti 2016; Antrobus and Bortolotti 2017), confabulated explanations of actions or decisions guided by implicit bias (Sullivan-Bissett 2015), and cognitions that fail to reflect social inequalities (Puddifoot 2017).
The motivation for talking about epistemic innocence in the context of belief evaluation comes from an analogy with the legal notion of innocence defence. In the UK and US legal contexts, there are circumstances in which the agent is not deemed liable for an act that appears to be wrongful (Greenawalt 1986, p. 89). There are two senses of innocence that are relevant to innocence defences. The first (justification) applies to an act that is objectionable but is not condemned because it
prevents serious harm from occurring. Innocence here is due to the act being an effective response to an emergency situation. In legal contexts, self-defence is the most common example of this form of innocence. The person is not criminally liable for acting in self-defence even though her act would in other circumstances constitute an offence. The second sense of innocence (excuse) applies to an act that is objectionable but not condemned because the person performing it either could not have done otherwise (e.g., as in duress or compulsion) or did not realise that the act was objectionable (e.g., due to intoxication or insanity). Innocence here is due to the person not being responsible for performing the act.
When we apply the notion of innocence-defence to the epistemic domain, we get epistemic innocence. There are two conditions that epistemically irrational beliefs need to meet to qualify for epistemic innocence: first, they need to confer an epistemic benefit b on an agent A at a time t (epistemic benefit); and second, no epistemically preferable belief that would confer b is available to A at t (no alternatives).
Some qualifications are in order. A belief can be epistemically innocent without being epistemically justified. For a consequentialist, a belief is justified if the adoption of the belief furthers a legitimate epistemic goal, such as the maximization of true beliefs. For a deontologist, a belief is justified if by adopting the belief the agent fulfils her basic doxastic duties. Some beliefs fail to meet the standards for epistemic justification, but can still be innocent in the sense we are describing. The project of evaluating beliefs from an epistemic point of view is not exhausted by an investigation of the conditions for rational or justified belief. Considerations about whether epistemically irrational beliefs have epistemic benefits are important to agents' practices and mutual interactions. In some contexts, epistemically irrational beliefs may enable agents to behave in a way that is conducive to acquiring, retaining, and using relevant information, and exercising epistemic virtues. For instance, when people adopt delusions in the context of schizophrenia, this may temporarily reduce a paralysing anxiety they feel about the strangeness of their anomalous experience, and allow them to resume the automated processes involved in learning that were suspended (Bortolotti 2016). In exploring the status of epistemically irrational beliefs we also ask whether agents could adopt less epistemically objectionable beliefs that would confer the same benefits. For instance, agents often offer confabulatory explanations for their attitudes that are not grounded on evidence, but such explanations allow them to think about their attitudes in ways that may be conducive to peer feedback and further personal reflection. It is not clear whether this would happen in the absence of confabulatory explanations.
The first condition of epistemic innocence, epistemic benefit, is especially relevant to epistemic evaluation in a consequentialist framework as it aims to establish whether epistemically irrational beliefs have epistemic value. The second condition of epistemic innocence, no alternatives, is especially relevant to epistemic evaluation in a deontological framework as it aims to establish whether agents could believe
otherwise than they do and avoid the epistemically irrational belief altogether without giving up the epistemic advantage it confers.
3. Optimistically Biased Beliefs
Shelley Taylor (1989) discusses different types of 'positive illusions'. One has the illusion of control when one overestimates one's capacity to control independent, external events (e.g., Langer and Roth 1975). One experiences the better-than-average effect or has the illusion of superiority when one regards oneself as above average and overrates one's performance relative to others in a variety of domains (e.g., Brown 2012; Wolpe et al. 2014). The optimism bias is a tendency to predict that one's future will be largely positive and will yield progress, and that negative events will not be part of one's life (e.g., Lench and Bench 2012). In addition to the three classic illusions, there are other related phenomena: for instance, the self-enhancement and self-protection strategies studied by Constantine Sedikides and colleagues explain how people's overly flattering conceptions of themselves come about and especially how they persist in the face of negative feedback (e.g., Hepper and Sedikides 2012).
A striking example of the illusion of control occurs in betting behaviour. People tend to think that they have a better chance at winning when they themselves are rolling the dice in a casino, and consequently they bet more money in those circumstances (Vyse 1997). College professors' reactions to the question whether they do above average work illustrate the better-than-average effect: over 90 per cent of them say they do (Cross 1977). They cannot all be right about that. The optimism bias manifests itself when people underestimate the likelihood of experiencing divorce or developing a serious health condition during their lives (Sharot et al. 2011). An example of a self-enhancement strategy is when a man who inherits a successful business from his father claims to be a self-made man, underestimating the role of luck in his success. An example of a self-protection strategy is when a writer's manuscript is rejected by all the publishers it is sent to, but the writer continues to believe in her talent and blames the state of contemporary literature for the outcome.
Positive illusions and self-enhancing or self-protecting strategies can interact with one another. For instance, an agent's illusions of control and superiority are likely to contribute to optimistic predictions about her future: if she believes that she can control external events and that she is highly competent in a variety of domains, she might well conclude that it is in her power to avoid some negative events in her own life. It was found, for instance, that women with a history of breast cancer had the belief that they could avoid the return of the illness by changing their lifestyle, and that they were more likely than other survivors to be successful at that (Taylor 1983; Taylor et al. 1984; Taylor and Sherman 2008). Although the sense of control people feel is often illusory, not all positive beliefs about the self are false. Some people really are above average, and when they believe
they are, their beliefs are not only innocent; they can also be justified and true. However, if over 90 per cent of college professors believe that they are above average, then some of them are evaluating their performance too positively.

What reasoning biases lead people to adopt optimistically biased beliefs, and maintain them in the face of conflicting evidence? In the literature, cognitive and motivational biases have been discussed in relation to optimism. These may affect the evidence for adopting or maintaining optimistic beliefs.

Take the illusion of superiority. How do people come to believe that, in some domain, they perform better or are more skilled than is warranted by the evidence? One possibility is that agents are incompetent, that is, they are unable to evaluate the evidence at their disposal. This means that they fail to realise how their performance or skill fares against the appropriate standards (Kruger and Dunning 1999). This can result in the belief that one performs better or is more skilled than is warranted by the evidence.

How do people come to believe that they are better than average in some domain? Even competent agents who know what it takes to be skilful or talented in a specific domain may neglect information relevant to the comparison between their own performance and skills and those of others, focusing primarily on evidence about themselves. This can result in the belief that one is above average in a specific domain when that is not the case (Sedikides and Gregg 2007).

Regarding superiority, there may also be a form of mnemic neglect at play. This is the idea that people tend to focus on the praise they receive rather than the blame, and on their successes rather than their failures, when looking for evidence of their special talents or skills in their autobiographical memory. This can result in the belief that one’s talents or skills are better than they actually are. Moreover, people may fail to learn from negative feedback, and thus not realise the limits of their skills and the need to improve their performance (Hepper and Sedikides 2012). This is often due to the fact that the available feedback is either incomplete or dishonest (we do not always tell the truth when we offer feedback for fear of hurting people’s feelings).

Studies on self-protection strategies tell us that when people fail to achieve something they want to achieve, or receive negative feedback on their performance, they tend to reinterpret the event or the feedback in a more positive light. Typically, people are creative in accounting for failure in past performance, and tend to interpret negative feedback favourably. As a result, they maintain high self-esteem and overcome the disappointment that the failure or negative feedback may otherwise have caused them (Hepper and Sedikides 2012). For instance, in one study (Gilbert et al. 1998), people who had been turned down for a job felt better after ten minutes when the outcome could be attributed to one interviewer rather than to a panel of interviewers, because in the former case the situation was easier to rationalize (‘The guy is a jerk!’).

Once people adopt such optimistic beliefs, why don’t they revise them when conflicting evidence becomes available, for instance when they experience failure or receive negative feedback? One important finding is that there is an asymmetry in the
way in which agents update their beliefs in the light of new evidence (see Jefferson et al. 2016): evidence confirming desirable information is taken into account to a greater extent than evidence disconfirming it. The paradigm revealing asymmetrical belief updating was introduced by Sharot and colleagues (2011): people are asked to provide risk estimates for negative future events, are confronted with base rates for these events, and are later asked for an estimate of their own risk again. When people update their initial risk estimates, they tend to incorporate desirable base-rate information (i.e., information that risks are lower than expected) to a greater extent than undesirable information (i.e., information that risks are higher than expected). This finding shows that belief updating does not comply with formal learning theories, which specify that the extent to which an existing belief is updated should depend on the extent to which the new information conflicts with it, and not on whether the new information is desirable or not (Schultz et al. 1997). Because people update their beliefs to a greater extent after receiving good news than bad news, the positive skew in their expectations tends to be retained or amplified.
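To fix ideas, the asymmetry can be given a toy formalization. The sketch below is purely illustrative: it is not a model drawn from Sharot and colleagues (2011) or from the learning theories just mentioned, and all of the learning rates and risk estimates in it are invented. It encodes only the qualitative point that a valence-dependent learning rate revises beliefs more in response to good news than to bad, whereas a symmetric delta rule revises them in proportion to the prediction error alone.

```python
# Toy illustration only (not a model from the studies cited in the text):
# asymmetric updating of an estimate of one's own risk of a negative event.
# All learning rates and estimates are invented for the example.

def asymmetric_update(estimate, base_rate, lr_good=0.7, lr_bad=0.2):
    """Move a risk estimate towards the base rate, revising more when the
    news is desirable (risk lower than expected) than when it is not."""
    error = base_rate - estimate
    lr = lr_good if error < 0 else lr_bad  # negative error = good news
    return estimate + lr * error

def symmetric_update(estimate, base_rate, lr=0.45):
    """A simple delta rule: the same learning rate whatever the valence."""
    return estimate + lr * (base_rate - estimate)

# Good news: I estimated my risk at 40 per cent; the base rate is 20 per cent.
print(asymmetric_update(0.40, 0.20))  # 0.26: a large revision downwards
# Bad news: I estimated my risk at 10 per cent; the base rate is 30 per cent.
print(asymmetric_update(0.10, 0.30))  # 0.14: a much smaller revision upwards
```

Iterated over many events, the valence-dependent rule retains or amplifies a positive skew in expectations, whereas the symmetric rule would gradually correct it.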
4. Psychological Benefits of Optimistically Biased Beliefs

Positive illusions have been found to have a number of benefits which may begin to explain why they are so widespread. Many have argued that optimistically biased beliefs are biologically adaptive, making people more likely to survive, be healthy, reproduce, and have lasting relationships that ensure protection for their offspring (see Sharot 2011; McKay and Dennett 2009). At the level of individual agents, people whose beliefs and predictions are optimistically biased are better adjusted, feel better about themselves, are more sociable, and have a more resilient attitude towards stressful events than people whose beliefs and predictions are less optimistically biased (Campbell et al. 2002; Taylor et al. 2003). Allan Hazlett suggests that, just as there may be coping mechanisms in the form of self-deception that offset the negative consequences of bad life events, ‘less extreme’ biases might also be ‘useful as [a] means of coping with the events of everyday life’ (Hazlett 2013, p. 61).

Arguments to the effect that positive illusions contribute to mental health have been very influential, and controversial, in contemporary psychology. Taylor suggests a number of psychological benefits for optimistically biased beliefs, including happiness, satisfaction with one’s life, productivity, and motivation. It is interesting that, overall, the listed benefits constitute a heterogeneous set, featuring what we may consider aspects of pro-social and moral behaviour, such as being altruistic and caring; subjective feelings of wellbeing; and what we may consider prerequisites for successful agency, such as motivation, planning, and productivity.

[P]ositive beliefs about the self, the world, and the future are associated with happiness, sociability, motivation, and heightened activity. (Taylor 1989, p. 203)
Across numerous studies, people who enjoy psychological well-being (e.g., low depression, low anxiety, high self-esteem, happiness, and subjective well-being) exhibit a greater BTA [better-than-average] effect than those who are chronically anxious, depressed, unhappy, or dissatisfied with themselves or their life. (Brown 2012, p. 217)

[S]elf-enhancement is positively related to psychological resources (e.g., extraversion, positive reframing, optimism, mastery, planning, active coping), social resources (e.g., positive relations, family support), and psychological adjustment (e.g., purpose in life, personal growth, subjective well-being); on the other hand, self-enhancement is negatively related to psychological distress (e.g., anxiety, depression, neuroticism, hostility). (Alicke and Sedikides 2009)
Recently, the empirical literature has suggested that in some circumstances optimistically biased beliefs are detrimental, and that radical optimism does not always contribute to wellbeing or to good decision making (see Shepperd et al. 2013). This may be because excessively optimistic beliefs foster feelings of invulnerability or lead to disappointment when expectations are not met. In an interesting study by Richard W. Robins and Jennifer S. Beer (2001), students who had illusory beliefs about their academic ability were more likely to exhibit narcissistic traits and to make self-serving attributions. In the short term, their optimistically biased beliefs had a positive effect, contributing to the students’ wellbeing. But in the long term, optimistically biased students were found to become progressively less engaged with their academic context, have decreasing self-esteem, and experience lower levels of wellbeing. Self-enhancers’ academic performance was no better than that of people who had more realistic expectations, and when self-enhancers realized that they could not achieve the grades they expected, they started considering grades less important (the ‘sour grapes’ effect). Thus, it is important not to overstate the psychological benefits of positive illusions, as such benefits may be short-lived, and possibly even outweighed by costs down the line.

What we want to ask next is whether optimistically biased beliefs have any epistemic benefits.
5. The Epistemic Benefit Condition

Optimistically biased beliefs can be described as epistemically irrational but psychologically beneficial, and it is tempting just to accept that some epistemic irrationality is worth tolerating in exchange for greater psychological wellbeing or psychological adjustment. However, optimistically biased beliefs also have epistemic benefits that depend on their psychological benefits, and in the light of that they may be regarded as epistemically innocent.

Epistemic Benefit tells us that optimistically biased beliefs need to confer some significant epistemic benefit on the agent at the time when they are adopted. We saw that positive illusions are reported to have a number of psychological benefits. Do they also have epistemic benefits? We suggest that optimistically biased beliefs are
epistemically beneficial in the sense that they have positive consequences for the agent’s capacity to acquire, retain, and use relevant information or for her exercise of some intellectual virtues. First, by enhancing mood and preventing anxiety, optimism promotes concentration and socialization, leading to better cognitive performance and greater exchange of information with peers. Exchanges of information also enable feedback. Second, by supporting a person’s sense of self as that of a competent, largely coherent, and effective agent, optimism plays an important motivating role in the pursuit and fulfilment of the agent’s goals, including her epistemic ones.

We saw in Section 4 that positive illusions can reduce anxiety and improve mood, enhance pro-social attitudes and subjective feelings of wellbeing, and support resilience in the face of adversities. Anxiety compromises attention and concentration, impairing efficiency and often also effectiveness in a number of cognitive and inferential tasks (Beck et al. 1985; Fernández-Castillo and Caurcel 2015). Anxiety has further consequences—such as irritability and emotional disturbances—which cause the agent’s interaction with other people to be less frequent and less conducive to productive exchanges of information. For this reason, anxiety and negative emotions are found to interfere with the capacity to have social exchanges, share beliefs, and receive feedback. A happier and more sociable agent is more likely to have exchanges of information with her peers, more willing to share beliefs, and, as a result, more likely to receive feedback on such beliefs, thereby acquiring, retaining, and using information to a greater extent.

We also saw that the optimistic person experiences illusions of control and superiority. As a result, she often feels that it is in her power to understand and intervene on what goes on in her life and she develops a strong sense of competence and self-efficacy. These attitudes towards experience enhance the agent’s motivation and productivity, being more conducive to the acquisition, retention, and use of relevant information than the state of anxiety and self-doubt that a more realistic or a pessimistic perspective could give rise to. The intellectual virtues of perseverance and curiosity are also likely to be enhanced.

People’s self-efficacy beliefs determine their level of motivation, as reflected in how much effort they will exert in an endeavor and how long they will persevere in the face of obstacles. The stronger the belief in their capabilities, the greater and more persistent are their efforts [ . . . ]. When faced with difficulties, people who are beset by self-doubts about their capabilities slacken their efforts or abort their attempts prematurely and quickly settle for mediocre solutions, whereas those who have a strong belief in their capabilities exert greater effort to master the challenge [ . . . ]. Strong perseverance usually pays off in performance accomplishments. (Bandura 1989, p. 1175)

There is evidence that optimistic people present a higher quality of life compared to those with low levels of optimism or even pessimists. Optimism may significantly influence mental and physical well-being by the promotion of a healthy lifestyle as well as by adaptive behaviours and cognitive responses, associated with greater flexibility, problem-solving capacity and a more efficient elaboration of negative information. (Conversano et al. 2010, p. 25)
Positive illusions are instrumental to developing a sense of self that unifies the person’s varied experiences and fluctuating preferences, facilitating and underlying effective agency as characterized by sustained motivation and productivity. In general, an agent who does not give up at the first obstacle and perseveres in pursuing her goals is more likely to fulfil at least some of her goals, where goal fulfilment is not a direct outcome but is mediated by sustained effort. Some of the agent’s goals will be epistemic goals, or will have significant epistemic consequences.

A sense of self can be advantageous because it gives us a reference point for organising the large amount of incoming information from our daily lives [ . . . ], and allows for higher order cognitive processing such as planning, goal setting, and perspective taking. (Beer 2012, p. 333)
It has been shown that when people impose a largely illusory consistency on their preferences in a given domain (e.g., a job search), they are more efficient in the relevant tasks and more likely to fulfil their goals in that domain (e.g., receiving desirable job offers) than people who have a more realistic understanding of how their preferences vary.

Contrary to the conventional glorification of self-awareness, our research into the motivational effects of preference inconsistency highlights instead the costs of accurate self-appraisal. (Wells and Iyengar 2004, p. 83)
To sum up, the epistemically relevant benefits of optimistically biased beliefs are due to their promoting social connections and a more efficacious, competent, and coherent image of the self, which makes for a happier, more socially integrated, and more resilient agent, who pursues and often achieves her goals because she does not give up as easily.

Notice that, if we are right in identifying the potential epistemic benefits of positive illusions, these depend on the content of the beliefs being optimistically biased. It is the person’s sense of competence and control that is playing the role of enhancing subjective feelings of wellbeing, a pro-social attitude, and resilience, and these in turn support the social dimension of epistemic functionality, making it more likely that the person will exchange information with her peers and receive feedback. A more realistic belief about the self would be better grounded in evidence than an optimistically biased one, but would not play the same motivating role. Similarly, a person’s illusion that she has a consistent set of preferences and that her behaviour reflects her stable features (rather than being influenced by third parties and the contingencies of the environment) contributes to her being persistent, productive, and ultimately successful. More realistic beliefs about other agents and external circumstances playing a more pronounced causal role in determining the agent’s behaviour, and about the agent’s preferences being less stable and more fickle, would be more accurate, but would not support a person’s sense of agency to the same extent (Bortolotti 2018).
6. The No Alternatives Condition

Optimistically biased beliefs may have epistemic benefits that better-grounded beliefs about the self would lack. But even if better-grounded beliefs could deliver some of the epistemic benefits we identified, one question is whether such cognitions would be as easily available to agents as optimistically biased beliefs are. What if an epistemically better belief were not available to the agent at the time when the optimistically biased belief is adopted? As we saw in Section 3, this unavailability could be due to the agent’s lacking access to the information that would support the better-grounded belief.

This leads us nicely to the question whether people can avoid being optimistically biased about their future. If they cannot, then it would be difficult to deem them responsible for their epistemic irrationality and (typically) the falsehood or ill-groundedness of their optimistically biased beliefs. To borrow the terminology introduced by Ema Sullivan-Bissett (2015, p. 554) for describing the sense in which a cognition may be unavailable, we will see that the information on which people could ground more realistic beliefs can be unavailable to a varying extent.

First, it may be strictly unavailable when the person cannot access or retrieve the information on which the more realistic belief should be based. This kind of unavailability of epistemically better beliefs about oneself may apply in the case of the Dunning–Kruger effect. Justin Kruger and David Dunning suggest that “those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (Kruger and Dunning 1999, p. 1132). A person who has optimistically biased beliefs about her abilities in a domain in which she is incompetent (be it humour, logic, grammar, or moral conduct) may find that more accurate beliefs about her abilities are strictly unavailable to her because she does not know what ability in that domain requires or consists in.

Second, information can be motivationally unavailable when motivational factors inhibit the acceptance or use of the information. Recall the case of the job candidates who feel better about being rejected after an interview if the interview was conducted by just one person, because it is easier for them to attribute their failure to the interviewer’s idiosyncratic preferences or biases. In this context, the desire to feel better about their own performance drives the job candidates to interpret the situation as one where they were unfairly rejected, or rejected for arbitrary reasons. The possibility that their performance was sub-optimal, or inferior to that of other candidates, is not acknowledged.

Finally, information may be explanatorily unavailable. By this we mean to refer to cases in which information that would ground a better-grounded belief about the self is strictly speaking available to the person, but it is not regarded as plausible, and thus dismissed. One case that would fit this model is the case of a person who is generally competent in a domain, but experiences a token instance of a mess-up. One explanation of that mess-up is that the person performed badly, but that might be taken
by the person to be a bad explanation, because of her general, well-documented competence. It is more plausible to believe that it was someone else’s fault. Imagine that an established academic has a lot of papers published on topic T. In a hurry because of an impending deadline, she submits a paper on topic T to journal J, and it gets rejected. Instead of forming the belief that the paper was not good enough to merit publication, she forms the belief that the referees misunderstood her paper. Given her publication record, it is implausible to believe that her paper was not good enough. (Obviously, motivational factors can also be at play here.)

Another case that would fit explanatory unavailability is when a person’s choice is determined by external factors of which she is not aware (as in priming or framing effects), but the person explains the choice as a rational choice based on reasons in favour of the chosen outcome. It is subjectively implausible to believe that how a situation is presented may have such a powerful effect on choice, so the person finds it more plausible to believe that she ‘controlled’ the situation and made a choice based on reasons. Imagine that a person at a supermarket buys packaged salmon fillets and when she gets home she explains that she chose them as they looked fresher than the alternatives. In reality, she picked them as they were placed in the fridge on her right (a position effect) and they had the label ‘75 per cent lean’ as opposed to ‘25 per cent fat’ (a framing effect). If the shopper is not aware of the factors potentially affecting her choice, it is more plausible for her to think that she picked those fillets because of how they looked to her than because they were on her right, or because they had a positively framed label on them.
7. Conclusions and Implications

In this chapter we have considered the psychological and epistemic benefits of optimistically biased beliefs. We began by introducing two views in the empirical literature about the relationship between realism and psychological wellbeing, and suggested that both views oversimplify the relationship. It is implausible to claim that realistic beliefs are more conducive to wellbeing than optimistically biased beliefs, as the traditional view suggests. But it is also implausible to claim that we need to forgo realism if we care about wellbeing, as the trade-off view suggests.

The relationship between realism and psychological wellbeing is likely to be more complicated. According to the recent empirical literature, in some contexts well-grounded beliefs about the self contribute to wellbeing, and in some other contexts they do not. The mechanisms responsible for these interactions are being investigated and no general conclusion as to why this is the case has been reached yet, but it is likely that different outcomes depend on different requirements for the completion of different tasks, and on the capacity that people’s beliefs have to foster a sense of agency, socialization, resilience, and motivation. Rather than sketching a big picture of the relationship between realism and wellbeing, in this chapter we have undermined the view according to which the psychological benefits of beliefs about the self
come at the expense of their epistemic status. Optimistically biased beliefs about the self may be good, both psychologically and epistemically, despite their being the result of the influence of cognitive biases and motivational factors on reasoning, such as the asymmetrical updating of beliefs about the self. By appealing to the notion of epistemic innocence as a way to describe beliefs that are at the same time epistemically irrational and epistemically beneficial, we argued that optimistically biased beliefs can have epistemic benefits that would not be as easily available to the agent via less epistemically costly beliefs. Although this does not mean that optimistically biased beliefs are epistemically rational or epistemically justified, it does enable us to come to a more balanced view of their contribution to an agent’s epistemic functionality.
References

Abramson, L.Y., Alloy, L.B., and Rosoff, R. (1981). Depression and the generation of complex hypotheses in the judgement of contingency. Behaviour Research and Therapy, 19, 35–45.
Ackerman, R. and DeRubeis, R. (1991). Is depressive realism real? Clinical Psychology Review, 11, 565–84.
Alicke, M. and Sedikides, C. (2009). Self-enhancement and self-protection: what they are and what they do. European Review of Social Psychology, 20, 1–48.
Alloy, L.B. and Abramson, L.Y. (1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4), 441–85.
Antrobus, M. and Bortolotti, L. (2017). Depressive delusions. Filosofia Unisinos, 17(2), 192–201.
Baker, A.G., Msetfi, R.M., Hanley, N., and Murphy, R.A. (2012). Depressive realism? In M. Haselgrove and L. Hogarth (eds.), Clinical Applications of Learning Theory. Sussex: Psychology Press.
Bandura, A. (1989). Human agency in social cognitive theory. American Psychologist, 44(9), 1175–84.
Beck, A.T. (1967). Depression: Causes and Treatment. Philadelphia: University of Pennsylvania Press.
Beck, A.T., Emery, G., and Greenberg, R.L. (1985). Anxiety Disorders and Phobias: A Cognitive Approach. New York: Basic Books.
Beer, J. (2012). Self-evaluation and self-knowledge. In S.T. Fiske and C.N. Macrae (eds.), Handbook of Social Cognition. London: SAGE.
Bortolotti, L. (2015). The epistemic innocence of motivated delusions. Consciousness and Cognition, 33, 490–9.
Bortolotti, L. (2016). The epistemic benefits of elaborated and systematised delusions in schizophrenia. British Journal for the Philosophy of Science, 67(3), 879–900.
Bortolotti, L. (2018). Optimism, agency, and success. Ethical Theory and Moral Practice, 21(3), 521–35.
Bortolotti, L. and Antrobus, M. (2015). Costs and benefits of realism and optimism. Current Opinion in Psychiatry, 28(2), 194–8.
Brown, J.D. (1986). Evaluations of self and others: self-enhancement biases in social judgments. Social Cognition, 4(4), 353–76.
Brown, J.D. (2012). Understanding the better than average effect: motives (still) matter. Personality and Social Psychology Bulletin, 38, 209–19.
Campbell, W.K., Rudich, E.A., and Sedikides, C. (2002). Narcissism, self-esteem, and the positivity of self-views: two portraits of self-love. Personality and Social Psychology Bulletin, 28(3), 358–68.
Carson, R.C., Hollon, S.D., and Shelton, R.C. (2010). Depressive realism and clinical depression. Behaviour Research and Therapy, 48(4), 257–65.
Conversano, C., Rotondo, A., Lensi, E., Della Vista, O., Arpone, F., and Reda, M.A. (2010). Optimism and its impact on mental and physical well-being. Clinical Practice and Epidemiology in Mental Health, 6, 25–9.
Cross, P. (1977). Not can but will college teaching be improved? New Directions for Higher Education, 17, 1–15.
Dobson, K.S. and Pusch, D. (1995). A test of the depressive realism hypothesis in clinically depressed subjects. Cognitive Therapy and Research, 19(2), 179–94.
Dunning, D., Meyerowitz, J.A., and Holzberg, A.D. (1989). Ambiguity and self-evaluation: the role of idiosyncratic trait definitions in self-serving assessments of ability. Journal of Personality and Social Psychology, 57(6), 1082–90.
Fernández-Castillo, A. and Caurcel, M.J. (2015). State test-anxiety, selective attention and concentration in university students. International Journal of Psychology, 50, 265–71.
Fu, T., Koutstaal, W., Fu, C.H.Y. et al. (2005). Depression, confidence, and decision: evidence against depressive realism. Journal of Psychopathology and Behavioral Assessment, 27, 243–52.
Gilbert, D.T., Pinel, E.C., Wilson, T.D., Blumberg, S.J., and Wheatley, T.P. (1998). Immune neglect: a source of durability bias in affective forecasting. Journal of Personality and Social Psychology, 75, 617–38.
Greenawalt, K. (1986). Distinguishing justifications from excuses. Law and Contemporary Problems, 49(3), 89–108.
Hazlett, A. (2013). A Luxury of the Understanding: On the Value of True Belief. Oxford: Oxford University Press.
Helgeson, V.S. and Taylor, S.E. (1993). Social comparisons and adjustment among cardiac patients. Journal of Applied Social Psychology, 23(15), 1171–95.
Hepper, E. and Sedikides, C. (2012). Self-enhancing feedback. Available at accessed 10 October 2016.
Jefferson, A., Bortolotti, L., and Kuzmanovic, B. (2016). What is unrealistic optimism? Consciousness and Cognition, 50, 3–11.
Jourard, S.M. and Landsman, T. (1980). Healthy Personality: An Approach from the Viewpoint of Humanistic Psychology. New York: Macmillan.
Kruger, J. and Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–34.
Langer, E.J. and Roth, J. (1975). Heads I win, tails it’s chance: the illusion of control as a function of the sequence of outcomes in a purely chance task. Journal of Personality and Social Psychology, 34, 191–8.
Lench, H.C. and Bench, S.W. (2012). Automatic optimism: why people assume their futures will be bright. Social and Personality Psychology Compass, 6, 347–60.
Lennox, S.S., Bedell, J.R., Abramson, L.Y., Raps, C., and Foley, F.W. (1990). Judgement of contingency: a replication with hospitalized depressed, schizophrenic and normal samples. Journal of Social Behavior and Personality, 5(4), 189–204.
McKay, R. and Dennett, D. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32, 493–561.
McKendree-Smith, N. and Scogin, F. (2000). Depressive realism: effects of depression severity and interpretation time. Journal of Clinical Psychology, 56(12), 1601–8.
Moore, M.T. and Fresco, D.M. (2012). Depressive realism: a meta-analytic review. Clinical Psychology Review, 32(6), 496–509.
Msetfi, R.M., Murphy, R.A., Simpson, J., and Kornbrot, D.E. (2005). Depressive realism and outcome density bias in contingency judgments: the effect of the context and intertrial interval. Journal of Experimental Psychology: General, 134, 10–22.
Presson, P.K. and Benassi, V.A. (2003). Are depressive symptoms positively or negatively associated with the illusion of control? Social Behavior and Personality: An International Journal, 31(5), 483–95.
Puddifoot, K. (2017). Dissolving the epistemic/ethical dilemma over implicit bias. Philosophical Explorations, 20(supp. 1), 73–93.
Robins, R.W. and Beer, J.S. (2001). Positive illusions about the self: short-term benefits and long-term costs. Journal of Personality and Social Psychology, 80(2), 340–52.
Ryff, C.D. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological wellbeing. Journal of Personality and Social Psychology, 57(6), 1069–81.
Schacter, D.L. and Addis, D.R. (2007). The cognitive neuroscience of constructive memory: remembering the past and imagining the future. Philosophical Transactions of the Royal Society B, 362(1481), 773–86.
Schultz, W., Dayan, P., and Montague, P.R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593–9.
Sedikides, C. (1993). Assessment, enhancement, and verification determinants of the self-evaluation process. Journal of Personality and Social Psychology, 65(2), 317–38.
Sedikides, C. and Gregg, A.P. (2007). Portraits of the self. In M.A. Hogg and J. Cooper (eds.), The Sage Handbook of Social Psychology. Thousand Oaks, CA: Sage Publications Ltd.
Seligman, M.E. (1974). Depression and Learned Helplessness. Oxford: John Wiley & Sons.
Sharot, T. (2011). The optimism bias. Current Biology, 21(23), R941–R945.
Sharot, T., Korn, C.W., and Dolan, R.J. (2011). How unrealistic optimism is maintained in the face of reality. Nature Neuroscience, 14, 1475–9.
Shepperd, J.A., Klein, W.P., Waters, E.A., and Weinstein, N.D. (2013). Taking stock of unrealistic optimism. Perspectives on Psychological Science, 8(4), 395–411.
Sullivan-Bissett, E. (2015). Implicit bias, confabulation, and epistemic innocence. Consciousness and Cognition, 33, 548–60.
Sweeny, K., Carroll, P.J., and Shepperd, J.A. (2006). Thinking about the future: is optimism always best? Current Directions in Psychological Science, 15, 302–6.
Taylor, S.E. (1983). Adjustment to threatening events: a theory of cognitive adaptation. American Psychologist, 38, 1161–73.
Taylor, S.E. (1989). Positive Illusions: Creative Self-deception and the Healthy Mind. New York: Basic Books.
Taylor, S.E. and Sherman, D.K. (2008). Self-enhancement and self-affirmation: the consequences of positive self-thoughts for motivation and health. In J.Y. Shah and W.L. Gardner (eds.), Handbook of Motivational Science. New York: Guilford Press.
Taylor, S.E., Lichtman, R.R., and Wood, J.V. (1984). Attributions, beliefs about control, and adjustment to breast cancer. Journal of Personality and Social Psychology, 46, 489–502.
Taylor, S.E., Lerner, J.S., Sherman, D.K., Sage, R.M., and McDowell, N.K. (2003). Are self-enhancing cognitions associated with healthy or unhealthy biological profiles? Journal of Personality and Social Psychology, 85, 605–15.
Vyse, S.A. (1997). Believing in Magic: The Psychology of Superstition. New York: Oxford University Press.
Wells, R.E. and Iyengar, S. (2004). Positive illusions of preference consistency: when remaining eluded by one’s preferences yields greater subjective well-being and decision outcomes. Organizational Behavior and Human Decision Processes, 98, 66–87.
Wolpe, N., Wolpert, D.M., and Rowe, J.B. (2014). Seeing what you want to see: priors for one’s own actions represent exaggerated expectations of success. Frontiers in Behavioral Neuroscience, 8, 232.
13

Sovereign Agency

Matthew Noah Smith
1. Introduction

This chapter makes a case for the practical authority of deliberations and the intentions they yield. I argue that sound deliberations yielding an intention to act are together (i) a content-independent reason not to re-open deliberations about how to act and (ii) a content-independent reason to act as intended.¹ Many philosophers have argued that this sort of “bootstrapping” is impossible.² In this chapter, I neither

This chapter benefited from comments from audiences at Oxford University, Leeds University, the Bled Philosophy Conference, and the Konstanz Reasoning Conference. Special thanks to Magdalena Balcerak Jackson and Brendan Balcerak Jackson for organizing the Konstanz conference and for excellent comments on a draft of this chapter.

¹ I explain the concept of content-independent reasons below.

² For the contemporary canonical statement of the bootstrapping objection, see Michael Bratman, Intention, Plans, and Practical Reason (Cambridge, MA: Harvard University Press, 1987), pp. 24–7. In “Intentions, Practical Rationality and Self-Governance,” Ethics 119 (April 2009): 411–43, Bratman challenges the bootstrapping objection with respect to non-modifiable intentions that are expressions of one’s self-constituting policies or commitments. Other statements of the bootstrapping objection include the following. “Forming an intention to do something surely cannot give one a reason to do it that one would not otherwise have. If it did, we could give ourselves a reason to do something just by intending to do it; and that cannot be right” (Richard Holton, “Rational Resolve,” Philosophical Review 113 (2004): 507–35, 513); “it is not credible that, just by adopting some end, you make it the case that you have reason to pursue it” (John Broome, “Have We Reason to Do as Rationality Requires? A Comment on Raz,” Journal of Ethics and Social Philosophy Symposium 1 (2005): 1–8, 1); and “the blanket conclusion that having goals or intentions provides reasons [is false]” (Joseph Raz, “Instrumental Rationality: A Reprise,” Journal of Ethics and Social Philosophy Symposium 1 (2005): 1–19, 19). See also Joseph Raz, “The Myth of Instrumental Rationality,” Journal of Ethics and Social Philosophy 1/1 (April 2005): 1–28; John Broome, “Are Intentions Reasons?” in Practical Rationality and Preference: Essays for David Gauthier, ed. Christopher Morris and Arthur Ripstein (Cambridge: Cambridge University Press, 2001), pp. 98–120; John Broome, “Reasons,” in Reason and Value, ed. R. Jay Wallace et al. (Oxford: Oxford University Press, 2004), 28–55; John Broome, “Does Rationality Give Us Reasons?” Philosophical Issues 15 (2005): 321–37; Kieran Setiya, “Cognitivism About Instrumental Reason,” Ethics 117 (2007): 647–73 (Setiya describes bootstrapping as “illicit” but then goes on to defend a belief-based “cognitivist” version of bootstrapping); and Garrett Cullity, “Decisions, Reasons and Rationality,” Ethics 119 (2008): 57–95, 63–7. See also Luca Ferrero, “Decisions, Diachronic Autonomy and the Division of Deliberative Labor,” Philosophers’ Imprint 10/2 (2010): 1–23, especially pp. 3–6. An influential related discussion appears in Christine Korsgaard, “The Normativity of Instrumental Reason,” in Ethics and Practical Reason, ed.
rehearse nor challenge those arguments.³ Rather, my aim is to defend bootstrapping, i.e., to defend the claim that, given certain conditions, and when taken together, a deliberation whether to ϕ and the intention to ϕ that it yields are, for the intending agent, a reason to ϕ. That such a defense is available invites revisiting the objections to bootstrapping and reflecting on whether those objections are as strong as they are typically taken to be.⁴

The chapter proceeds in two steps. I first argue that deliberations and intentions have certain functional roles: namely, deliberations about whether to ϕ that yield an intention to ϕ function as a reason to intend to ϕ, intentions to ϕ function as reasons not to re-open deliberations about whether to ϕ, and finally intentions to ϕ function as reasons to ϕ (these are all agent-relative reasons, by the way). The second step is to argue that intentions ought to play these roles. I then consider an important objection.

Garrett Cullity and Berys Gaut (Oxford: Oxford University Press, 1997), 215–54. In “Intentions, Practical Rationality and Self-Governance,” at footnote 20 (pp. 416–17), Bratman questions whether both Broome and Raz should be read as treating the bootstrapping objection as something other than a blanket rejection of intentions being reasons.

³ For an overview of the arguments and decisive objections against them, see Matthew Noah Smith, “One Dogma of Philosophy of Action,” Philosophical Studies 173/8 (2016): 2249–66.

⁴ Nothing here is meant to apply to questions related to epistemic bootstrapping. But, there may be connections. For more on epistemic bootstrapping, see, e.g., Stewart Cohen, “Basic Knowledge and the Problem of Easy Knowledge,” Philosophy and Phenomenological Research 65 (2002): 309–29; Jonathan Vogel, “Epistemic Bootstrapping,” Journal of Philosophy 105/9 (2008): 518–39; Stewart Cohen, “Bootstrapping, Defeasible Reasoning, and A Priori Justification,” Philosophical Perspectives 24 (2010): 141–59; and quite generally Jonathan Weisberg, “The Bootstrapping Problem,” Philosophy Compass 7 (2012): 597–610.
2. Assumptions

One must always take a stand on certain philosophical issues when defending a view. This makes the soundness of that view conditional on the soundness of those assumptions. The assumptions I am making, then, are the following.

First, I assume that intentions are mental states. Second, I assume that intentions bear a special action-establishing relationship to behavior such that when that relationship is instantiated (and, as they say, “the world cooperates”) there is action and, at least for the paradigmatic instances of action, when that relationship is not realized, there is at best only behavior (for example, what makes my sneezing not an action is that even if I intend to sneeze and then some external stimulus makes me sneeze, the relationship between the intention and the sneeze is not the right one, whatever that may be). Furthermore, I am concerned primarily with prospective intentions and not intentions in action, at least insofar as the latter bear no significant rational dependence on the former.
Finally, this chapter discusses only intentions that are products of deliberations. For now, call these deliberative intentions. Deliberative intentions are distinct from intentions adopted in the absence of any deliberations whatsoever. Such intentions are akin to sudden urges. Call these arational intentions. There are also intentions adopted contrary to one’s stable best judgment about the balance of reasons. Call these akratic intentions.⁵ The latter two classes of intentions are not paradigmatic products of healthy agency. They are instead features of lesser or degraded forms of agency. My methodological commitment is to investigate paradigmatic instances of healthy agency in order to understand agency tout court. Thus, I put aside arational and akratic intentions, as these intentions are not features of paradigmatic forms of human agency and therefore are not paradigmatic kinds of intention.

The deliberations that yield deliberative intentions need not be explicit, conscious, or executed immediately prior to that particular intention. They can instead be in the psychological background, or recoverable upon prompting, or recollected as older deliberations that drove one to adopt certain practical policies, or something else along these lines.⁶ In these cases, the intention is easily or immediately recognizable to the agent as based on reasoning she has already performed, or values and commitments she has already adopted. These values and commitments can in turn be products of prior reflections or a series of decisions made on the basis of deliberations and then enacted so many times that habits form. In this way, we can deliberatively settle on values and commitments that allow us to make the quick decisions that determine how so much of our lives go. But, these habits and quick decisions are just expressions of the deliberations from which they initially sprang. In this way, habitual intentions (so understood) have the sort of provenance that makes them more similar to deliberative intentions than to arational intentions.
3. Deliberation as Authorization

The function of deliberation is to produce action by way of an intention. But, deliberation functions correctly not just by producing an intention that in turn produces an action. For, when one deliberates, one deliberates about what the thing to do is, or what the best thing to do is, or how one should live. That means that the function of deliberations is to produce an action that one has decided to take on the basis of those deliberations. So, deliberations do not merely cause actions. Rather, their function is to determine what it is reasonable for the agent to do and

⁵ There are also intentions to establish a Nash equilibrium. In these cases, one adopts an intention because one has conclusive reason to select some option from the available equilibria, although this reason has nothing to do with the merits of one option over another. Call these plumping intentions.

⁶ See, e.g., Michael Bratman, “Valuing and the Will,” Philosophical Perspectives 14 (2000): 249–65; and Michael Bratman, “Intention, Planning, and Temporally Extended Agency,” The Philosophical Review 109/1 (2000): 35–61.
then to produce an action on the basis of those reflections about what it is reasonable to do. In this way, the function of deliberations is to make it such that the agent is authorized to take the action the deliberations recommend.

The intuition behind this is that insofar as we have practical authority over ourselves at all—insofar as we are the authors of our own actions—we must be able to authorize our actions. But, authorization is a process. What process? Maybe it is something over and above practical deliberation. But, that would make authorization quite mysterious—a process we have never really noticed. Instead, we might just treat deliberations yielding an intention to be the authorization process itself. Doing this allows us to explain how actions are not merely caused by the deliberations, as ripples on water are caused by a stone hitting that water. Furthermore, it helps us to see how diminutions in the capacity to deliberate limit or diminish one’s authority over oneself.⁷

Deliberations alone, though, cannot authorize taking some action. For, deliberations that are never complete authorize nothing at all. It is only the completion of deliberation that allows for authorization. Completed deliberations produce intentions. It is this deliberation-intention combination that does the authorizing. But, this invites queries into the relationship between deliberations and intentions. Deliberations cannot merely cause any old intention. The deliberations must also authorize those intentions. In this way, there is a chain of authorization linking deliberation to intention to action. Thus, insofar as the function of deliberations is to authorize actions, they also function as authorizations of intentions, and these together function as authorizations of the intended action. In short, the function of deliberations and intentions is to give the agent both standing and reason to take the action.
4. A Closer Look at the Authorization of Intentions

That is a very broad sketch of an argument for deliberations and intentions functioning as authorities. Let us take a closer look.

Suppose I deliberate about whether to go to the store. On the basis of those deliberations, I form the intention to go to the store. But, before I leave, a mad scientist uses his new mind-control technology to make me forget everything that has just happened, thereby destroying the deliberation-authorized intention to go to the store. The mad scientist also has a device that reads my mind prior to his zapping me. On the basis of that, he implants in me, after destroying my memory and my authorized intention to go to the store, a new intention to go to the store. This intention causes me to go to the store. This sort of scenario is a case of compromised agency. For, although I am doing as I intended
⁷ For more on this and the moral significance of such agents, see Agnieszka Jaworska, “Respecting the Margins of Agency: Alzheimer’s Patients and the Capacity to Value,” Philosophy and Public Affairs 28 (1999): 105–38.
because of an intention, I am not going to the store on the basis of my deliberations. What is missing is the authorization of that intention. It is as if an army had swept into a country, deposed its government, destroyed the government agencies, and then rebuilt everything entirely in the image of the old government. Meet the new boss, it’s not the same as the old boss.⁸

This thought experiment suggests that mere causal connections between deliberations, intentions, and action are not sufficient for the realization of agency to the fullest and richest extent. What is required are deliberations authorizing the formation of an intention, with that intention in turn authorizing the action. Causal connections of the right kind are merely necessary but partial grounds of the authorization, in the same way that utterances of the right kind are merely necessary but partial grounds of, e.g., promissory obligation. If authorization were not required, then all we’d have would be deliberations about how one should live, arational causal connections between those deliberations and an intention, and between the intention and some behavior.

At this stage, someone might resist this by arguing that deliberation’s role is entirely instrumental. The best possible deliberations do no more than reliably cause the correct intention, i.e., the one that the balance of reasons supports. On this picture, then, the character of the psychological process that leads up to behavior is irrelevant. Whether the intention meets independent standards is all that matters. There would be no difference between good deliberations about whether to live life in some way and mere causal influences that force one to live life in that way.

But, there is pressure for our deliberations to bear a proper rational relationship to our intentions. For example, imagine the following deliberative train: I love cooking, I ought to do what I love (so long as it’s feasible), I’ve got a lot of experience working on the line in a restaurant, I’ve been told I am pretty good at cooking, restaurant life is better than lives associated with other career paths open to me (despite the fact that I will make more money in some of those career paths), and so on along those lines. These deliberations cause me to decide to pursue a career as a chef. Now imagine this deliberative train: my cat’s breath smells like cat food, a name is just a sound somebody makes when they need you, the NHS should be fully funded, these fingers have veins that run straight to the soul of man—the right hand, friends, the hand of love. This train of thoughts causes me to decide to pursue a career as a chef.
⁸ There is no sense in which this is about free will. It is about whether an action is authorized by deliberations. So long as one recognizes an important difference between the intention produced by deliberations and the intention produced by a mad scientist, then one recognizes the significance of authorization. More needs to be said about the conditions on which authorization supervenes. But, it seems that there is a referential component to authorization: X authorizes this intention, and any other intention implanted, however similar, is not authorized. Notice also that if one knew about the mad scientist, and one’s deliberations authorized any intention with a certain content regardless of its provenance, then the mad scientist-implanted intention is authorized.
Let us assume that pursuing a career as a chef is the course of action I ought to take. Are there any differences between these two cases? One case is a case of full-blooded agency and the other is something far less than that. But, if the sole function of deliberations is merely to cause certain intentions to perform “correct” actions, then there is no functional difference between these deliberation-intention pairs. But, there clearly is a difference. The first is a case of deliberation authorizing an intention. The second is just a mess and it cannot authorize anything at all. To see this, suppose the second deliberation was publicly revealed. The recommendation that I revisit my decision to become a chef would be appropriate, even given the fact that becoming a chef was the correct course of action. Such a recommendation wouldn’t be warranted in the case of the first deliberation, though. That’s because the first deliberation authorized the intention to become a chef.

One might respond that I am confused. The function of deliberations is to reliably yield correct decisions, goes this objection. The formal character of the second wild train of thought is such that if repeated it is not likely to yield a correct decision. So, that train of thought having functioned as deliberations is a problem, even if it luckily yielded the correct decision. The recommendation to reconsider is therefore warranted. For, that would presumably steer the agent towards more reliable deliberative practices. Thus, insofar as deliberations are significant with respect to a current (correct) decision, they are at best attractive window dressing—they make the decision “look better” and so have at best aesthetic significance. They do not make a practical difference to the authority of the intention.

This is odd, though. First, the decision itself does not seem more or less aesthetically attractive given the quality of the deliberations behind it. Only the deliberations are apt bearers of that value. For, it is false that because some aesthetically attractive X produced Y, Y is therefore made more attractive. Having beautiful parents does not make a child more beautiful.

Second, suppose that we are certain that this instance of bad thinking is a one-off case. Even if we know the decision it yielded is the correct one, it is still appropriate to criticize the deliberations and recommend reconsideration (assuming that the agent has the capacity to re-deliberate correctly). For example, suppose that wise King Solomon has up until now always made the wise decision on the basis of reliable deliberations. This morning, though, he tried biblical marijuana for the first time. Now high, he is faced with the question of which of two women is the mother of a baby. Stoned, Solomon thinks about the cats prancing about his palace, then about the attractiveness of their swishing claws, and then he reflects on the ridiculous weight of his crown. As a result of this train of thought, he decides to threaten to cut the baby in two in order to determine who gets the baby. Because Solomon does not enjoy being stoned, this will be a one-off case. And because the biblical marijuana is weak, he will return to brilliant reasoning very quickly. Let us now suppose that the threat to cut the baby in two is the correct decision. Is there something defective about Solomon threatening to cut
the baby in two on the basis of his stoned reasoning? Yes. For, the decision-generating train of thought is utterly corrupt. It must be tossed out along with the decision it generated. So, even if Solomon’s decision is correct, and even if everyone knows that Solomon will never again reason this poorly, that he has made the decision on the basis of such corrupt reasoning is grounds for him to throw out that decision and try again. That he reliably comes to the right decision just is not germane to this issue.

I conclude, then, that the function of deliberations is to authorize actions via a certain pattern of reasoning. The deliberation functions as an action-authorizing process via the authorization of intentions, and it is through these intentions that the actions are authorized.⁹
5. Catching My Breath

So far, I’ve argued that deliberations have the function of authorizing actions. Furthermore, if deliberations are to authorize actions, then deliberations must also authorize the intentions that are the next proximate source of behavior. Finally, if the authorization is to flow without disruption from deliberation to action, then the intentions also must function as authorities with respect to the action in question. In short, the function of an agent’s deliberations and intentions is to authorize the agent’s actions.

In the next sections, I offer further support for this claim. The argumentative strategy is to show that a parsimonious partial understanding of three interrelated phenomena—namely, responsibility for actions, compliance with reasons, and self-knowledge via one’s actions—involves intentions functioning as action-authorizing attitudes.
6. Deliberation and Responsibility

The paradigmatic case of responsibility for action is the case of someone acting on the basis of an intention that was produced by sound deliberation.¹⁰ By “paradigm”

⁹ One might object at this stage that my account of deliberations appears to commit me to an antinaturalist metaethical view. No good account of deliberations should do that! This objection fails, though. First, I am working within the realm of reasons, in which people happily talk of the balance of reasons making some action the right thing to do. That is a purely normative relationship, and not in any way a causal one (despite the unintentionally deceptive physics-y talk of the weight of reasons, the balance of reasons, and so on). Furthermore, everything I am saying can be interpreted within a suitably powerful expressivist framework. If we can use expressivism to analyze normative language, then we can apply that to accounts of deliberations that authorize intentions and actions. So nothing I am saying here commits me to an objectionable metaethical view.

¹⁰ Thus the old legal saw, actus reus non facit reum nisi mens sit rea. For philosophical discussion, see, e.g., Jonathan Glover, Responsibility (London: Routledge, 1970); Thomas Scanlon, “The Significance of Choice,” in The Tanner Lectures on Human Values, Vol. 8, ed. Sterling M. McMurrin (Salt Lake City: University of Utah Press, 1988), pp. 149–216; Susan Wolf, Freedom Within Reason (Oxford: Oxford
here, I mean the pattern of phenomena that realizes the archetypical instance of responsibility. There are certainly other patterns of phenomena that realize responsibility, but these patterns do so conditional on their being properly similar to the paradigm. Any case of responsibility for action that departs radically from this paradigm is ipso facto a radically unusual case—one that would either require an ad hoc amendment to our received conception of responsibility, or a wholesale reconsideration of whether that conception of responsibility is correct. Thus, when I say that the paradigmatic case of responsibility for action is the case of action produced by intentions that were in turn produced by healthy deliberation, I am saying that this is our starting point for thinking about responsibility, and quite far from a complete theory of responsibility. Nonetheless, it is a starting point with a bite: every step away from it must be made on the basis of good argument.

It would be quite odd if in this paradigmatic case of responsibility there were no normative connection between the agent’s psychology and her behavior. For, we generally think that mere causal links between deliberations and intentions, and intentions and behavior, are not sufficient for responsibility. First, were intentions merely causes, then their effects would be akin to the effects of an outside force. The more one is affected by non-rational, normatively inert forces, the less responsible one is for one’s mental life and one’s behavior. Second, if the deliberations merely caused the intentions, then it is entirely possible that one could deliberate about whether to ϕ, never decide to ϕ (and so it could still be open that one might decide not to ϕ), and yet, as a causal upshot of that deliberation, intend to ϕ. This is like the case of a deviant causal chain producing behavior. Deviant causal chains yield deviant instances of responsibility. So, in the paradigmatic case of responsibility, one’s deliberations and intentions cannot bear a purely causal relation to one’s action.

For example, suppose I am fantasizing about stealing a sandwich. What is a fantasy? A fantasy is, for our purposes, a first-person train of thought that is deliberatively offline. It is just idle imagining in which I am the protagonist.¹¹ Now,
¹¹ This is a crucial point. We must distinguish our off-line, first-personal imaginings from our deliberations, even if when represented propositionally they are indistinguishable. The reason why is that the psychological processes in which the propositions are tokened—fantasy and deliberation—have different functions. For further clarity, consider the following example:

The next day Rastignac dressed himself very elegantly, and at about three o’clock in the afternoon went to call on Mme de Restaud, indulging on the way in those dizzily foolish dreams which fill the lives of young men with so much excitement: they then take no account of obstacles, nor of dangers, they see success in everything, poeticize their existence simply by the play of their imagination, and render themselves unhappy or sad by the collapses of projects that had as yet no existence save in their heated fancy. (Honoré de Balzac, Le Père Goriot, translation from Dorrit Cohn, Transparent Minds: Narrative Modes for Presenting Consciousness in Fiction (Princeton, NJ: Princeton University Press, 1983), p. 24)

In this instance, Rastignac is fantasizing about taking a course of action and then further imagining how it will play out.
Now, suppose that I am fantasizing in the following sort of way: “Mmmm I’m hungry . . . and look at that amazing sandwich . . . it’s got everything I could ever want in a sandwich . . . I could easily steal the sandwich . . . I should steal and then eat that sandwich . . . ” Unbeknownst to me, prior to this fantasy, I’d been slipped a drug that affects my mind such that I perform my fantasies. As a result, I steal the sandwich I’ve been fantasizing about. In this case, my fantasy is an off-line script that I was press-ganged (by the drug) into enacting. Am I responsible for stealing the sandwich? Perhaps I am. But, this would clearly be an attenuated sort of responsibility and not a paradigmatic instance of responsibility. For, although I am the one who supplied the script by fantasizing, it is not the case that I was engaged in the sort of mental exercise whose function is to authorize my own actions. Thus, in a responsibility-diminishing way, I am estranged from the action, even if it enacts a script I wholeheartedly produced.

The lesson from this case is that authorization by deliberation and intention is a necessary feature of the paradigmatic instance of responsibility. (Recall that this is a paradigm of responsibility, which is absolutely not the same thing as the only kind of responsibility; it is just the pattern we use when constructing other conceptions of responsibility.) But, since authorization partially establishes full responsibility for an action, and since this occurs by way of deliberations authorizing intentions that in turn authorize actions, the authority of deliberations and intentions is a condition of full responsibility. Simply on grounds of parsimony, we ought not posit a different normative relationship between intentions and actions, in addition to the authority relationship, in order to explain why deliberation/intention-authorized actions are the paradigmatic instances of responsibility for action.
7. Deliberation and Reason Compliance

It is widely held that an important feature of human agency is that we can comply with reasons in addition to merely conforming to them.
Complying with certain reasons as opposed to merely conforming with those reasons requires not accidentally acting in response to those reasons.¹² It involves deliberatively grounded guidance of action by the reason. How does this work? Suppose someone intends to go to the store. Suddenly, she realizes she has forgotten why she intends to go to the store. She knows she is going to the store, but the intention to go to the store has disappeared. There is just inertial behavior. The obvious next psychological move is to deliberate about whether she ought to go to the store. On the basis of reflections like this, a natural explanation of how compliance with reasons is possible is to claim that so long as one remembers one’s deliberations and one knows the reason for which one is acting, one thereby complies with that reason.

This approach is problematic, though. For, if the intention moving one to act is not the one authorized by the deliberations one remembers, then one is not complying with the reasons considered in those deliberations. Rather, one is luckily doing something but erroneously associating that behavior with prior deliberations. For example, imagine an addict who deliberates about whether to take heroin. But then her addiction overwhelms her capacity for self-control, triggering in her an overwhelming desire to take heroin. As she is shooting up, she recalls her deliberations. This is not sufficient to manifest compliance with any reason she considered in those deliberations. For, since her behavior was caused by a short-circuit to her deliberative process, she did not comply with any reason at all. She at best conformed with a reason, in the same way that a rock in free fall in a vacuum conforms with Newton’s equation describing gravitational attraction.

Consider an intention to pull the trigger of a gun. So described, this is an incomplete account of the intention. For, my intention is to pull the trigger for the sake of demonstrating how to fire a gun. This intention is different from the troublemaker’s intention to pull the trigger for the sake of making a startling noise. These intentions are similar in that they are intentions to pull the trigger, but they are also very different. For, they involve compliance with different reasons. My reason—to demonstrate how to fire a gun—makes my intention to pull the trigger quite distinct from the troublemaker’s intention to pull the trigger. As I will discuss in more detail presently, the reasons for the intention were considered in the deliberations, and in being the substantive part of the process that authorized the intention, they are exactly what allow us to say that my intention to pull the trigger of the gun is different from the troublemaker’s intention to pull the trigger of the gun.

Or consider the many reasons I have to go for a run along the canal this afternoon: it will afford me a beautiful view, it will allow me to clear my mind after writing all day, it will relax me, it will help me get fit, and so on. Suppose that I decide to go for a run and the reason for which I do so is that it will help me get fit. In particular, I think about how I am getting a little soft around the middle, I reflect on how I would like that process to stop or to be reversed, and finally I think about how regularly running can slow and eventually reverse that process. On the basis of this deliberation, I form the intention to go for a run in order to get fit. I am, of course, aware of all the other considerations supporting going for a run. But, deliberations regarding getting fit are what authorized my intention to run along the canal. The intention must somehow reflect this if there is to be compliance with the relevant reason.¹³

As suggested, we can explain how an intention reflects the reason for which one acts by appeal to deliberations and intentions functioning as the authoritative “practical voice” of the agent. If intentions are understood as intentions to ϕ on the basis of deliberations D, where the “on the basis of” relation is not a causal one but is instead one of authorization, an intention to ϕ is actually an intention to ϕ as authorized by deliberations D. This is represented colloquially by reference to what it is for the sake of which one is doing something. Intending to go for a run for the sake of getting fit is just intending to go for a run as authorized by deliberations about how I am committed to getting fit and about how going for a run will get me fit.¹⁴ This is, as suggested above, how reasons “get into” intentions. It explains how agents can comply with, as opposed to merely conform to, reasons. In this way, an appeal to deliberations and intentions functioning authoritatively not only helps to explicate how it is that one can be responsible for an action, it also explicates how one can act in compliance with reasons.

¹² For more, see John Gardner and Timothy Macklem, “Reasons,” in The Oxford Handbook of Jurisprudence and Philosophy of Law, ed. Jules Coleman and Scott Shapiro (New York: Oxford University Press, 2002), pp. 440–75. Gardner and Macklem talk of “deliberately” acting for a reason and “accidentally” acting for a reason instead of conforming and complying.

¹³ This does not mean that one must see the intention itself as the reason for one’s action. Rather, it is in virtue of having the intention that one acts for a reason, and therefore in virtue of the intention that one complies with a reason.

¹⁴ An upshot of this is that any time there is a bare intention to act, as opposed to an intention as authorized by deliberations, that bare intention is not a full-throated expression of one’s agential capacities.
8. Deliberation and Self-knowledge

There is a significant philosophical tradition according to which a hallmark of action is that “when someone is acting intentionally, there must be something he is doing intentionally, not merely trying to do, in the belief that he is doing it.”¹⁵ David Velleman argues that this knowledge about what one is doing when one acts is in fact a constitutive aim of action. To some degree, we know ourselves through our actions.¹⁶

¹⁵ Setiya, Reasons Without Rationalism, p. 26.

¹⁶ See, generally, Velleman, Practical Reflection; Velleman, The Possibility of Practical Reason; and J. David Velleman, How We Get Along (New York: Cambridge University Press, 2009). Setiya says that “self-knowledge can be described as the constitutive aim of action: it is a goal towards which intentional action is always and essentially directed” (Setiya, Reasons Without Rationalism, p. 108). See also Harry Frankfurt’s work, according to which behavior that flows from attitudes with which the agent “wholeheartedly identifies” is the paradigmatic form of action. Identification with an attitude is a complex state of affairs, but it involves something like self-understanding. One does not discover that one is acting on the basis of attitudes with which one also happens to wholeheartedly identify. Rather, one acts with the knowledge that such-and-such is in fact that for the sake of which one is acting. See Harry Frankfurt, “Identification and Wholeheartedness,” in The Importance of What We Care About (Cambridge: Cambridge University Press, 1988), pp. 159–76, and “Autonomy, Necessity, Love,” in Necessity, Volition and Love (Cambridge: Cambridge University Press, 1999), pp. 129–41. See also G. E. M. Anscombe, Intention, 2nd Edition (Ithaca, NY: Cornell University Press, 1963) and Sarah K. Paul, “How We Know What We’re Doing,” Philosophers Imprint 11/9 (2009): 1–24.
A crucial element of action that feeds into one’s knowledge about what one is doing is the reason for which one acts. For example, I understand my running along the canal in terms of a particular reason for running along the canal, not in terms of just any reason for running along the canal. When I am running along the canal for the sake of getting fit, what I believe that I am doing is something rather different from what I believe I am doing when I am running along the canal for the sake of clearing my mind after writing all day. Our knowledge of our own actions is partially facilitated by our capacities to comply with reasons. But, if Section 7’s conclusion was correct, one’s knowledge of one’s own actions is partially facilitated by one’s deliberations and intentions functioning as authoritative. So, deliberations and intentions functioning as authoritative is crucial to the construction of one’s knowledge of what one is doing when one is acting.
9. Midpoint Summary

Responsibility for an action, acting for a reason, and knowing what one is doing are central to paradigmatic instances of action. Assuming that agency is ever realized in its paradigmatic form, it has these characteristics. And even if agency is never realized in this paradigmatic form (since we are all imperfect), this is still the paradigm of agency against which our more typical imperfection is understood. This is why we should interpret the three interlinked phenomena just discussed in a somewhat parsimonious fashion by interpreting them in terms of deliberations and intentions functioning as authoritative. One upshot of this is that it aligns the claim that deliberations and intentions function as authoritative with other, less controversial claims about what it is to be an agent, thereby domesticating what some may think are outlandish propositions—that a defining function of deliberations is to authorize intentions and actions, and a defining function of intentions is to authorize actions—by showing the very useful roles these propositions can play in our accounts of three familiar aspects of agency. That is, insofar as one is comfortable with saying that people are responsible for their actions, that people can comply with reasons, and that one aim of action is self-knowledge, one should also be comfortable with saying that the function of deliberations and intentions is to authorize actions.
10. Intentions Function as Reasons

If deliberations authorize the formation of an intention, this authorization is more than mere permission. Where A authorizes B with respect to O, A has given B some (or all) of A’s own authority with respect to O. So, when deliberations authorize intentions to ϕ, they give intentions some (or all) of their authority with respect to ϕ-ing. An important part of the authority deliberations and intentions have is the authority to end deliberations about the practical question under consideration. In particular, deliberations must have the authority to end themselves, and the intentions they authorize must have the authority to keep deliberations closed. This is a crucial function of intentions because it allows deliberations to settle practical questions.

This is one way to interpret one of Michael Bratman’s central claims about intentions.¹⁷ According to Bratman, existing intentions play a structuring role in our deliberations about how we are going to live. When one settles on a plan, that plan becomes, as Bratman puts it, a “fixed point” for future deliberations. But, what fixes that point? It is not a merely causal matter, where out of physical necessity one reasons on the assumption that the intended action will occur. Rather, it is a normative matter. Intentions function as requirements not to re-open deliberations about how to live.¹⁸

Putting aside the appeal to Bratman, just consider the issue on its own for a moment. Deliberations, if they are to produce actions, must come to an end at some point. They cannot just end willy-nilly, though. That would be like treating a legislature being blown up while considering some bill as no different from the legislature validly passing that bill, transforming it into law (I am assuming away the peculiar practice of the presidential veto).¹⁹ Just as a legislature has the authority to declare its deliberations over by passing a bill, the capacity for practical deliberation must have the authority to “declare” its deliberations over by forming an intention. The passage of the bill is the authorization of the bill into a law, and the completion of deliberations with a decision is the authorization of that decision as an intention. Upon reflection, this is quite clear. Deliberations often have merely causal endings—one is interrupted by some task mid-deliberation, one is startled by something, one simply runs out of steam before making the decision. But, these endings do not authorize anything. We must pick up where we left off if whatever behavior flows from the deliberation is to be a full-throated instance of agency.

On the other hand, intentions do not rule out dispreferring the intended option relative to another option. One can also evaluate an option as morally best without also intending that option.²⁰ One might be criticizable for not intending to do what one judges one is morally required to do. But, this criticism is not grounded in any account of the nature of intentions. It is instead grounded in the claim that when one judges that the best thing to do is to ϕ, one will form an intention to ϕ, and presumably will abandon intentions to act in ways incompatible with ϕ-ing. That is not a thesis about the nature of deliberations and intentions. It is a thesis about the nature of moral judgment.

Additionally, an intention is not just the latest all-things-considered judgment about what it would be best to do. For, if an intention were no more than the latest all-things-considered judgment about what it would be best to do, not only would intentions be indistinguishable from evaluations, but akratic intentions would be impossible (since an akratic intention is an intention to act against what one judges to be the best course of action). An intention is also not merely a summary of the latest deliberations. Just as one can summarize one’s evidence for some proposition without making a judgment about whether that proposition is true, one can summarize the case for ϕ-ing without thereby committing to ϕ-ing. Summaries of deliberations have on their own no immediate significance. If one ϕ’s on the basis of a summary, one does so arationally. For, there would be nothing internal to one’s reasoning that indicated that these were the reasons that settled the question of how to live.

Finally, an intention does not forever block re-opening deliberation. One remains open to at least some considerations in favor of suspending the intention and re-opening deliberations.²¹ Again, consider the analogy with law. A state may pass a law requiring people to drive at a certain speed. This does not mean that the state will never review this law and perhaps amend or repeal it. But, the state has to reflect on whether to amend the law before considering how to amend the law. The same point goes for intentions: we have to reflect on whether to change course in our lives before we can decide how to change course. So, one function of an intention to ϕ is to require that the agent, before re-opening deliberations about whether to ϕ, consider whether to do so.

¹⁷ Bratman writes: “unless and until I do give up or reconsider my prior intention, its role in my means–end reasoning will be to set an end for that reasoning and not just to provide one reason among many . . . to see my intention as providing just one reason among many is to fail to recognize the peremptoriness of reasoning-centered commitment” (Bratman, Intentions, Plans, and Practical Reasoning, p. 24, footnote removed).

¹⁸ The nature of this requirement is a matter of great debate. Obviously I am arguing for the view that intentions are reasons, not that they trigger, for example, mere Broomean normative requirements. But, so far, I’ve only been attempting to show that intentions function as reasons, not that they are reasons.

¹⁹ This does not mean that the law cannot be repealed or amended. It just means that for now the question is settled and for now all future rule making proceeds on the assumption that this law is a fixed element of the overall body of law.

²⁰ This is not a claim about reasons-internalism or motivation-internalism. For taking oneself to have a reason to do something, or being motivated to do something, is not the same thing as being committed to doing that thing. One can see oneself as having a reason to give to charity, one can thereby be motivated to give to charity, and yet still lack the kind of commitment to give to charity that generates the action of giving to charity. This could be weakness of will.

²¹ In “Rational Resolve,” Richard Holton discusses this point in the context of dispositions not to reconsider. See also Richard Holton, Willing, Wanting, Waiting (Oxford: Oxford University Press, 2009), chs. 1, 6, and 7.
The central message of this section, then, is that for deliberations to have the function of authorizing actions, they must have the function of (i) requiring their own completion by way of the formation of an intention that in turn has the function of (ii) requiring the action, and that intention must also have the function of (iii) requiring that deliberations about the intended action are not re-opened. I assume at this stage that if something functions as a requirement to do something, then it functions as a reason to do that thing.

Now, for the sake of considering an objection, let us put aside cumbersome talk of function. Here is the objection: I have failed to appreciate wide-scope/narrow-scope distinctions. I am arguing for a narrow-scope reading—an intention to ϕ is a reason not to re-open deliberations and a reason to ϕ. But, the objection goes, all that I have shown is that in virtue of intending to ϕ, one has a reason not (to re-open deliberations about whether to ϕ and to intend to ϕ). But, this is wrong. For, I first argued that deliberations authorize their intentions. Deliberations producing intentions are reasons for having those intentions. In virtue of deliberating about whether to ϕ and then, on the basis of those deliberations, forming the intention to ϕ, one has a reason to intend to ϕ. This has nothing to do with the question of whether I should re-open deliberations. Furthermore, that I have deliberated about whether to ϕ and then formed the intention to ϕ cannot be changed. These events are in the past. And they are the reasons for me to intend to ϕ. The die has been cast: one has a reason to intend to ϕ. So, one cannot discharge the first wide-scope requirement by giving up the intention to ϕ without falling afoul of this reason that one intend to ϕ.²²

Returning now to function talk, we can state the conclusion of this part of the chapter: deliberations about whether to ϕ yielding an intention to ϕ together function as a reason to ϕ, and as a reason not to re-open deliberations about whether to ϕ.
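To make the scope contrast explicit, here is a schematic rendering; the notation is my own gloss on the objection and reply just rehearsed, not anything in the text itself. Read R(x) as “one has a reason to x,” I(ϕ) as “one intends to ϕ,” and D as “one re-opens deliberations about whether to ϕ”:

\[ \text{Narrow scope:}\quad I(\phi) \;\Rightarrow\; R(\neg D) \wedge R(\phi) \]
\[ \text{Wide scope:}\quad I(\phi) \;\Rightarrow\; R\bigl(\neg(D \wedge I(\phi))\bigr) \]

On the wide-scope reading one could satisfy the requirement simply by abandoning I(ϕ); the reply above is that the prior deliberations independently yield R(I(ϕ)), so that escape route flouts a standing reason.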
11. The Authority of Commands

At several points in this chapter, I’ve pushed an analogy between intentions and legislation (and so between practical deliberations and legislative deliberations). At this stage, I want to tighten that analogy. There is a classic distinction in the philosophy of law, pithily presented by Thomas Hobbes: “Law in generall, is not Counsell, but Command . . . addressed to one formerly obliged to obey him.”²³ Hobbes’s aims here are to distinguish two ways in which agents can influence the way another agent lives her life and then to analyze law in terms of one of them.

²² A similar view in epistemology was recently defended in “The Conflict of Evidence and Coherence,” forthcoming in Philosophy and Phenomenological Research.

²³ Thomas Hobbes, Leviathan (1651), C. B. Macpherson, ed. (Baltimore: Penguin Books, 1968), ch. 26, p. 312. See also Thomas Hobbes, On the Citizen (De Cive) (1642), ed. and trans. R. Tuck and M. Silverthorne (Cambridge: Cambridge University Press, 1998), ch. 14, para. 1.
In particular, Hobbes is distinguishing counsel, or advice, from (legitimate) commands, or (legitimate) orders. Law, in virtue of its aiming to be the final word on how the subject ought to live, is a kind of command, and not a form of counsel. For, counsel simply aims to inform someone of what the best course of action would be. Even when one receives counsel that is extremely thoughtful and well informed, the counsel does not have any special practical claim to fix one’s course of action. One must, in order to act on the basis of the counsel, further reflect on its wisdom, and then on the basis of those deliberations, make a decision about how to live. Counsel therefore does not end deliberation—it merely invites further deliberation and may or may not be followed by decision, much less a decision in line with the recommendations of the counsel.

Commands, on the other hand, operate differently (I am concerned only with valid commands now).²⁴ Commands are rational interventions into a deliberative process. They aim to rationally settle what course of action the subject ought to take. So, commands do not purport to offer just any old reason in favor of some course of action. For, if that is all they did then commands would simply be counsel. What commands do is end deliberation, where this ending of further deliberation is not just a short-circuiting of further deliberation as might occur when one’s deliberations about how to deal with some threat are short-circuited by fear of that threat. Commands give subjects both a reason to do as commanded and a reason not to deliberate further about how to live.²⁵

The reasons given by commands are limited and defeasible. My sergeant’s command only has scope over some parts of my life, namely the parts of my life over which my sergeant has authority. My sergeant’s command that I drop and give her twenty gives me a reason to do twenty push-ups, but my sergeant’s command that I divorce my spouse does not give me a reason to divorce my spouse. For, my sergeant has no authority over that part of my life. Commands can also be defeated by other considerations. For example, suppose my sergeant gives me orders to fire my weapon at a potential threat. This gives me reason to fire my weapon. But if I am certain that the target is an innocent child, this reason is presumably overridden by other considerations (or outweighed, or whatever metaphor one wants to use).

Commands also do not rule out continuing evaluation of the merits of ϕ-ing or fantasies about not-ϕ-ing, and they do not require changing considered preferences about ϕ-ing versus alternatives. One can be commanded to ϕ, thereby have reason to close deliberations about whether to ϕ and reason to ϕ, but still rationally reflect on the reasons that ground ϕ-ing. But, a valid command to ϕ is sufficient to give one reason to ϕ. All the while, one may still reflect on why it is good to ϕ, consider what the superior alternatives to ϕ-ing might be, and fantasize about doing something other than ϕ-ing. Another very important feature of commands is that they do not change the nature of ϕ-ing itself. Commands do change ϕ-ing’s public relation to people’s attitudes, and it is this, among other things, that helps to make commands what they are.²⁶

So, authoritative commands generate reasons to ϕ that are not grounded in the merits of ϕ-ing. Legal theorists refer to the reasons generated by commands as content-independent reasons.²⁷ Content-independent reasons appear to be widespread. People can rationally agree to submit a dispute to a common judge and thereby have reason to do whatever it is that the judge orders them to do with regard to that dispute. The authority of the reason to act as the judge orders is grounded in the subjects’ agreement to submit to the judge’s decision, not in the wisdom of doing whatever it is that the judge orders. This same kind of state of affairs can be realized within any scheme in which a practical authority would be reasonable (e.g., an orchestra in need of a conductor, a team in need of a coach or manager, a boat in need of a captain, etc.). Since there are often very good reasons for setting up practical authorities (such as that ignorance, partiality, and shortsightedness often make it prudent to take oneself out of the decision-making procedure), we would expect content-independent reasons to be widespread.

²⁴ In what follows, I summarize the canonical account of commands, as found in H. L. A. Hart, “Commands and Authoritative Legal Reasons,” in Essays on Bentham (Oxford: Oxford University Press, 1982). A more complex and somewhat more tendentious account of commands can also be found in Joseph Raz, Practical Reason and Norms (London: Hutchinson & Co, 1975) and Joseph Raz, The Authority of Law (Oxford: Oxford University Press, 1979). Nothing I say here commits me to Raz’s “service conception” of authority or to his normal justification thesis. I am, though, committed to the most general contours of his account of the relationship between authoritative directives and second-order reasons. This is the notion that authoritative directives have two different functions in practical reasoning: they function as “first-order” or ordinary reasons to act as commanded, and they function as “second-order” reasons to disregard at least some of the reasons speaking for and against performing the commanded action. An important feature of Raz’s view is that this phenomenon is all over the place. For recent discussion, see Joseph Raz, Between Authority and Interpretation (Oxford: Oxford University Press, 2009), pp. 141ff.

²⁵ For more on the “command model” of the law, see Gerald J. Postema, “Law as Command: The Model of Command in Modern Jurisprudence,” Philosophical Issues 11 (2001): 470–501.

²⁶ For more, see David Enoch, “Authority and Reason Giving,” Philosophy and Phenomenological Research 2/89 (2014): 296–332.

²⁷ There is a vast literature on the relationship between law and content-independent reasons. As noted above, the modern headwaters of this literature is H. L. A. Hart’s work. See especially, in Essays on Bentham, “Commands and Authoritative Legal Reasons” and “Legal and Moral Obligation.” For a useful overview, see Scott Shapiro, “Authority,” in The Oxford Handbook of Jurisprudence and Philosophy of Law, ed. Jules Coleman and Scott Shapiro (New York: Oxford University Press, 2002), pp. 382–439. See also George Klosko, “Are Political Obligations Content Independent?” Political Theory 4/39 (2011): 498–523. Klosko explicitly discusses several other instances of content-independent reasons, with special prominence given to promissory reasons as content independent.
12. Intentions Function as Commands

So far, I’ve argued on functional grounds that our intentions inherit the authority of our deliberations.
I then argued that one aspect of this authority is that intentions are reasons not to re-open deliberations about how to live. I argued that this is also a feature of the authority of commands. I then pointed out that commands also function as a content-independent reason to act. Since intentions share the backward-facing function of commands, why deny that they share the forward-facing function of commands? After all, deliberations that yield commands aim at much the same thing as deliberations that yield intentions, namely, settled plans about how to live. So, by analogy, I conclude that intentions function as commands one gives oneself on the basis of one’s deliberations. Or, to put a Kantian spin on it: intentions are forms of self-legislation. In virtue of this, it follows that intentions function as reasons to act as intended. We now have a complete picture of the normative function of deliberations and intentions: together, they function as reasons not to re-open deliberations, and as reasons to do as intended.
13. Functioning as Commands vs. Actually Being Commands

So, are our deliberations really authoritative or do they merely function as authoritative? This question can be answered by determining whether agents have the authority to govern themselves through their deliberations. If they do not, then deliberations and the intentions they yield function as reasons, but they aren’t really reasons. If agents do have the authority to govern themselves through their deliberations, then the deliberations and the intentions they yield not only function as reasons, they are reasons.
14. Intentions Are Reasons

The burden is not on me to establish that agents have the authority to govern themselves. The presupposition in favor of authority over oneself is as good a starting point as any in practical philosophy. It is a lot more perverse to deny that we ought to be self-governing than to claim that intentions are reasons. Nonetheless, it is worthwhile to reflect briefly on some reasons why we ought to have the authority to govern ourselves.

First, we ought not be thoroughly deferential to others.²⁸ Even if we regularly invite and consider others’ counsel about how to live, we ought to deliberate about whether to follow this counsel. Even when we abandon authority over ourselves, we should deliberate about whether to do so.

²⁸ For more, see Thomas E. Hill, Jr., “Servility and Self-Respect,” in Autonomy and Self-Respect (Cambridge: Cambridge University Press, 1991), pp. 4–18. For a fine discussion of this issue regarding moral judgment, see Julia Driver, “Autonomy and the Asymmetry Problem for Moral Expertise,” Philosophical Studies 3/128 (2006): 619–44.
And, when we do abandon authority over ourselves, we always reserve some authority to reconsider that decision and to take back control of our lives. This chimes with both the prima facie value of individual liberty and the supposed authority of reason. Both of these partially constitute the Enlightenment vision of self-creation through one’s actions. On this view, people ought to develop their capacities to guide their lives via healthy deliberation (including developing habitual intentions as a result of healthy deliberations), at least partially because that is what it is to have a life of one’s own. Furthermore, if we assume that one has both a good (but not perfect or even the best possible) grasp of the facts and a well-functioning capacity for deliberation and decision, then basic egalitarian commitments about the equal liberty of all suggest that it is prima facie best for each to exercise her own healthy, mature agency.

One cannot object to this kind of authority simply by pointing to exceptions to its value. Circumstances requiring total submission to others’ judgments about how to live do occur, but they are quite unusual. They typically involve stark and unfortunate limitations on an agent’s capacity to decide for herself how to live. For example, total lack of information and terror immediately after being diagnosed with a frightening illness can be grounds for ceding authority over one’s life to a loved one and a doctor. But, this exception does not support a general principle militating against such authority. Perhaps those who completely lack certain capacities ought to defer substantially more to others in order to determine how to live.²⁹ But, such cases—young children, the severely mentally disabled, etc.—are exceptions and not grounds for generalization to others in different conditions.

There are also cases, such as emergencies, when it is best to defer to others in order to achieve some valued end. Some might even extend such cases into the political and treat the state as serving this role.³⁰ Thus, one ought to defer to rules and laws when doing so will serve the good. This may be the way to go in certain cases, e.g., cases in which what is good is not controversial and how to achieve that good is a matter of rare or significant technical expertise. But, either when what is valuable is contested or when many non-rival values can be jointly pursued, it is not obvious that deference is required. At the very least, a case must be made for deference in place of authority over self.

So, since we ought to have authority over ourselves, and since this is partially constituted by our intentions having authority—by their functioning as self-legislation, which requires their functioning as content-independent reasons both not to re-open deliberations and to act as intended—and since our intentions in fact perform this function, it follows that our intentions are reasons.

²⁹ But see Jaworska, “Respecting the Margins of Agency: Alzheimer’s Patients and the Capacity to Value” and Jaworska, “Caring and Full Moral Standing.” But see also Friedman, Autonomy, Gender, Politics: “If what someone adaptively prefers and chooses is behavior so servile that she ceases to act according to her own deeper concerns in any sense and becomes slavishly obedient to others instead, or becomes subject to their coercive interference with whatever subsequent choices she tries to make, then she loses autonomy in a content-neutral sense” (p. 25).

³⁰ For more, see David Estlund, Democratic Authority: A Philosophical Framework (Princeton: Princeton University Press, 2008).
15. Evil Intentions

Does this mean that intentions to act wrongly can be content-independent reasons to act wrongly? If so, there must be something wrong with this view. But, there is nothing particularly unique about this sort of problem. For, all sources of content-independent reasons for action, such as promises and the law, face this challenge. The natural way to respond to the problem of evil promises or evil commands is to carve out exceptions to the authority of such phenomena: evil promises may be promises but they are not reasons because they are evil, evil laws may be laws but they are not reasons because they are evil, and so on. If this is the start of a philosophically acceptable response to this problem when it arises in relation to promissory obligation, legal obligation, and the like, then it should be an acceptable start in the present context.

Furthermore, promises and laws do not have to be evil to cease to have application as reasons for action. As already noted, sometimes in emergencies, one ought to cede authority over self to others. This point generalizes: there can be emergency-based suspensions of any normal source of authority. If there is an emergency and obeying the person most capable of properly organizing those affected would contravene a promise or the law, there is a strong moral case for treating those promises or laws as being overridden.

This same line of argument can be applied to intentions as reasons. We might argue that our intentions are no more undefeatable or non-overridable reasons to ϕ than are promises to ϕ and laws requiring ϕ-ing. If standing requirements of morality can trump promises and laws, then they can do the same to intentions-based reasons. So, the mystery of morally objectionable intentions being reasons to do wrong is no different from the mysteries of morally objectionable laws and morally objectionable promises being reasons to do wrong. The problems faced by this account of intentions are the same ones faced by any account of content-independent reasons for action.

In fact, this account of intentions-as-reasons may be stronger in the face of this objection than many accounts of promises-as-reasons or laws-as-reasons. For, the authority of intentions rests on the value of authority over self. If we ought not be authoritative over ourselves when it comes to doing evil, then evil intentions ought not to function as reasons. The person who is so evil as to deliberate about how to be evil and then to decide to act evilly on the basis of those deliberations has, as John Locke puts it in the Second Treatise of Government, “renounced reason, the common rule and measure [of] mankind”
and so is more like “a lion or a tiger, one of those savage beasts with whom men can have no society nor security.”³¹ This is why evil people are to be managed like dangerous wildlife. Furthermore, since this reason to manage evil people is agent-neutral and not agent-relative, evil people themselves have a reason not to treat themselves as authorities over themselves.³² Finally, unlike cases such as promises and rules, in which suspension of the requirement to act is not subject to the authority of the promisor or the rule-bound subject,³³ each agent has authority over herself and so has the authority to abandon her own intentions.

Earlier we asked when it would be appropriate to re-open deliberations about how to live. Here is one answer: one ought both to ask whether to re-open deliberations, and actually to re-open deliberations, when one’s intentions are evil (and certainly when one knows one’s intentions are evil!).³⁴ This point applies to changes in circumstances as well. Such changes can require abandoning intentions. For example, one might intend to go to the gym in the evening. But, as the evening approaches, one starts feeling sick. If these circumstances support re-opening the question about whether one ought to go to the gym, then one has strong reason to abandon one’s intention to go to the gym and to deliberate, once again, about whether to go to the gym. But, if circumstances don’t change and one simply abandons one’s intention out of laziness or distraction, then one is criticizable for having certain character defects: a lack of willpower or a lack of resolve, and so on.³⁵ At least part of the explanation for why these are defects is that they involve the agent failing to do what she has reason to do in virtue of her intention to do it.
16. Defective Deliberation

It seems, then, that one important ground for denying that we ought to have authority over ourselves is that the deliberations from which the authority flows are somehow defective. This is unsurprising. If our capacity for deliberation about how to live is the headwaters of our agential authority, then if these waters are corrupted, so too is the capacity for agency. In these conditions, we ought not to be authorities over ourselves.

³¹ Locke, Second Treatise, §11.

³² For more on this topic, see the classic essay Gary Watson, “Responsibility and the Limits of Evil: Variations on a Strawsonian Theme,” in Fischer and Ravizza, Perspectives on Moral Responsibility, pp. 119–48.

³³ Except when the rule-bound agent is also a special kind of rule-applying agent, such as a legislator or judge in a common law state.

³⁴ At this stage, my main differences with Bratman’s views in “Intentions, Practical Rationality and Self-Governance” are at their starkest. Bratman argued that non-modifiable intentions to ϕ, even evil intentions to ϕ, that are essential to one’s self-constitution are reasons to act on the intention to ϕ. In contrast, I argue that only non-evil intentions to ϕ that are the product of sound deliberation, regardless of whether those intentions are modifiable (and in fact especially in the case of modifiable ones, since those are the ones that are the best expressions of our capacity for deliberation), are reasons to ϕ. I am also able, unlike Bratman, to locate familiar partners in guilt for exceptions to the rule that intentions are reasons.

³⁵ For more, see Richard Holton, “Rational Resolve.”
It is not easy to spell out the standards of non-defective deliberation.³⁶ What makes insane or corrupted deliberations insane or corrupted?³⁷ What counts as an irrational connection between deliberations and intentions? Small logical errors are probably not sufficient to render us insane or deeply irrational. This is especially the case since deliberations typically are enthymematic, and so are sensible only on the basis of certain presuppositions by the agent. If some of those presuppositions are false, that doesn’t thereby render the deliberations insane, corrupt, or utterly irrational. This is so even if one’s decisions are not the best possible ones. Even being in the grip of malformed preferences or some false beliefs doesn’t make the agent’s loss of authority over herself acceptable.³⁸ Unless those preferences and/or beliefs are especially troublesome (and perhaps not even then), it is almost always better for the agent to be in control of her own life. So, while meeting at least some requirements of reasoning may be a necessary condition for the value of self-governance, those requirements cannot be so stringent that almost anyone subject to some form of false consciousness or mistaken patterns of reasoning ought to be a slave to others’ commands or to socially enforced patterns of activity. Perfection in deliberation and decision is impossible to achieve, and so cannot be a necessary condition for self-governance’s controlling value.

So, we have two ends of a spectrum of deliberative health: on the least healthy end, we have a stream of thought composed of completely disconnected propositions yielding an intention to do something unrelated to that stream of thought, and on the healthy end we have a long train of reasoning involving true propositions, whose form carefully respects whatever norms of rationality there may be. Where on this spectrum deliberation becomes so corrupted as to no longer count as healthy is as difficult a question as the question of where on the spectrum of corporeal constitution a body becomes so diseased as to no longer count as healthy. There is not space here to resolve this issue. All we can conclude at this stage is that it is better to be than not to be an agent who deliberates in a more or less intelligible fashion about how to live and ends those deliberations with intentions to act, where those intentions function as reasons not to re-open deliberations and as reasons to act as intended.³⁹

³⁶ That is the project of some of the chapters in this volume.

³⁷ See, e.g., Richard Holton, “Intention and Weakness of Will,” Journal of Philosophy 96 (1999): 241–62. For more on the problem of what makes some reasoning good and other reasoning bad (usually in the epistemic context), see, e.g., Paul Boghossian, “Blind Reasoning,” Proceedings of the Aristotelian Society (Supplementary Volume) 77 (2003): 225–48; David Enoch and Joshua Schechter, “How Are Basic Belief-forming Methods Justified?” Philosophy and Phenomenological Research 76 (2008): 547–79; Ralph Wedgwood, “Primitively Rational Belief-forming Processes,” in Reasons for Belief, ed. Andrew Reisner and Asbjørn Steglich-Petersen (Cambridge: Cambridge University Press, 2011), pp. 180–200; and Sharon Berry, “Default Reasonableness and the Mathoids,” Synthese 190 (2013): 3695–713.

³⁸ Marilyn Friedman, in Autonomy, Gender, Politics (New York: Oxford University Press, 2003), writes: “Even adaptively deformed preferences can be the basis of autonomous behavior if they represent what someone reaffirms as deeply important to her upon reflective consideration and she is able to act effectively on those concerns” (p. 25).

³⁹ We can still gain some insight into healthy agency by appreciating a few of its limits. For example, deliberating under the presupposition that one is trapped in one’s life—that there are no alternatives to how one is living or has lived—goes some distance towards loss of the capacity for self-governance. In whichever domains this depressing view of life applies, one lacks a healthy capacity for self-governance. In these domains one cedes authority over one’s life to facts about how one currently lives and has recently lived. Past patterns—facts about what one has done recently—unjustifiably become normative. But, that is simply to see authority over one’s life as residing somewhere outside oneself.
17. Conclusion

A constitutive feature of authority over oneself is one’s deliberations and intentions functioning as content-independent reasons not to re-open deliberations and as content-independent reasons to do as intended. Since we in fact have authority over ourselves, our deliberations and the intentions they produce are content-independent reasons not to re-open deliberations and content-independent reasons to do as intended. If I am right about all this, then the rejection of bootstrapping requires the rejection of the value of a person having authority over herself.
Index

A priori / a posteriori 103–7 Acceptance 64–5, 97, 111, 187 Acquaintance as a source of justification 161–2 Action acting for a reason 256 authorship over 250–1, 254 responsibility for 254, 259 reasoning as an 6, 33, 38, 44, 113, 148 Aesthetic judgment 30–1 Agency 5–7, 91, 248 and responsibility 93–9, 119–20 constitution of 116–17 and practical reasoning 74 over reasoning 91, 113 Aim of reasoning 2–3 Akrasia 79, 84, 261n. 20 and intention 250 Analyticity 167 Antrobus, Magdalena 9, 233–4 Anxiety 240 Argument 101n. 1, 192 Assertion 57–9 Association 6–7, 26, 28, 110, 116 Attention 27–8, 47 Attitude-Dependence Thesis 80 Audi, Robert 74–6 Authority of deliberation 9–10, 248 of intention 248 of methods of reasoning 228–9 of practical reason 71–2, 75–6 of reasoning 5–6, 229 Awareness 41, 54, 92, 94, 100, 196–7 non-conceptual 21 Balcerak Jackson, Brendan 8, 178 Balcerak Jackson, Magdalena 8, 178 Ballantyne, Nathan 213n. 16, 18, 214n. 20, 218n. 29, 224n. 42 Basic rules of inference 7–8, 153, 186, 200 see also Rules of inference evolutionary accounts of 161–2 importance of 168–73 justification of 152 pragmatic indispensability of 7–8, 153, 168–9, 169n. 45, 170–3 psychological unavoidability of 159–61 usefulness of 170–3 versus non-basic rules of inference 147
Basing permission 142 Bayesianism 45, 209n. 6 objective 209 subjective 208–10 Belief biased 9, 232 correctness of 145–6 dispositional 102 graded versus outright 44 implicit versus explicit 36–7, 102 nature of 36–7 reasoning 32 Bengson, John 122 Berry, Sharon 155n. 7 Biased reasoning 236 Boghossian, Paul 4, 6–7, 16, 27n. 13, 34, 43, 47, 49, 59–60, 94n. 2, 101, 154n. 5, 169n. 43 Bonjour, Laurence 160n. 21, 161n. 23 Bootstrapping intentions and reasons for action 9–10, 77–9, 248–9, 270 Bortolotti, Lisa 9, 233–5 Bratman, Michael 77–8, 248n. 2, 260, 268n. 34 Broome, John 5, 6n. 2, 25, 32, 47, 49, 56–7, 59–61, 59n. 9, 72–4, 77–8, 96n. 6, 141–5 Burge, Tyler 105–6 Carroll, Lewis 2–3 regress 2–3, 23, 37–8, 149 Chains of reasoning 131, 145 Chalmers, David 16n. 2, 218n. 28 Chang, Ruth 77 Change blindness 26, 29 Chomsky, Noam 190–1 Christensen, David 224n. 41–2 Chudnoff, Elijah 23n. 9, 160n. 19, 195n. 23 Clarke, Roger 50–1 Clutter Avoidance Principle 156 Coffman, E.J. 224n. 42 Cognitive capacities 8, 178 Cognitive penetration 196 Cohen, Stewart 213, 222–3, 225 Commands 262 Concepts 41–2 as constituted by rules 164–9, 169n. 43 harmony of 166 possession of 164–6 required for reasoning 96 structure of 164–6 Conceptual-role semantics 163n. 29
Conditionals 168–9, 169n. 43 Conee, Earl 214 Consciousness 6–7, 38, 47, 93 Constructivism 88–9 contractualism 88–9 Contents linguistically structured 56 marked 38–40, 43, 72–3 of attitudes in reasoning 32–3 of perception 196–7, 199 probabilistic 5 propositional 15–16, 38–9, 180 sensitivity to 112, 116 Context of discovery versus context of justification 101 Control over reasoning 91, 100 Correctness of reasoning 7, 129 preservation 146–7 Credence 44, 85–6, 136n. 17 and credal reasoning 207–9, 223–4 credal reductivism 52–3 Cullity, Garrett 77 Davidson, Donald 162 Decision making 80–6 theory 63–5 to act 75–6 Deductive inference 2–3, 131, 131n. 3 epistemology of 2–3 modus ponens 7–8, 35, 95, 130–1, 142–7, 149, 152, 154–6, 178, 185, 188 rules of 7–8 single-premise deduction 134 versus inductive inference 114 Degrees of belief 5, 44, 206–7 and beliefs in probabilistic content 48–9, 55–6 confidence 156–7 Deliberation 9–10 defective 268 functional role of 9–10, 250–1, 254 Dennett, Daniel 51–2 Depression 232 Disagreement 8–9, 205 Dispositions 5, 35–7, 39, 95n. 5, 96n. 6, 192n. 18 Dogramaci, Sinan 55–6, 155n. 7, 161n. 25, 218n. 29 Drayson, Zoe 44n. 1 Dual process view 6–7, 46, 54n. 6, 99n. 8, 109–10 Dual system view see Dual process view Dummett, Michael 165–6 Dynamic permissivism 220n. 33
Endorsing an experience 18 Enkratic reasoning 73–4 Enoch, David 88, 153, 169n. 44, 173n. 48, 229n. 53 Epistemic basing 6–7, 102, 111 Epistemic dependence 15–19 Epistemic evaluability 6, 24, 116 Epistemic innocence 9, 167–8, 234 Epistemic standards see Methods of reasoning Epistemic virtue 9, 239–40 Evans, Jonathan 47, 54n. 6, 62 Evidence 8–9, 24–5, 81–3, 103–4, 132–3, 146, 205 and rationality 205 and truth 215 bootstrapping 227 bypass 25 factivity of 216 higher-order 134, 134n. 13, 135, 137n. 19 misleading 215 misleading higher-order 148n. 43 permissivism 205 problem of forgotten evidence 105 Evidentialism 212–14 Evil 267 Expertise 8, 189–90 Fallibilism 132n. 8 Feldman, Richard 206–14, 224n. 40, 226n. 44 Fermat’s Last Theorem 7–8, 115, 153–6, 168 Frankfurt, Harry 258n. 16 Frankish, Keith 49 Frege, Gottlob 16, 110 Frege-Geach problem 40 Friedman, Marilyn 266n. 29, 269n. 38 Gibbard, Allan 72n. 2 Greco, Daniel 50–1 Grice, H.P. 49 Hale, Bob 169n. 43 Harman, Gilbert 62–3, 74–5, 129n. 2, 131n. 3, 141n. 22, 149n. 45, 156 Hawthorne, John 49–50 Helmholtz 115–17 Heuristics 62 Hicks, D.J. 219n. 32 Hieronymi, Pamela 72, 78–9, 186n. 9 Higher-order evidence see Evidence Hlobil, Ulf 114 Hobbes, Thomas 262–3 Horowitz, Sophie 218 Horwich, Paul 161 Hussain, Nadeem 140–2, 144–5 Hypothetical reasoning see Suppositional Reasoning
Ideal rationality 200 Implicit bias 92–3, 120–1 Impossible inferences 114–15 Inference to the Best Explanation 152 inferential internalism 24–5, 148–9 inferential versus non-inferential basing 29, 107 patterns of 2–3 permissions versus obligations 156–7 Information processing 3–4, 98 Intelligence 30–1 Intention 5, 38, 84 see also Practical Reason arational 250 deliberative 250 functional role of 248 kinds of 250 Introspection 51–2, 107, 118n. 16, 119 and evidence 227n. 47 Intuition 15, 107, 115n. 12, 117–18, 161n. 23, 226n. 44 Jeffrey, Richard 45, 50 Judgment 15n. 1 Justification 7–9, 15, 24, 101, 129, 207–8, 214–15 concept-based accounts of 163 conceptions of 157–8 justificatory arbitrariness 213, 215 on-off versus scalar 133, 136n. 17, 138 of reasoning 148 preservation of 7, 131, 146–8, 153 propositional versus doxastic 129n. 1 propositionalism 103–4 prospective 129 statism 103–4 transmission of 96–7, 169n. 43 Kahneman, Daniel 109 Kant, Immanuel 30 Kelly, Thomas 210n. 9, 224–5, 229 Kiesewetter, Benjamin 88 Knowledge and the value of reasoning 98 concept-based accounts of 163 functional accounts of 161–2 preservation of 157, 159 safety condition 159 tacit 190–4, 199 Kolodny, Niko 87–8, 198n. 29 Kopec, Matthew 8–9, 209 Kornblith, Hilary 112 Kuhn, Thomas 211–12 Language comprehension 8, 180, 190–1, 195, 198n. 30, 199 Lasonen-Aarnio, Maria 133n. 11, 134n. 12, 137n. 19
Lewis, David 208n. 5 Linking belief 5, 6n. 2, 32, 96 Littlejohn, Clayton 103 Locke, John 267–8 MacFarlane, John 7 Memory 56, 105–7 preservative versus substantive 105–7 working see Working memory Mental jogging 2, 25, 28 Methods of reasoning 208, 214, 214n. 19, 218n. 29–30, 222, 224n. 41–2 Millikan, Ruth 161n. 24 Mindreading 183, 189–91 Modeling reasoning 48 Modus ponens see Deductive inference Moore-style paradoxes 114 Moss, Sarah 45, 48, 58, 60 Motivational influences on reasoning 9, 232 Naturalism meta-ethical 88 New evil demon 160n. 21 Non-deductive reasoning inductive and abductive reasoning 91, 146n. 38 rules of 158 Normative pluralism 87 Normative realism 229n. 53 Normative requirements 189, 260n. 18 Normativity of logic 140–1, 147, 161–2, 186 of rationality 88 norms of reasoning 7 Optimism bias 236 epistemic benefits of 239 in reasoning 232 psychological benefits of 238 Ought deliberative versus evaluative 87–8 incommensurability of objective and subjective 87–8 objective versus subjective 87 Ought implies can principle 160 Parfit, Derek 74–6, 83 Passive reasoning 33 Peacocke, Christopher 59–60 Peer disagreement see Disagreement Perception 16–17, 115, 183, 191, 194–5 and belief 99n. 8 categorization 17–18 contents of 180 epistemology of 103–4, 122–3, 199 face recognition 182, 189–92, 194–5
perceptual experience 18, 96–7, 122 perceptual learning 195n. 23 perceptual states as conclusions of inferences 107n. 9, 111 Personal level 5–7 and subpersonal level 3–4, 6, 44n. 1, 91, 115–17, 155n. 7 reasoning 15, 17, 22, 113 Pettigrew, Richard 45, 50 Pettit, Philip 113n. 10 Phenomenology 15, 117–19 as source of justification 160–1 of reasoning 160 Podgorski, Abelard 220n. 33 Practical questions 78–80 Practical reason 38, 61–2, 71, 139–40, 248 conceptions of 74, 78 Preface paradox 132n. 7 Prior, Arthur 165n. 34 Promises 267–8 Proof 105, 152 Propositions 15–16, 42 Prosopagnosia 182–3, 189–92 Pryor, James 103, 160n. 20 Rationality and evidence see Evidence coherence requirements of 3–4, 200 contrastivism 186n. 9 conceptions of 234 normativity of see Normativity permissions of 7, 141–5 process versus state requirements of 200 required projects of 170–1 requirements of 7, 88, 140–5, 178, 200, 207 structural requirements of 7–8, 88, 129–30, 140 subject-relative requirements of 184 wide-scope requirements of 185–7, 198n. 29, 262 Raz, Joseph 77, 79, 263n. 24 Reactive attitudes 116 Reasoning in small children and non-human animals 95–8 Reasons 9–10, 107–8, 129, 186n. 9, 248 circularity worry 88 compliance with 256 content-independent 264, 267, 270 first-order and second-order 263n. 24 for acting 9–10, 76 underdetermination problem 77, 84–5 Reckoning Model 4–5, 6n. 2, 15 Reflection 17, 46 Reichenbach, Hans 101 Reliabilism 122, 159–61 Response Hypothesis 23–4
Responsibility 3–4, 6–7, 91, 111, 116–17, 121, 178–9, 189 Richard, Mark 6–7, 119 Ross, Jacob 53 Rule-following 3–4, 30, 35–6, 39, 42, 96, 98, 148–9 and rule-governedness 3–4, 25n. 11, 152 tacit 118–19 Rules of inference 7–8, 129–31, 146–8, 152, 178 see also Basic rules of inference Savage, Leonard J. 211 Scanlon, Thomas 74–5, 84 Schechter, Joshua 7–8, 137n. 18, 152 Schoenfield, Miriam 208, 218n. 29, 219n. 31 Schroeder, Mark 53, 84, 87–8 Scientific consensus 210–11 Self-awareness as a condition on inference 4, 17 as a condition on reasoning 19 Self-knowledge as an aim of action 258 Self-legislation 264–7, 269, 270n. 39 Settling questions 5–6, 71, 186, 188, 198–9, 261 Sharadin, Nathaniel 213, 222–3 Sharot, Tali 233, 236–8 Siegel, Susanna 3–4, 6n. 2, 100n. 9, 107n. 9, 111, 196n. 26 Simpson, Robert M. 229n. 53 Smith, Matthew 9–10 Sosa, Ernest 159n. 17 Southwood, Nicholas 5–6, 10n. 4, 71 Staffel, Julia 5, 62 Stanley, Jason 49–50 Stanovich, Keith 62 Strawson, P.F. 116 Sturgeon, Scott 52 Sullivan-Bissett, Emma 9, 234, 242 Suppositional reasoning 15n. 1, 58–9, 61, 107n. 9, 156–7, 178–9 Swampman 162 Taking at face value 1, 8, 182–3, 195 Taking condition 6–7, 32, 59, 92–3, 110, 112–13 Tang, Weng Hong 50–1 Taylor, Shelley 233, 236, 238 Titelbaum, Michael 8–9, 209, 224n. 41 Truth 145–6 and evidence see Evidence as a goal of inquiry 159 conduciveness 215 preservation 153–5, 160 Unavailability of cognition 242–3 Unconscious inference 19, 115, 122
Uniqueness Thesis 205 Unknown premises 93, 121–2 Valaris, Markos 55 Validity 131, 136, 147–8, 155 Velleman, David 74–5, 258 Verbal dispute 111 Wallace, R. Jay 71n. 1 Way, Jonathan 131–4
Wedgwood, Ralph 50–1, 136n. 17, 169n. 43 Weisberg, Jonathan 229n. 53 Well-being 232 White, Roger 206 Whiting, Daniel 131–4 Williamson, Timothy 103, 117, 132n. 8, 164–5 Wittgenstein, Ludwig 35n. 7, 108, 119 Working memory 5, 47, 62 Worsnip, Alex 7, 130 Wright, Crispin 59n. 10, 107n. 9