Epistemic Autonomy
This is the first book dedicated to the topic of epistemic autonomy. It features original essays from leading scholars that promise to significantly shape future debates in this emerging area of epistemology. While the nature and value of autonomy have long been discussed in ethics and social and political philosophy, autonomy remains an underexplored area of epistemology. The essays in this collection take up several interesting questions and approaches related to epistemic autonomy. Topics include the nature of epistemic autonomy, whether epistemic paternalism can be justified, autonomy as an epistemic value and/or vice, and the relation of epistemic autonomy to social epistemology and epistemic injustice. Epistemic Autonomy will be of interest to researchers and advanced students working in epistemology, ethics, and social and political philosophy. Jonathan Matheson is a Professor of Philosophy at the University of North Florida. He is the author of The Epistemic Significance of Disagreement (Palgrave) and co-editor (with Rico Vitz) of The Ethics of Belief: Individual and Social (Oxford). Kirk Lougheed is Assistant Professor of Philosophy and Director of the Center for Faith and Human Flourishing at LCC International University. He is also a Research Associate at the University of Pretoria. He is author or editor of 4 books, and over 25 articles appearing in such places as Philosophia, Ratio, and Synthese.
Routledge Studies in Epistemology Edited by Kevin McCain, University of Alabama at Birmingham, USA and Scott Stapleford, St. Thomas University, Canada
The Ethics of Belief and Beyond: Understanding Mental Normativity
Edited by Sebastian Schmidt and Gerhard Ernst

Ethno-Epistemology: New Directions for Global Epistemology
Edited by Masaharu Mizumoto, Jonardon Ganeri, and Cliff Goddard

The Dispositional Architecture of Epistemic Reasons
Hamid Vahid

The Epistemology of Group Disagreement
Edited by Fernando Broncano-Berrocal and J. Adam Carter

The Philosophy of Group Polarization: Epistemology, Metaphysics, Psychology
Fernando Broncano-Berrocal and J. Adam Carter

The Social Epistemology of Legal Trials
Edited by Zachary Hoskins and Jon Robson

Intellectual Dependability: A Virtue Theory of the Epistemic and Educational Ideal
T. Ryan Byerly

Skeptical Invariantism Reconsidered
Edited by Christos Kyriacou and Kevin Wallbridge

Epistemic Autonomy
Edited by Jonathan Matheson and Kirk Lougheed

For more information about this series, please visit: https://www.routledge.com/Routledge-Studies-in-Epistemology/book-series/RSIE
Epistemic Autonomy
Edited by Jonathan Matheson and Kirk Lougheed
First published 2022 by Routledge 605 Third Avenue, New York, NY 10158 and by Routledge 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN Routledge is an imprint of the Taylor & Francis Group, an informa business © 2022 Taylor & Francis The right of Jonathan Matheson and Kirk Lougheed to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers. Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Library of Congress Cataloging-in-Publication Data Names: Matheson, Jonathan, editor. | Lougheed, Kirk, editor. Title: Epistemic autonomy / edited by Jonathan Matheson and Kirk Lougheed. Description: New York, NY : Routledge, 2022. | Series: Routledge studies in epistemology | Includes bibliographical references and index. Identifiers: LCCN 2021018300 (print) | LCCN 2021018301 (ebook) | ISBN 9780367433345 (hbk) | ISBN 9781032052342 (pbk) | ISBN 9781003003465 (ebk) Subjects: LCSH: Autonomy (Philosophy) | Knowledge, Theory of. Classification: LCC B808.67 .E65 2022 (print) | LCC B808.67 (ebook) | DDC 128--dc23 LC record available at https://lccn.loc.gov/2021018300 LC ebook record available at https://lccn.loc.gov/2021018301 ISBN: 978-0-367-43334-5 (hbk) ISBN: 978-1-032-05234-2 (pbk) ISBN: 978-1-003-00346-5 (ebk) Typeset in Sabon by SPi Technologies India Pvt Ltd (Straive)
Contents
Contributors vii
Acknowledgments xi

Introduction: Puzzles Concerning Epistemic Autonomy 1
JONATHAN MATHESON AND KIRK LOUGHEED

PART I
The Nature of Epistemic Autonomy 19

1 Epistemic Autonomy and Externalism 21
J. ADAM CARTER

2 Autonomy, Reflection, and Education 41
SHANE RYAN

3 The Realm of Epistemic Ends 55
CATHERINE ELGIN

4 Professional Philosophy Has an Epistemic Autonomy Problem 71
MAURA PRIEST

PART II
Epistemic Autonomy and Paternalism 93

5 Norms of Inquiry, Student-Led Learning, and Epistemic Paternalism 95
ROBERT MARK SIMPSON

6 Persuasion and Intellectual Autonomy 113
ROBIN MCKENNA

7 What's Epistemic about Epistemic Paternalism? 132
ELIZABETH JACKSON

PART III
Epistemic Autonomy and Epistemic Virtue and Value 151

8 Intellectual Autonomy and Intellectual Interdependence 153
HEATHER BATTALY

9 The Virtue of Epistemic Autonomy 173
JONATHAN MATHESON

10 Understanding and the Value of Intellectual Autonomy 195
JESÚS VEGA-ENCABO

11 Epistemic Myopia 215
CHRIS DRAGOS

12 Intellectual Autonomy and Its Vices 231
ALESSANDRA TANESINI

13 Gaslighting, Humility, and the Manipulation of Rational Autonomy 250
JAVIER GONZÁLEZ DE PRADO

PART IV
Epistemic Autonomy and Social Epistemology 269

14 Epistemic Autonomy for Social Epistemologists: The Case of Moral Inheritance 271
SARAH MCGRATH

15 Epistemic Autonomy and the Right to be Confident 288
SANFORD GOLDBERG

16 We Owe It to Others to Think for Ourselves 306
FINNUR DELLSÉN

17 Epistemic Self-Governance and Trusting the Word of Others: Is There a Conflict? 323
ELIZABETH FRICKER

Index 343
Contributors
Heather Battaly is Professor of Philosophy at the University of Connecticut. She specializes in virtue and vice theory in epistemology and ethics. She is the author of Virtue (2015) and editor of The Routledge Handbook of Virtue Epistemology (2019). J. Adam Carter is Reader in Epistemology at the University of Glasgow, where he is Deputy Director of the COGITO Epistemology Research Centre. His books include Metaepistemology and Relativism (Palgrave Macmillan, 2016), A Critical Introduction to Knowledge-How (with Ted Poston, Bloomsbury 2018), This Is Epistemology (with Clayton Littlejohn, Wiley-Blackwell, 2020), The Philosophy of Group Polarization (with Fernando Broncano-Berrocal, Routledge, 2020), and Digital Knowledge (Routledge, forthcoming). Finnur Dellsén is an associate professor at the University of Iceland and a visiting professor at the Inland Norway University of Applied Sciences. Finnur’s main research interests lie at the intersection of philosophy of science and epistemology, and include topics such as scientific consensus, scientific progress, understanding in science, scientific testimony, and explanatory reasoning. Finnur’s work has appeared in journals such as Analysis, Australasian Journal for Philosophy of Science, British Journal for Philosophy of Science, Philosophical Studies, Philosophy and Phenomenological Research, and Philosophy of Science. Chris Dragos completed his doctoral studies at the University of Toronto’s Institute for the History and Philosophy of Science and Technology. His published work is in meta-epistemology, social epistemology, philosophy of science, and philosophy of religion. Catherine Elgin is Professor of the Philosophy of Education at Harvard Graduate School of Education. She is the author of True Enough, Considered Judgment, Between the Absolute and the Arbitrary, With
Reference to Reference, and co-author with Nelson Goodman of Reconceptions in Philosophy and Other Arts and Sciences. Elizabeth Fricker is an Emeritus Fellow of Magdalen College Oxford, and Emeritus Oxford Philosophy Faculty member. Her main research specialization is the epistemology of testimony, and she has published over 20 articles on this. She has also published in general epistemology and philosophy of mind. She has a further research interest in the philosophy of Ludwig Wittgenstein. Sanford Goldberg is Chester D. Tripp Professor in the Humanities and Professor of Philosophy at Northwestern University, as well as Professorial Fellow at the University of St. Andrews. His research is primarily in the areas of epistemology and philosophy of language. He is the author of several books, including Foundations and Applications of Social Epistemology: Collected Essays (OUP, forthcoming), Conversational Pressure: Normativity in Speech Exchanges (OUP, 2020), To the Best of Our Knowledge: Social Expectations and Epistemic Normativity (OUP, 2018), and Assertion: On the Philosophical Significance of Assertoric Speech (OUP, 2015). Javier González de Prado is an Assistant Professor at UNED, Spain. He received his PhD from the University of Southampton in 2016. His main areas of research are normativity theory, epistemology, and philosophy of language. Elizabeth Jackson is an Assistant Professor at Ryerson University. Her research focuses on issues at the intersection of formal and traditional epistemology. She has recently published work on the belief-credence connection, epistemic permissivism, and pragmatic and moral encroachment. Her research interests also include social epistemology, decision theory, and philosophy of religion. She completed her Ph.D. in Philosophy at the University of Notre Dame. Kirk Lougheed is Assistant Professor of Philosophy and Director of the Center for Faith and Human Flourishing at LCC International University. 
He is also a Research Associate at the University of Pretoria. He is author or editor of 4 books, and over 25 articles appearing in such places as Philosophia, Ratio, and Synthese. Jonathan Matheson is a Professor of Philosophy at the University of North Florida. He is the author of The Epistemic Significance of Disagreement (Palgrave) and co-editor (with Rico Vitz) of The Ethics of Belief: Individual and Social (Oxford).
Sarah McGrath is an Associate Professor at Princeton University. She has written on moral disagreement, moral testimony, moral expertise, and also about issues at the intersection of metaphysics and ethics. She has also written a book, Moral Knowledge (OUP, 2019), that brings together some of her previous work on moral epistemology. Robin McKenna is a Lecturer in Philosophy at the University of Liverpool. Before coming to Liverpool he worked in Austria (at the University of Vienna) and Switzerland (at the University of Geneva). He completed his Ph.D. at the University of Edinburgh. Most of his work is in epistemology, but he is also interested in philosophy of language, philosophy of science, and ethics. Within epistemology, he works on various topics in applied epistemology, feminist epistemology, and social epistemology more broadly. He is currently writing a book on ideal and non-ideal theory in epistemology. Maura Priest is a philosophy professor and bioethicist at Arizona State University, and also a research affiliate at the University of Cologne (Germany). She has also spent time as a visiting scholar at Saint Louis University, the University of Connecticut, and Children's Mercy Hospital. Her main research areas can be found at the intersection of epistemology and ethics (sometimes applied), bioethics (especially pediatric bioethics), and various inquiries related to conflicts between individual and group interests. She is working on two books: one about trusting intellectual elites, and one focused on dating ethics. She also manages the applied ethics website of Arizona State University's philosophy department, directs its colloquia series, and chairs its diversity committee. Shane Ryan is Associate Professor of Philosophy at Nazarbayev University. His research includes, but is not limited to, wisdom, epistemic environmentalism, and paternalism. 
In future work he is interested in exploring how epistemic environments might be designed to facilitate the development of wisdom and how perceptions of epistemic coverage can influence the consumption of fake news. Robert Mark Simpson is Associate Professor of Philosophy at University College London. He was previously a Lecturer at Monash University, and a Visiting Assistant Professor at the University of Chicago. His main research interests are in social and political philosophy, primarily relating to freedom of speech. He also works on a range of issues in political theory, social epistemology, ethics, and applied ethics.
Alessandra Tanesini is Professor of Philosophy at Cardiff University. Her current work lies at the intersection of ethics, the philosophy of language, and epistemology, with a focus on epistemic vice, silencing, prejudice, and ignorance. Her new book The Mismeasure of the Self: A Study in Vice Epistemology is forthcoming with Oxford University Press. Jesús Vega-Encabo is Professor of Logic and Philosophy of Science at the Autonomous University of Madrid (Spain). He has published extensively on epistemology, philosophy of mind, and philosophy of science and technology, in journals such as Phenomenology and the Cognitive Sciences, Synthese, and Social Epistemology. His most recent research is focused on intellectual autonomy and epistemic dependence, testimony, epistemic normativity, and the nature of artifacts and their role in cognition.
Acknowledgments
We would like to thank our editor at Routledge, Andrew Weckenmann, for his support of the project. Alexandra Simmons at Routledge provided important technical support along the way. The series editors of Routledge Studies in Epistemology, Kevin McCain and Scott Stapleford, were enthusiastic about the project from the start. We're grateful for the opportunity to put together this volume. We also want to thank our contributors who were able to keep us on our timeline despite all of the disruptions of the past year. I (Kirk) would like to thank Jonathan Strand for giving me a job at the 11th hour a few years ago when I thought I might have to leave the discipline. I'm grateful for the encouragement that Klaas J. Kraay and Nathan Ballantyne have given me throughout my career. Over the last few years I'm also particularly thankful for invaluable mentorship from Thaddeus Metz. Finally, some of my work on this project was supported by a fellowship from the Social Sciences and Humanities Research Council of Canada. I'm grateful to the Canadian taxpayer. On a more personal note, I faced extreme isolation when the Covid-19 pandemic hit. I would like to thank my friends Tim, Samantha, and Theo who allowed me to form a "Covid bubble" with them. Doing so has greatly reduced the isolation I otherwise would have faced during this pandemic. My parents, Stephenson and Diane Lougheed, have been unwavering in their support of my career in philosophy. They have always provided me with support that surely exceeds what parents owe their adult children. I (Jon) would like to thank my family for being the best distractions from academic work. I am also grateful to the University of North Florida for granting me a sabbatical for the spring of 2020 during which much of the planning and preparation for this work was done.
Introduction Puzzles Concerning Epistemic Autonomy Jonathan Matheson and Kirk Lougheed
A couple of years ago, nearly 30 scholars from Princeton, Harvard, and Yale signed an open letter to incoming freshmen.1 In this letter, these scholars urged new students to think for themselves. Their advice echoed the motto of the enlightenment: Dare to know! Have the courage to use your own understanding.2 These scholars warned of being tyrannized by public opinion and falling prey to echo chambers. A love of truth, they claimed, should motivate you to think for yourself, exercising open-mindedness, critical thinking, and debate. Along these lines, many philosophers see promoting the autonomy of students as one of the primary goals of higher education.3 For instance, the stated goal of many introductory philosophy texts is to get students to think for themselves.4 While there is clearly some sense in which the advice to 'think for yourselves' is good advice and the goal of developing autonomous students is a good goal to have, the call to think for yourself also leads to a number of puzzles. These puzzles concern the nature and value of epistemic autonomy. While moral autonomy and political autonomy have received a great deal of attention in the philosophical literature, rather little investigation has been conducted regarding the nature and value of epistemic autonomy. While moral autonomy is of utmost moral importance, and political autonomy is of great political importance, the epistemic value of epistemic autonomy is far more contentious and obscure. Much recent work in epistemology has attempted to gain insights from neighboring normative fields like ethics and social and political philosophy. Insights regarding moral virtues have been used to better understand intellectual virtues (see Zagzebski 1996). Insights regarding justice and injustice have been used to better understand epistemic justice and injustice (see Fricker 2009). Insights regarding political authority have been used to better understand expertise and epistemic authority (see Zagzebski 2012). 
Along these lines, the nature and value of autonomy has long been discussed in ethics and social and political philosophy, but only recently is it beginning to receive attention within epistemology. The research in social epistemology and virtue epistemology in the last decade has laid
a perfect backdrop for a greater exploration of the nature and value of epistemic autonomy. In this chapter we will lay the foundation to explore four broad sets of puzzles that emerge regarding epistemic autonomy and outline the contributions to these debates contained in this volume. These sets of puzzles concern (1) how to conceive of epistemic autonomy, (2) how epistemic paternalism is related to epistemic autonomy, (3) whether epistemic autonomy is an intellectual virtue and whether it has epistemic value, and (4) how to think about epistemic autonomy within the broader context of social epistemology.
1 The Nature of Epistemic Autonomy

Joseph Raz (1988) claims that the autonomous person determines the course of their life for themselves (407). Put differently, autonomy rules out coercion, manipulation, and other ways of subjecting your will to another. Epistemic autonomy carries this notion into the domain of epistemology by focusing on our intellectual lives. Elizabeth Fricker (2006) describes a supposed ideal of an epistemically autonomous person as someone who "takes no one else's word for anything, but accepts only what she has found out for herself, relying only on her own cognitive faculties" (225). Similarly, Linda Zagzebski (2007) claims that an epistemically autonomous person "will demand proof of p that she can determine by the use of her own faculties, given her own previous beliefs, but she will never believe anything on testimony" (252). As Sandy Goldberg puts it, an epistemically autonomous subject is

one who judges and decides for herself, where her judgments and decisions are reached on the basis of reasons which she has in her possession, where she appreciates the significance of these reasons, and where (if queried) she could articulate the bearing of her reasons on the judgment or decision in question. (Goldberg 2013, 169)

Here, the idea is that the epistemically autonomous person demands direct, or first-order, reasons why something is true, and does not merely rely on reports about their existence. Epistemically autonomous individuals are epistemically self-reliant and are responsible for the justification of their beliefs.5 As Kristoffer Ahlstrom-Vij sees it, there is both a positive and a negative aspect to epistemic autonomy. The epistemically autonomous person does not simply rely on the word of others, and they do conduct their own inquiry, relying on their own cognitive resources (92). So, while the autonomous person need not live free of any outside influence, she must nevertheless be determining things for herself, whether they be her
actions, or determinations about what to believe.6 Why might there be a problem with such a reliance on others? Some have seen such epistemic dependence as a kind of free-riding where individuals freely benefit from the intellectual labors of others.7 Others see beliefs formed simply on the say-so of others as second-hand goods, lacking important value had by beliefs one has evaluated on one's own. Both Fricker (2006) and Zagzebski (2007) see epistemic autonomy, so understood, as having no real value, at least for creatures like us. As Fricker puts it, any being like us that attempted to live a fully epistemically autonomous life would be "either paranoid, or severely cognitively lacking, or deeply rationally incoherent" (244). John Hardwig (1985) echoes this sentiment, claiming, "If I were to pursue epistemic autonomy across the board, I would succeed only in holding uninformed, unreliable, crude, untested, and therefore irrational beliefs" (340). After all, we are finite creatures with limited cognitive abilities and time, so making the independent judgments required by epistemic autonomy is quite costly.8 If we needed to figure everything out for ourselves, we wouldn't ever figure out very much. Our epistemic success depends on others.9 In contrast, others build normativity into the very concept of epistemic autonomy. Robert Roberts and Jay Wood claim that an epistemically autonomous individual is "properly regulated by others" (260), so exercising epistemic autonomy "involves a reasonable, active use of guidance from another" (267). Here, epistemic autonomy is thought of as thinking for yourself well, or in the right way, not simply thinking for yourself. As Zagzebski (2013) understands intellectual autonomy, a concept she distinguishes from epistemic autonomy, it is "the right or ideal of self-direction in the acquisition or maintenance of beliefs," which requires an appropriate amount of self-trust (259). 
On this understanding of epistemic autonomy, the autonomous thinker succeeds in placing the appropriate weight on their own thoughts and knows when to outsource their intellectual projects to those who are better positioned to determine the truth of the matter. Some see epistemic autonomy, so understood, as central to the very idea of epistemic agency itself (Grasswick 2018, 196). Even if epistemic autonomy has value, its value may be limited to some domains. Joseph Shieber (2010) argues that Kant's dictum to "think for yourself" should be limited to philosophical, moral, and mathematical matters. This fits with the idea, held by many philosophers, that there is something amiss with moral deference in contrast to other forms of deference.10 While there does not seem to be anything inappropriate about deferring to an archeologist about the age of some artifact, some find something problematic about taking on your moral, religious, or philosophical beliefs simply on someone else's say-so. If the value of epistemic autonomy is domain relative, then what features of these domains give it its value, and why?
Finally, there are questions regarding what the epistemically autonomous person is autonomous about. Thus far, we have presumed that individuals are epistemically autonomous with respect to their beliefs – how they manage their own doxastic attitudes. However, Finnur Dellsén (2018) has argued that the ideal of epistemic autonomy is better understood with respect to acceptance – treating something as true – rather than belief (3). We can also see epistemic autonomy as applying more broadly with respect to actions like inquiry, evidence gathering, and decisions regarding which intellectual projects one pursues.11 For one thing, acceptance and inquiry seem to be under the control of individuals in ways that their beliefs are not. So, to the degree that epistemic autonomy requires voluntary control, we may have reasons to prefer different views about the object of epistemic autonomy.

1.1 Summaries

In Chapter 1, "Epistemic Autonomy and Externalism," J. Adam Carter makes the case that there is a kind of attitudinal autonomy, what he calls "epistemic autonomy," that matters for propositional knowledge. In order for a subject to know a proposition, they must have exercised epistemic autonomy in forming the target belief. Having established that this type of attitudinal autonomy is importantly different from the kind of attitudinal autonomy discussed in the literature on moral responsibility, Carter also explores the prospects of internalism and externalism about epistemic autonomy. Having dismissed internalism about epistemic autonomy, Carter argues that history-sensitive externalism shows the most promise amongst externalist theories. In Chapter 2, "Autonomy, Reflection, and Education," Shane Ryan argues that, on the assumption that developing autonomy is an aim of education, education should not remain neutral on the conception of the good. 
He further contends that reflection is essential to autonomy and that neutrality on the good is impossible if we are to promote skillful reflection, instead of unskillful reflection. Ryan appeals to the dual process theory of reasoning in order to explain his preferred account of reflection. Type 1 processes are fast and automatic, while Type 2 processes are slow and controlled. While it might be tempting to conclude that reflection involves only Type 2 processes, Ryan argues that Type 1 processes explain important parts of reflection including the “when” and “how.” This bolsters the idea that education cannot stay neutral about the good since decisions about what to reflect upon are inevitable. In Chapter 3, “The Realm of Epistemic Ends,” Catherine Elgin distances the notion of epistemic autonomy from that of intellectual independence. Elgin argues that autonomy is instead better understood as self-governance. Building on her earlier work on epistemic normativity, Elgin argues that epistemically autonomous agents collectively certify
epistemic norms by justifying those norms to those in their epistemic community. On this account, epistemic autonomy and intellectual interdependence are mutually supporting, rather than standing in conflict with each other. Elgin also defends this account from the charge that it permits a troubling form of epistemic relativism. In Chapter 4, "Professional Philosophy Has an Epistemic Autonomy Problem," Maura Priest explores a conception of epistemic autonomy as intellectual self-governance and argues that contemporary professional philosophy has an epistemic autonomy problem. While epistemic autonomy is compatible with a reliance on others, intellectual self-governance requires a significant amount of control over one's intellectual projects. Given the nature of philosophy, we should expect philosophers to have a great deal of epistemic autonomy. However, Priest identifies several impediments to autonomous inquiry that exist in contemporary professional philosophy and argues that we are all worse off because of them.
2 Epistemic Autonomy and Paternalism

Suppose we discover that an individual is epistemically better off for having certain pieces of evidence excluded from her assessment of the evidence about some given topic. Perhaps the individual is much more likely to arrive at true belief or knowledge if she examines only a subset of the evidence instead of the entire body of relevant evidence. The idea that an individual is advantaged by having certain evidence screened off from her is the general line of reasoning most often used in defenses of epistemic paternalism. Epistemic paternalism occurs, roughly, when some agent X is making a doxastic decision about a question Q, another agent Y has control over the evidence provided to X, and Y withholds some of the relevant evidence about Q from X because doing so makes it more likely that X will arrive at the truth about Q. As it stands, it seems that there are two assumptions needed to defend epistemic paternalism. The first is veritism. If truth is not the primary epistemic goal in question, then the most prominent defenses of epistemic paternalism fail (Ahlstrom-Vij 2013; Goldman 1991). Consider that just because a paternalistic practice might help enable one to arrive at true beliefs, it in no way follows that it offers the same help at arriving at a more complex doxastic state such as understanding. I could come to arrive at truths about some topic by believing whatever a reliable person tells me. Yet it doesn't follow from this that I would gain understanding, which partly involves seeing how things fit together. The second assumption is epistemic consequentialism. This is the view that positive epistemic consequences ought to be one of the main aims of epistemically responsible agents. So, in order for epistemic paternalism to be justified, the end in question needs to be arriving at a positive veritistic result. Indeed, Alvin Goldman's favorite
6 Introduction example intended to justify epistemic paternalism regards legal evidentiary rules where the goal of such rules is to help jurors arrive at correct verdict. Goldman stresses given each of our own epistemic limitations we will often benefit from paternalism. He says that we live in an epistemically complex world, where each of us cannot reasonably hope to assess all evidence for all theses personally. We often have to depend on the authority of others. Given this situation, it seems likely that epistemic paternalism will frequently be necessary, and sometimes epistemically desirable. (Goldman 1991, 126–127) An important question this volume explores regards the relationship between epistemic paternalism and epistemic autonomy, for they are in clear tension with one another. Suppose that Sally is a juror in a criminal case. Sally does not have access to all of the evidence. Indeed, some of the evidence presented to the judge by both the prosecution and defense is deemed inadmissible because it is unduly prejudicial (i.e. it is likely to lead Sally and her fellow jurors astray). But this means that, in at least some important sense and to at least some degree, Sally lacks epistemic autonomy. She is not free to evaluate all of the evidence for herself and to make determinations of its significance for herself. The evidence she can access is selected for her, and she has no input into that process. The defender of epistemic paternalism is likely to say that it is justified inasmuch as it helps Sally arrive at the truth, but if epistemic autonomy is also of value, then we at least have competing values at stake. Limiting evidence in this way further contrasts with the idea that having more evidence is in general better for arriving at the truth. Complete epistemic paternalism is incompatible with total epistemic autonomy. But an agent can have some degree of epistemic interference while maintaining a certain amount of epistemic autonomy. 
Is there a correct balance between these two extremes? If yes, what is that balance? If no, which of paternalism and autonomy is to be preferred? All else being equal, isn't it better for someone to arrive at the truth on their own? If arriving at true beliefs is an achievement of some kind, it might be better to arrive at true beliefs without relying on epistemic paternalism (e.g. Greco 2010). Wouldn't it be better if Sally arrived at the correct verdict on her own, without the need for epistemic interference? And yet it doesn't seem right to think that Sally would be better off arriving at an altogether false belief on her own than relying on paternalism to arrive at a true belief. The chapters in this section will examine these and related questions.

2.1 Chapter Summaries

In Chapter 5, "Norms of Inquiry, Student-led Learning, and Epistemic Paternalism," Robert Mark Simpson explores whether epistemic paternalism
is ever justified outside of legal contexts where the practice is well-entrenched. In particular, Simpson is most interested in whether epistemic paternalism is justified in contexts of inquiry. He argues that an information controller is better able to assist others in their inquiry if they are sensitive to the interests of the inquirers in question. An information controller, then, should operate in consultation with the inquirers. Simpson applies these considerations to defend the idea of student-led learning. Teachers should consult with students about the topics they are interested in, because it's easier to benefit someone epistemically if you know their interests.

In Chapter 6, "Persuasion and Intellectual Autonomy," Robin McKenna examines the tension between responsible public policy making and democratic legitimacy. In order to be responsible, such policies should be made based on the available scientific evidence. But in order to be democratic there must be broad acceptance of such policies, which implies there must be broad acceptance of the scientific basis for such policies in the first place. The tension arises, however, because in many cases there is a sizable minority that rejects the science in question. In the case of climate policy in the US, he argues that the tension cannot be resolved (alone) by better science education or the development of critical thinking ability. What may help is what McKenna calls "marketing methods" – methods one may think are at odds with intellectual autonomy – that are aimed at selling a product (i.e. climate science) instead of simply providing more evidence. McKenna argues that there might be cases where science marketing actually facilitates critical reasoning and hence promotes intellectual autonomy.
In Chapter 7, "What's Epistemic About Epistemic Paternalism?" Elizabeth Jackson surveys a number of different understandings of epistemic paternalism before examining various normative questions about it. Jackson worries that current definitions of epistemic paternalism tend to be either too broad, admitting cases that are clearly not paternalistic, or too narrow, excluding clear cases of paternalism. Jackson also wonders whether there are very many cases of "pure" epistemic paternalism, because most of the cases mentioned in the literature include other considerations. For instance, evidentiary rules in court proceedings have a significant moral dimension because it is immoral to convict and punish innocent persons. Additionally, she doubts whether epistemic paternalism can ever be morally or all-things-considered justified, even if there are cases where it is epistemically justified.
3 Epistemic Autonomy and Epistemic Virtue and Value

In recent years, there has been a growth in the subfield known as virtue epistemology. Instead of focusing on analyzing propositional knowledge in the form of S knows that P just in case certain conditions obtain, virtue epistemologists focus on the specific traits an agent needs to manifest when acquiring knowledge. With the focus on the character of the
knower, one question that quickly emerges is whether agents manifest an epistemic virtue or a vice when being epistemically autonomous. On one hand, thinking for yourself seems virtuous; at least prima facie, it's a positive achievement if a knower arrives at knowledge on her own. Being in possession of the relevant reasons and seeing for yourself their support for the truth of some claim is quite valuable. On the other hand, being unwilling to rely on others for knowledge acquisition seems to clearly exhibit arrogance and a harmful hyper-autonomy (Roberts and Wood 2007, 236). Further, an insistence on "seeing it for oneself" can be problematic. It can demonstrate a defective lack of trust in others, like when someone feels the need to double-check their friend's testimony even while knowing they are reliable. The need to "see it for oneself" can even perpetuate epistemic injustice and oppression, like when one fails to trust the reported experiences of others when they differ from one's own (e.g. reports of sexual harassment). The intellectual virtue of epistemic autonomy would need to navigate a course between these extremes. Nathan King sees epistemic autonomy as the mean between the intellectual vices of servility and isolation (2020, 58). When one is intellectually servile, they outsource their entire intellectual life. When one exhibits the vice of isolation, they ignore the valuable insights and intellectual labors of others. Perhaps Thomas Scanlon (1972) expresses the appropriate balance between reliance on oneself and on others when he writes that:

[A]n autonomous person cannot accept without independent consideration the judgment of others as to what he should believe or what he should do. He may rely on the judgment of others, but when he does so he must be prepared to advance independent reasons for thinking their judgment likely to be correct, and to weigh the evidential value of their opinion against contrary evidence.
(216)

Here, while the autonomous person need not come to possess the testifier's reasons for believing the proposition in question, she does possess, and require, reasons to trust the testifier in the first place. As King puts this sentiment, "autonomy requires thinking for ourselves, not by ourselves" (2020, 55). Jesús Vega-Encabo (2008), too, sees the relevant excess not as epistemic dependency, but as heteronomy.12 It's also worth noting that epistemic autonomy need not only benefit individuals. Some suggest that groups comprising heterogeneous thinkers typically outperform groups made up of homogeneous thinkers (Lougheed 2020, 69–77). Groups containing members who disagree with one another tend to get better epistemic results than those groups in which members don't challenge each other. If this is right, then epistemic autonomy could benefit collective or social knowledge; it's better when
groups contain agents who are epistemically autonomous.13 This describes the benefits of a particular method for acquiring knowledge. However, Linda Zagzebski suggests that knowledge acquired on one's own is no better than the same knowledge acquired by relying on the testimony of another. Thus, while the collective could be benefited by epistemically autonomous individuals, it's not clear that autonomy should be pursued for its own sake. On the other hand, some have argued that epistemic states such as understanding cannot be gained via testimony. Individuals must come to understanding on their own, and so epistemic autonomy turns out to be necessary in order to achieve understanding.14 Finally, Zagzebski argues that epistemic autonomy is sometimes incompatible with self-reliance. For Zagzebski, epistemic autonomy requires conscientiousness, and doing one's best can call for deferring to an epistemic authority. This discussion has close connections to the epistemic virtue of intellectual humility, which has garnered much recent attention.15 Roberts and Wood explain that humility is best understood as the opposite of certain vices including vanity and arrogance (2007, 258). An intellectually humble individual is less concerned with accolades than they are with truth, justification, knowledge, etc. According to Whitcomb et al. (2017), intellectual humility consists in being appropriately attentive to, and owning, one's intellectual limitations. According to this account, owning one's intellectual limitations characteristically involves dispositions to:

(1) believe that one has them; and to believe that their negative outcomes are due to them; (2) to admit or acknowledge them; (3) to care about them and take them seriously; and (4) to feel regret or dismay, but not hostility, about them. (519)

When owning your limitations is motivated by a love of the truth, Whitcomb et al. see intellectual humility as an intellectual virtue.
Along these lines, Roberts and Wood hypothesize that "just about everybody will be epistemically better off for having, and having associates who have, epistemic humility" (Roberts and Wood 2007, 271–272). However, once we accept our own epistemic limitations, what, if anything, is the value of epistemic autonomy? There seems to be some tension between these two apparent intellectual virtues. Do we need to strive for a balance between intellectual humility and epistemic autonomy? Not everyone sees epistemic autonomy as a virtue. Lorraine Code (1991) argues that the "ideal of epistemic autonomy" comes from a dated Cartesian epistemological picture that ignores the important social and political contexts of epistemic agents. As epistemologists have become more aware of the epistemic importance of one's social context, the value of epistemic autonomy may come to be seen as a relic of an overly
individualistic conception of epistemology that ignores the cooperative aspects of knowledge. Of help here may be a revised notion of epistemic autonomy that draws on the literature on relational autonomy.16 Relational autonomy incorporates into the concept of autonomy the insight that we are socially situated creatures that are mutually dependent upon others. According to such conceptions of autonomy, autonomous agents must stand in certain social relations. As MacKenzie and Stoljar put it, relational autonomy is "premised on a shared conviction that persons are socially embedded, that agents' identities are formed within the context of social relationships and shaped by a complex of intersecting social determinants, such as race, class, gender, and ethnicity" (MacKenzie and Stoljar 2000b, 4).

3.1 Chapter Summaries

In Chapter 8, "Intellectual Autonomy and Intellectual Interdependence," Heather Battaly proposes accounts of two traits: intellectual autonomy and intellectual interdependence. Intellectual autonomy involves dispositions to think for yourself, whereas intellectual interdependence involves dispositions to think with others. Battaly then explores when these two traits can be intellectual virtues, arguing that the corresponding virtues require both good judgment and proper motivation. Battaly also argues that since deficiencies in one trait needn't correspond to excesses in the other, it is best to see intellectual autonomy and intellectual interdependence as two distinct, though related, traits and virtues. In Chapter 9, "The Virtue of Epistemic Autonomy," Jonathan Matheson develops and motivates an account of the virtue of epistemic autonomy. Matheson gleans some desiderata for a virtue of epistemic autonomy through critiquing extant accounts of this intellectual virtue, and proceeds to offer his own account of the character virtue that fulfills them.
On Matheson’s account, the character virtue of epistemic autonomy involves the dispositions (1) to make good judgments about how, and when, to rely on your own thinking, as well as how, and when, to rely on the thinking of others; (2) to conduct inquiry in line with the judgments in (1); and (3) to do so because one loves the truth and appropriately cares about epistemic goods. In Chapter 10, “Understanding and the Value of Intellectual Autonomy,” Jesús Vega-Encabo examines several arguments concerning what makes intellectual autonomy epistemically valuable. These arguments claim that intellectual autonomy is epistemically valuable for the role it plays in cognitive achievements, like understanding. Vega-Encabo finds each of these arguments wanting in that they fail to locate any value in intellectual autonomy that does not rely on the value of autonomy more generally. In their place, he argues that intellectual autonomy is valuable because it is essentially linked to the building and preserving of our intellectual identities.
In Chapter 11, "Epistemic Myopia," Chris Dragos examines questions about how individuals can keep their most fundamental commitments in good epistemic standing. On one account, in order to keep fundamental commitments in good epistemic standing one must be receptive to dialogue about them with trustworthy critics. Epistemic myopia occurs when an individual becomes unreceptive to trustworthy criticism, and thus becomes incapable, in principle, of rationally monitoring her fundamental commitments. An individual who keeps her fundamental commitments in good standing is open to scrutinizing them (after dialogue) on her own, while the person exhibiting epistemic myopia shows no such openness. Dragos further suggests that there is some evidence from neuroscience to support the idea that humans are susceptible to epistemic myopia, though there are actions we can take to combat it. In Chapter 12, "Intellectual Autonomy and Its Vices," Alessandra Tanesini presents and motivates an account of epistemic autonomy that is focused on answerability. Tanesini argues that the value that comes from being epistemically autonomous obtains only when others recognize you as being answerable for your beliefs. When seen as answerable for her beliefs, an epistemic agent is viewed as an informant, rather than a mere source of information. Tanesini argues that social oppression can prevent individuals from being seen as answerable for their beliefs, and that it thereby deprives them of the value of being epistemically autonomous agents. Further, such conditions can also cause a reduction in one's autonomy by eroding away one's self-trust. So, oppressive conditions foster the development of two epistemic vices: hyper-autonomy in privileged individuals and heteronomy in those who are subordinated.
In Chapter 13, “Gaslighting, Humility, and the Manipulation of Rational Autonomy,” Javier González de Prado presents an integrated picture of rational autonomy and intellectual humility. According to this picture, rational autonomy is the capacity to deliberate while responding to considerations one sees as reasons, and only to such considerations. As such, when an agent has significant doubts about her competence in treating some consideration as a reason, this consideration is not accessible to her as a reason in deliberation. So, when someone misleads an individual into doubting her reasons-responsiveness, they manipulate that individual’s rational autonomy and can cause her to lose access to some of her reasons. Gaslighting, de Prado argues, is one particularly vivid example of how one’s rational autonomy can be manipulated in this way.
4 Epistemic Autonomy and Social Epistemology

As mentioned above, recent epistemology has witnessed an explosion in social epistemology. Social epistemology is epistemology done with particular care and emphasis placed on the social aspects of our ability to gather and disseminate knowledge. Questions about the nature of
testimony, trust, the significance of peer disagreement, and issues concerning epistemic injustice have all received significant attention within this literature. This emphasis on the social aspects of knowledge marks a significant move away from traditional epistemology, which focuses on addressing under what conditions an individual can have knowledge. Social epistemology faces questions about the epistemic role and importance of the individual inquirer and her epistemic autonomy.17 Once one is aware of their own epistemic limitations, the role of epistemic autonomy in our intellectual endeavors is far from straightforward. For nearly every topic you might be interested in, you are probably aware that someone else is in a better epistemic position than you are to know the truth of the matter. That is, on most matters you know that others have more evidence, better intellectual abilities, greater intellectual virtue, and so forth. Once you are aware of your epistemic limitations in these scenarios, in what sense, if any, should you think for yourself? If you are interested in the truth on some matter and are aware of someone who is better positioned to determine the truth on the matter, then it seems as though you should instead rely on that person's intellectual abilities. This is because doing so is more likely to get you the truth. Deference in many cases like this seems perfectly appropriate. It is appropriate for me to defer to a botanist as to whether a given tree is an elm. It is appropriate for me to defer to a chemist as to the chemical composition of caffeine. Insisting on figuring these things out for oneself would be both cumbersome and a waste of time, given that others have already invested their efforts in these inquiries.18 However, an intellectual life where one outsources nearly all of their intellectual projects also seems far from ideal. It is important to think through some things for yourself.
Along these lines, a number of philosophers have found something amiss with moral deference – taking on a moral belief simply on someone else's say-so.19 But such problems with deference do not seem limited to morality. Believing that God does/does not exist or that we do/do not have free will without thinking about these matters for oneself at all seems deeply problematic. While Socrates' famous dictum that the unexamined life is not worth living might be too strong, surely examining such questions for oneself adds some kind of value to your life. This is the basis of a liberal arts education. These considerations lead to difficult questions regarding epistemic autonomy and its relation to social epistemology. How should we balance our epistemic endeavors? When, and why, should we think for ourselves, and when, and why, should we defer to others?20 These issues become even more pressing in light of disagreement. According to prominent views about the epistemic significance of disagreement, it is irrational to hold on to a belief in the face of acknowledged extensive controversy on the matter.21 On such views of disagreement, it becomes all the more puzzling as to why people should think about life's most fundamental questions for themselves, given their awareness of the
Introduction 13 controversy surrounding them. If the existence of extensive controversy surrounding issues like God’s existence, the nature of free will, and the ethics of eating meat prevents one from emerging from their own inquiry on these matters with a justified belief, then what good is it for people to think about these things at all? If coming to a justified answer is off the table, then why bother? Finally, considerations related to epistemic injustice add a third set of puzzles concerning epistemic autonomy. In this literature, philosophers have pointed out the prevalence of problematic credibility misattributions. Testimonial injustice occurs when an individual is harmed as a knower because they are either given a credibility deficit or others are given a credibility excess. Hermeneutical injustice occurs when an individual lacks the epistemic resources to interpret and express their experiences. Both types of injustice raise issues with epistemic autonomy. Issues related to hermeneutical injustice make clear the need for collaborative practices to better equip knowers. The problems raised by hermeneutical injustice make clear a need to rely on others to better make sense of one’s own experiences. Testimonial injustice also makes evident the pitfalls of relying on our own credibility assessments of others. Given that we are prone to all kinds of prejudice and bias, “thinking for yourself” may seem to actually be detrimental to your epistemic well-being. 
In addition, while independent thinking may be essential for escaping contexts of epistemic oppression (Grasswick 2018, 197), it may also further marginalize oppressed groups by not instantiating the kind of trust exhibited in deference.22

4.1 Chapter Summaries

In Chapter 14, "Epistemic Autonomy for Social Epistemologists: The Case of Moral Inheritance," Sarah McGrath identifies an apparently inconsistent triad: (1) we inherit much of our knowledge in certain domains unreflectively, particularly from testimony; (2) there is an autonomy requirement on moral knowledge that doesn't exist in other domains; (3) the standard for knowledge is the same across domains. McGrath argues that this triad is not inconsistent because the autonomy requirement is really about moral agency, not moral knowledge. In order for an agent to do the right thing she must be acting for the right-making features of the action in question. This means that she cannot unreflectively act on the testimony of others. In Chapter 15, "Epistemic Autonomy and the Right to Be Confident," Sanford Goldberg examines an argument for epistemic autonomy as an epistemic ideal, where epistemic autonomy is understood as intellectual self-reliance. This argument grounds the alleged ideal of epistemic autonomy in having the right to be confident. Having motivated the argument, Goldberg shows how it fails. In fact, Goldberg argues that being epistemically autonomous can actually be in conflict with having the right to be confident. This tension arises from our legitimate expectations of
each other and how we manage our intellectual lives. The problem is that epistemic agents can be ignorant of the expectations that others appropriately have of them. Goldberg argues that such agents have no right to be confident, despite meeting all the conditions of epistemic autonomy understood as epistemic self-reliance. In Chapter 16, "We Owe It to Others to Think for Ourselves," Finnur Dellsén confronts the puzzle of why we should think for ourselves once we realize that others are typically in a better epistemic position to figure things out. Dellsén explores, and rejects, two "egotistic" solutions to the puzzle, which claim that thinking for yourself is epistemically beneficial to the subject. The first proposed egotistic answer appeals to the epistemic value of understanding, and the second to the epistemic benefits of disagreement. Having rejected these egotistic solutions, Dellsén proposes an altruistic solution to the puzzle. According to this altruistic solution, thinking for yourself is epistemically valuable when, and insofar as, it increases the reliability of consensus opinions. Since an increased reliability of consensus opinions is valuable to society at large, the epistemic benefit of epistemic autonomy is directed at others. In Chapter 17, "Epistemic Self-Governance and Trusting the Word of Others: Is There a Conflict?" Elizabeth Fricker explores the prima facie tension between epistemic self-governance and trust in others' testimony. Fricker develops an account of epistemic self-governance as forming one's beliefs in accordance with suitable evidence, and she develops an analytic account of trust that characterizes the trust a recipient places in a speaker when she forms a belief by taking their word on some matter. She argues that there is no conflict between these epistemic goods since trust, particularly in testimony, can be based on evidence of the speaker's trustworthiness.
When so based, it does not compromise the recipient's intellectual self-governance.
Notes

1 https://jmp.princeton.edu/announcements/some-thoughts-and-advice-ourstudents-and-all-students
2 For instance, Descartes (1985/1628) forbids inquiring minds from relying on the ideas of others (13). Locke (1975/1689) claims that the opinions of others do not grant us knowledge even when they are true (23). Kant identifies intellectual immaturity as the inability to use your own understanding without the aid of another.
3 See Ebels-Duggan (2014), Elgin (2013), and Nussbaum (2017).
4 Huemer (2005) notes the following quotes from leading introduction to philosophy texts (among others):
In this conversation, all sides of an issue should receive a fair hearing, and then you, the reader, should make up your own minds on the issue. (Pojman 1991, 5)
My hope is that exposure to this argumentative give-and-take will encourage students to take part in the process themselves, and through this practice to develop their powers of philosophical reasoning. (Feinberg 1996, xi)
5 See McMyler (2011), 6.
6 C.A.J. Coady (2002) has a similar conception of intellectual autonomy, seeing it as composed of three elements: (1) independence (freedom from interference and domination), (2) self-creation (freedom to make and shape one's own distinctive intellectual life by ordering one's intellectual priorities), and (3) integrity (standing up for truth).
7 For a discussion see List and Pettit (2004) and Ranalli (2019).
8 See List and Pettit (2004).
9 See Elgin (2014) for an argument that our epistemic success is significantly dependent upon our intellectual community.
10 See Driver (2006), McGrath (2009), and Hills (2009, 2013) for discussion.
11 See Coady (2002).
12 See also Fricker (2006) for an argument that autonomy is compatible with dependence.
13 See Dellsén (2018) for an argument that groups of experts do better when they are autonomous in what they accept and Hazlett (2015) for an argument that autonomous voting beliefs are socially beneficial.
14 See Hills (2009) as well as Zagzebski (2007).
15 In fact, Kyla Ebels-Duggan (2014) sees epistemic autonomy as being composed of intellectual humility and the intellectual virtue of charity.
16 See Elgin (2013, 2014), Elzinga (2019), Grasswick (2018), and Vega-Encabo (2008), for example.
17 It is worth pointing out that epistemic autonomy does not rule out relying on others. For instance, Hazlett gives the helpful analogy of someone standing on a friend's shoulders to see if Robinson has stolen second base. The subject here relies on his friend to get his belief, but not in any way that compromises his autonomy.
18 Further, epistemic autonomy may be committed to an overly individualistic conception of knowledge itself. See Code (1991) and Grasswick (2018).
19 See Crisp (2014), Enoch (2014), Hills (2009, 2013), Hopkins (2007), McGrath (2009), and Mogensen (2015).
20 There are also difficult questions concerning how a novice can identify an expert. See Goldman (1991) for the central statement of the puzzle. For discussion, see Cholbi (2007), Coady (2002), Collins and Evans (2007), and Nguyen (2018).
21 See Christensen (2007), Feldman (2006), and Matheson (2015).
22 See Fricker (2009) and Hazlett (2015).
References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. Basingstoke: Palgrave.
Axtell, G., & Bernal, A. (2020). Epistemic paternalism reconsidered: Conceptions, justifications, and implications. London: Rowman & Littlefield.
Bishop, M. A. (2005). The autonomy of social epistemology. Episteme, 2, 65–78.
Christensen, D. (2007). The epistemology of disagreement: the good news. Philosophical Review, 116, 187–218.
Cholbi, M. (2007). Moral expertise and the credentials problem. Ethical Theory and Moral Practice, 10(4), 323–334.
Coady, C. A. J. (2002). Testimony and intellectual autonomy. Studies in History and Philosophy of Science Part A, 33(2), 355–372.
Code, L. (1991). What can she know? Ithaca, NY: Cornell University Press.
Collins, H., & Evans, R. (2007). Rethinking expertise. Chicago: University of Chicago Press.
Crisp, R. (2014). Moral testimony pessimism: a defense. Aristotelian Society Supplementary Volume, 88(1), 129–143.
Dellsén, F. (2018). The epistemic value of expert autonomy. Philosophy and Phenomenological Research, 11(2), 344–361.
Descartes, R. (1985/1628). Rules for the direction of the mind. In J. Cottingham, R. Stoothoff, & D. Murdoch (Eds.), The philosophical writings of Descartes, volume I (pp. 7–77). Cambridge: Cambridge University Press.
Driver, J. (2006). Autonomy and the asymmetry problem for moral expertise. Philosophical Studies, 128, 619–644.
Ebels-Duggan, K. (2014). Autonomy as intellectual virtue. In H. Brighouse & M. MacPherson (Eds.), The aims of higher education (pp. 74–90). Chicago: University of Chicago Press.
Elgin, C. Z. (2013). Epistemic agency. Theory and Research in Education, 11(2), 135–152.
Elgin, C. Z. (2014). The commonwealth of epistemic ends. In J. Matheson & R. Vitz (Eds.), The ethics of belief: Individual and social (pp. 244–260). Oxford: Oxford University Press.
Elzinga, B. (2019). A relational account of intellectual autonomy. Canadian Journal of Philosophy, 49(1), 22–47.
Enoch, D. (2014). A defense of moral deference. Journal of Philosophy, 111(5), 229–258.
Feinberg, J. (Ed.). (1996). Reason and responsibility: Readings in some basic problems of philosophy (9th ed.). Belmont, CA: Wadsworth.
Feldman, R. (2006). Reasonable religious disagreements. In L. M. Antony (Ed.), Philosophers without gods: Meditations on atheism and the secular life (pp. 194–214). Oxford: Oxford University Press.
Foley, R. (2001). Intellectual trust in oneself and others. Cambridge: Cambridge University Press.
Fricker, E. (2006). Testimony and epistemic autonomy. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony (pp. 225–251). Oxford: Oxford University Press.
Fricker, M. (2009). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.
Goldberg, S. (2010). Relying on others: An essay in epistemology. Oxford: Oxford University Press.
Goldberg, S. (2013). Epistemic dependence in testimonial belief, in the classroom and beyond. Journal of Philosophy of Education, 47(2), 168–186.
Goldman, A. (1991). Epistemic paternalism: Communication control in law and society. Journal of Philosophy, 88(3), 113–131.
Grasswick, H. (2018). Epistemic autonomy in a social world of knowing. In H. Battaly (Ed.), The Routledge handbook of virtue epistemology (pp. 196–208). New York: Routledge.
Greco, J. (2010). Achieving knowledge: A virtue-theoretic account of epistemic normativity. Cambridge: Cambridge University Press.
Hardwig, J. (1985). Epistemic dependence. The Journal of Philosophy, 82, 335–349.
Hazlett, A. (2015). The social value of non-deferential belief. Australasian Journal of Philosophy, 94(1), 131–151.
Hills, A. (2009). Moral testimony and moral epistemology. Ethics, 120(1), 94–127.
Hills, A. (2013). Moral testimony. Philosophy Compass, 8(6), 552–559.
Hopkins, R. (2007). What is wrong with moral testimony? Philosophy and Phenomenological Research, 74(3), 611–634.
Huemer, M. (2005). Is critical thinking epistemically responsible? Metaphilosophy, 36, 522–531.
Kant, I. (1991/1784). An answer to the question: What is enlightenment? In H. Reiss (Ed.), Political writings (2nd ed., pp. 54–60). Cambridge: Cambridge University Press.
Kelly, T. (2005). The epistemic significance of disagreement. In T. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 1, pp. 167–196). Oxford: Oxford University Press.
Kelly, T. (2010). Peer disagreement and higher order evidence. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 111–174). Oxford: Oxford University Press.
King, N. (2020). The excellent mind: Intellectual virtue for the everyday life. Oxford: Oxford University Press.
Kitcher, P. (1990). The division of cognitive labor. Journal of Philosophy, 87, 5–21.
List, C., & Pettit, P. (2004). An epistemic free-riding problem. In P. Catton & G. Macdonald (Eds.), Karl Popper: Critical appraisals (pp. 128–158). Abingdon: Routledge.
Locke, J. (1975/1689). An essay concerning human understanding. Oxford: Clarendon Press.
Lougheed, K. (2020). The epistemic benefits of disagreement. Cham, Switzerland: Springer.
MacKenzie, C., & Stoljar, N. (Eds.). (2000a). Relational autonomy: Feminist perspectives on autonomy, agency, and the social self. Oxford: Oxford University Press.
MacKenzie, C., & Stoljar, N. (2000b). Introduction. In C. MacKenzie & N. Stoljar (Eds.), Relational autonomy: Feminist perspectives on autonomy, agency, and the social self. Oxford: Oxford University Press.
Matheson, J. (2015). The epistemic significance of disagreement. Basingstoke: Palgrave.
McGrath, S. (2009). The puzzle of pure moral deference. Philosophical Perspectives, 23(1), 321–344.
McMyler, B. (2011). Testimony, trust, and authority. Oxford: Oxford University Press.
Mogensen, A. L. (2015). Moral testimony pessimism and the uncertain value of authenticity. Philosophy and Phenomenological Research, 92(3), 1–24.
Nguyen, C. T. (2018). Expertise and the fragmentation of intellectual autonomy. Philosophical Inquiries, 6(2), 107–124.
18 Introduction Nussbaum, M. (2017). Not for profit: Why democracy needs the humanities (Updated ed.). Princeton: Princeton University Press. Pojman, L. (Ed.). (1991). Introduction to philosophy: Classical and contemporary readings. Belmont, CA: Wadsworth. Ranalli, C. (2019). The puzzle of philosophical testimony. European Journal of Philosophy, 2019, 1–22. Raz, J. (1988). The morality of freedom. Oxford: Clarendon Press. Roberts, R. C., & Wood, W. J. (2007). Intellectual virtues: An essay in regulative epistemology. Oxford: Oxford University Press. Scanlon, T. (1972). A theory of freedom of expression. Philosophy and Public Affairs, i(2), 204–226. Shieber, J. (2010). Between autonomy and authority: Kant on the epistemic status of testimony. Philosophy and Phenomenological Research, 80(2), 327–348. Tsai, G. (2014). Rational persuasion as paternalism. Philosophy and Public Affairs, 42(1), 78–112. Vega-Encabo, J. (2008). Epistemic merit, autonomy, and testimony. Theoria: An International Journal for Theory, History and Foundations of Science, 23(61), 45–56. Whitcomb, D., Battaly, H., Baehr, J., & Howard-Snyder, D. (2017). Intellectual humility: Owning our limitations. Philosophy and Phenomenological Research, XCIV, 509–539. Zagzebski, L. (1996). Virtues of the mind: An inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge: Cambridge University Press. Zagzebski, L. (2007). Ethical and epistemic egoism and the ideal of autonomy. Episteme, 4(3), 252–263. Zagzebski, L. (2012). Epistemic authority: A theory of trust, authority, and autonomy in belief. New York: Oxford University Press. Zagzebski, L. (2013). Intellectual autonomy. Philosophical Issues, 23, 244–261. Zagzebski, L. (2015). Epistemic authority. New York: Oxford University Press.
Part I
The Nature of Epistemic Autonomy
1 Epistemic Autonomy and Externalism
J. Adam Carter
1.1 Suppose Prometheus is tied to a ship, bound by ropes so that he can't move an inch. Prometheus is considerably less autonomous than he would be were his ropes cut. But even in this predicament – where he is completely unable to physically act or affect his environment – there remains a sense in which Prometheus is autonomous in a way he would not be were he not only physically bound but also drugged and hypnotized. This difference is a useful reference point for distinguishing between two broad types of personal autonomy: outward-directed autonomy – which is what Prometheus lacks in virtue of being tied up – and inward-directed autonomy, which is what he retains even when shackled (but not when drugged or hypnotized).1 The species of personal autonomy that this chapter will have as its focus is exclusively inward-directed personal autonomy – viz., autonomy of the mind. But the focus will be narrower than this. For one thing, there are two main ways of thinking about the structure of the property that inward-directed autonomy picks out. Here we can distinguish between autonomy as (i) a global property of persons, taken as a whole, and as (ii) a property of particular attitudes of a person, such as beliefs and desires. My interest will be in the latter – viz., in attitudinal autonomy. Discussions of attitudinal autonomy are almost entirely restricted to the literature on moral responsibility.2 In such debates, the central focus has been how to spell out the kind of attitudinal autonomy that matters for the purposes of moral, rather than epistemic, evaluations. I suggest that there is also an interesting kind of attitudinal autonomy – what I'm calling epistemic attitudinal autonomy (hereafter: epistemic autonomy) – that matters for knowledge. Here is the plan for what follows. Section 1.2 articulates a case for thinking that there is a distinctive kind of autonomy, autonomy of belief, that matters for propositional knowledge. 
Section 1.3 makes the case that this kind of attitudinal autonomy is importantly different from the kind that matters for moral responsibility. Section 1.4 shows what an
internalist account of epistemic autonomy would look like and argues that any such account faces intractable problems. Section 1.5 considers two broad ways to be an externalist about epistemic autonomy: counterfactual externalism and history-sensitive externalism. The former is shown to have its own problems, whereas a version of the latter offers much more promise.3
1.2 Let’s take as a starting point Keith Lehrer’s (1990) classic case of Mr. Truetemp: TRUETEMP: Suppose a person, whom we shall name Mr. Truetemp, undergoes brain surgery by an experimental surgeon who invents a small device which is both a very accurate thermometer and a computational device capable of generating thoughts. The device, call it a tempucomp, is implanted in Truetemp’s head so that the very tip of the device, no larger than the head of pin, sits unnoticed on his scalp and acts as a sensor to transmit information about the temperature to the computational system in his brain. This device, in turn, sends a message to his brain causing him to think of the temperature recorded by the external sensor. Assume that the tempucomp is very reliable, and so his thoughts are correct temperature thoughts. All told, this is a reliable belief-forming process. Now imagine, finally, that he has no idea that the tempucomp has been inserted in his brain, is only slightly puzzled about why he thinks so obsessively about the temperature, but never checks a thermometer to determine whether these thoughts about the temperature are correct. He accepts them unreflectively, another effect of the tempucomp. Thus, he thinks and accepts that the temperature is 104 degrees. It is. Does he know that it is? (1990, 162–163) The predominant view in mainstream epistemology is that Mr. Truetemp does not know that the temperature is 104 degrees in the above case, despite the reliability4 of his belief-forming process. There’s little consensus, though, as to why. Lehrer himself, along with William Alston (1988), both reason along the following lines: Truetemp lacks knowledge of the relevant temperature belief because (i) knowledge requires epistemic justification; and (ii) there is at least some additional justified belief that Truetemp lacks but which he’d need in order to be epistemically justified in believing that it is 104 degrees. 
For Lehrer, that "extra" belief is a "metabelief" about the reliability of the temperature-implant-based process; for Alston, the extra belief just needs to be a belief that could serve
as a good reason for believing that it's 104 degrees (a reason Truetemp presently lacks). If Truetemp had those beliefs, and appropriately based his temperature belief on them, then he'd be a knower. While something like this might look plausible as a diagnosis of Lehrer's original version of the case, it's not hard to think up variations on the case where neither the Lehrer nor the Alston line would work. For example, suppose we run a twist on the case that holds everything fixed except the following: what the scientists implant is more sophisticated than the tempucomp: call it the "TempucompDeluxe." The TempucompDeluxe not only compels5 Truetemp to believe the target proposition (i.e., that it's 104 degrees), but it also compels him to believe a further proposition "X," where we can fill in "X" with either a Lehrer-style metabelief or an Alston-style reason. Then – through a powerful form of hypnotism – the TempucompDeluxe closes the circle by causing Truetemp to base his belief on the relevant reason.6 On a Lehrer- or Alston-style diagnosis, it looks as though Mr. Truetemp is now in the clear, knowledge-wise. And yet, he's surely not. Here's why he's not: if there is something epistemically objectionable (in the sense of being knowledge-incompatible) about Mr. Truetemp's original belief, there should, by parity of reasoning, be something epistemically objectionable about the new beliefs which he's acquired in just the same way (e.g., compulsion by the implanted mechanism). In short, if one doesn't know the temperature on the basis of a gadget-compelled belief, then neither does one know the temperature on the basis of a gadget-compelled belief one is compelled to support on the basis of equally gadget-compelled beliefs. An entirely different story for why Mr. 
Truetemp fails to know – one that looks initially as though it might fare better in the TempucompDeluxe version of the case – is due to John Greco (2010) and Duncan Pritchard (2010, 2012). The Greco-Pritchard diagnosis of the original Truetemp case goes as follows: (i) knowledge must derive from cognitive ability in the sense that the correctness of a known true belief must be because of the manifestation of a cognitive ability7; (ii) Truetemp's true belief derives from a reliable process but not from any cognitive ability of his; (iii) therefore, Truetemp doesn't know. The rationale for (ii) is, in short, that knowledge-generating cognitive abilities must be appropriately integrated into a thinker's wider cognitive architecture. What constitutes the right kind of cognitive integration is a complicated issue.8 But here's one notable idea, due to Greco: integrated dispositions are at least sensitive to the operation of other belief-forming dispositions. The disposition Truetemp has to form temperature beliefs, however, is plausibly not sensitive to other dispositions he has for forming beliefs (2010, 150–152). It's just controlled entirely by the mechanism. It looks, prima facie, like the Greco-Pritchard line might work not just as a diagnosis for why Truetemp lacks knowledge, but also as one that
carries over to TempucompDeluxe. After all, in that case, when Truetemp believes correctly, it doesn't seem to be down to any properly integrated cognitive ability that he has, and this is so even if (as is stipulated on that version of the case) he has some additional beliefs that stand in support of the target proposition. But notice that we can just pull the same trick again! Just imagine a further twist on the case, where Truetemp has installed an even more impressive device – viz., a "TempucompSUPER-Deluxe" – one that significantly rewires his cognitive architecture in such a way that he now is compelled not only to believe truly what the temperature is (104 degrees) on the basis of a reliable disposition, but also – thanks to the TempucompSUPER-Deluxe – that reliable disposition has been "auto-integrated" by the device. In this version of the case, we can think of Truetemp as now having an ability, albeit one that he is being compelled by the device to exercise in such a way as to end up with the belief he does. It's not hard to see a general pattern emerging here. When a thinker is caused to believe a proposition (even if reliably) in a manner such that the acquisition of the belief "bypasses" the thinker's own exercise of her cognitive faculties (as it does in all variations of the Truetemp case considered), then two things seem to follow. First, (i) the intuition that the thinker lacks knowledge, viz., as in the original Truetemp case, carries over; and second, (ii) a viable explanation for why this knowledge fails to be present can't simply point to some epistemic (roughly: truth-linked) condition on knowledge which is said not to be satisfied. A different kind of explanation is needed. At least, this seems to be the lesson from the foregoing discussion.9 Let's now take a step back. At this juncture, there are broadly three paths available:

• Option 1: (a) Grant that one lacks knowledge when the acquisition of the target belief bypasses the thinker's own exercise of her cognitive faculties; (b) insist that there is some plausible epistemic condition (e.g., some kind of epistemic justification condition) on knowledge that can't possibly be satisfied by further and even fancier iterations of the "TempucompSUPER-Deluxe."
• Option 2: Bite the bullet and deny that one lacks knowledge when the acquisition of the target belief bypasses the thinker's own exercise of her cognitive faculties (e.g., as in the original Truetemp case).
• Option 3: (a) Grant that one lacks knowledge when the acquisition of the target belief bypasses the thinker's own exercise of her cognitive faculties; but (b) deny that there is some plausible epistemic condition (e.g., some kind of epistemic justification condition) on knowledge that can't possibly be satisfied by further and even fancier iterations of the "TempucompSUPER-Deluxe."
The prevailing literature has generally gone, de facto, for Option 1 or, in rare cases, Option 2.10 But what the pattern we've seen in this section suggests is that Option 3 – which thus far has remained unexplored – might be the most plausible avenue for those who don't want to bite the bullet and maintain that Truetemp knows. But if Option 3 is the right way to go, then an interesting implication follows for the theory of knowledge: it looks like knowledge requires not just epistemically justified belief, but (in some sense to be articulated) epistemically autonomous belief, where the relevant kind of "epistemic autonomy" must be such that, with reference to it, we could explain why "TempucompSUPER-Deluxe"-style cases in principle are not cases of knowledge. I'm going to assume from here on out that Option 3 is worth exploring.11 And so the question driving the rest of the chapter will be:

Guiding question: In what sense, exactly, does propositional knowledge require epistemically autonomous belief? Put another way: what should an epistemic autonomy condition on propositional-knowledge-apt belief look like?
1.3 The most interesting fault line for answering the above question is – as we'll shortly see – one between internalist and externalist approaches to the epistemic autonomy of beliefs. But before digging into this issue, it's worth briefly addressing the following question: "Won't an account of the kind of attitudinal autonomy that matters for moral responsibility also work just fine as an account of the kind of attitudinal autonomy (of beliefs specifically) that matters for knowledge?" The answer to this question is "no." A quick and easy way to see why these issues come apart will be to focus on how they clearly do so in cases of self-arrangement – viz., where one's lack of attitudinal autonomy at a later time is intentionally pre-arranged by one at a previous time. Consider two versions of an incapacitated driving case:

• Version 1: Someone forces you to ingest a potent cocktail of hallucinogenic drugs, puts you behind the wheel of a moving vehicle, and then – driving this vehicle under the influence of the drugs – you cause a wreck.
• Version 2: Everything is the same except you chose to take the hallucinogenic drugs.
The prevailing thinking in the moral responsibility literature is that while your attitudes (e.g., beliefs, desires, perceptions, emotions) lack the kind of attitudinal autonomy that matters for moral responsibility in Version 1, this isn't the case in Version 2. And this is so even though your attitudes are equally at the mercy of the strong drugs in both cases when you cause the wreck. What is said to make the difference is self-arrangement;12 in Version 2, you (unimpeded by any drugs) chose to take the drugs that would later have the effects on you that they did. So these attitudes, no matter the effect the drugs have on them, remain autonomous in the sense that matters for moral responsibility later on (e.g., when you're behind the wheel) because these effects are self-arranged. So does self-arrangement make a difference when it comes to whether a belief is autonomous in the way that matters for knowledge, as it does when what's at issue is the kind of attitudinal autonomy that matters for moral responsibility? To get a clearer grip on this, let's compare self-arrangement versions of the drunk driving case we've already considered (Version 1 and Version 2) with the original (non-self-arrangement) and self-arrangement variations on our three tempucomp cases.
                      Drunk driving                  Tempucomp     TempucompDeluxe  TempucompSUPER-Deluxe
No self-arrangement   No moral responsibility (V1)   No knowledge  No knowledge     No knowledge
Self-arrangement      Moral responsibility (V2)      ?             ?                ?
If self-arrangement really made a difference with respect to whether a belief is autonomous in the way that matters for knowledge in a way that is analogous to the difference it makes (vis-à-vis attitudinal autonomy) in the case of moral responsibility, then we should expect that – in self-arrangement versions of our tempucomp cases – there would be, by parity of reasoning, no epistemic-autonomy related reason for withholding knowledge (just as there is no autonomy related reason for withholding moral responsibility). But this is the wrong result. The import of self-arrangement – once we shift focus from moral responsibility to knowledge – turns out to be clearly disanalogous. Just consider: while the matter of whether you took the drugs or had them forced upon you obviously bears on whether you are morally responsible later for being in whatever state you're in (e.g., incapacitated and thus dangerous behind the wheel), it's not at all evident that our verdict on Truetemp, qua candidate knower, should change in the slightest if we added to the Truetemp backstory that Truetemp willingly at some point in the past paid a superscientist to experiment on him. The addition of this kind of historical fact seems entirely irrelevant
to whether he counts as knowing, at a later time, when affected as he is by what the scientist does to him.13 Summing up, then: (i) propositional knowledge requires not just epistemically justified belief, but epistemically autonomous belief (conclusion of Section 1.2); and (ii) the kind of attitudinal autonomy (viz., of beliefs) that matters for knowledge is different from the kind of attitudinal autonomy that matters for moral responsibility (conclusion of Section 1.3) – which means that to get an account of it in view, we need to look beyond the kinds of accounts already available in the literature on moral responsibility.
1.4 What is the nature of the kind of epistemic autonomy a belief must have in order to qualify as knowledge? Here's one potential answer: Internalism about epistemic autonomy (IEA): The knowledge-relevant (viz., epistemic) autonomy of a belief at a time, T, is determined entirely by the subject's present mental structure at T. If IEA is true, then (in slogan form) psychological twins at a time do not differ in the epistemic autonomy of their beliefs at that time.14 In the literature on internalism about epistemic justification, psychological twins cases are often used in the service of supporting an internalist view of epistemic justification – viz., one on which psychological twins at a time do not differ with respect to the epistemic justification they have for their beliefs at that time. This idea is, at any rate, at the heart of the New Evil Demon thought experiment against epistemic externalism.15 It would be natural to expect that, if drawing our attention to psychological twins cases is a move that is supposed to lend intuitive support to internalism about epistemic justification, then it should do so as well for internalism about epistemic autonomy. However, the opposite seems to be true. Consider the following case, which is an epistemic twist on a case used by Mele (2001, 145) to argue against internalism about attitudinal autonomy. PSYCHOLOGICAL TWINS: Ann and Beth are psychological twins. They are identically mentally constituted. Both believe that Cicero's scribe was named Tiro. Ann believes this because she read it in a book. Beth believes it because scientists want her to be psychologically identical to Ann, and so they brainwash her until her psychology – as it pertains to all matters of Roman history – matches Ann's exactly.
If IEA is true, then Ann – who is a paradigmatic knower – satisfies an autonomous belief condition on knowledge if and only if Beth does. But Beth looks sure to fail any plausible construal of an autonomous belief condition on knowledge; she is effectively brainwashed. Beth is, at best, a Truetemp. There are two important take-away points here. First, internalism about epistemic autonomy is false. But, secondly, it's worth emphasising that this verdict is neither implied by, nor implies, a denial of internalism about epistemic justification. After all, it's possible for all that's been said that Beth and Ann are equally epistemically justified in believing that Cicero's scribe is Tiro (in virtue of their matching psychology post-brainwashing) even while they differ with respect to the kind of epistemic autonomy that matters for knowing.
1.5 On the assumption from here on out that internalism about epistemic autonomy is a non-starter, the focus will now be squarely on epistemic autonomy and externalism. According to externalism about epistemic autonomy: Externalism about epistemic autonomy (EEA): It's not the case that the knowledge-relevant (viz., epistemic) autonomy of a belief at a time, T, is determined entirely by the subject's present mental structure at T. EEA is a minimal conception of externalism about epistemic autonomy in that it consists merely in the denial of IEA. (Compare: a minimal conception of externalism about epistemic justification consists just in a denial of internalism about epistemic justification.) Substantive externalist theses in epistemology involve not only a denial of internalism, but also a positive thesis about what it is in virtue of which something possesses positive epistemic status when it does. For example: process reliabilists about epistemic justification (e.g., Goldman 1979, 1999) submit – in addition to simply denying internalism – that what makes a belief epistemically justified is the reliability of the process that issues the belief. This section will consider two substantive forms of externalism about epistemic autonomy: counterfactual externalism (Section 1.5.1) and history-sensitive externalism (Section 1.5.2). 1.5.1 One way to be a substantive externalist about the kind of attitudinal autonomy that matters for moral responsibility is to take a "counterfactual" approach of the sort that has been defended in various places by John Christman (1991, 2007). According to Christman, the question of
whether a given belief, P, is autonomous in the way that matters for moral responsibility is one we can settle by asking how the subject, in the here and now, would respond to P, if she were to critically reflect on P in light of (an accurate description of) its origins. This is an externalist account because what makes the relevant belief autonomous can be something external to the subject's psychology. (After all, we might at any given time be clueless as to how we would respond were we to critically reflect on a given attitude; for the counterfactual externalist, all that matters is how, in fact, we would respond under appropriate conditions of reflection.) With a bit more precision, the proposal is as follows: Counterfactual externalism about moral-responsibility relevant attitudinal autonomy: Necessary and sufficient for an attitude's autonomy, at a time, t, is that (i) the agent is able to adequately reflect on the attitude P at t; where (ii) "adequate reflection" requires the possession of a representation of an alternative to that attitude, Q, one that would be necessary for the subject to realistically imagine an alternative to P; and (iii) were the agent at t to adequately reflect on P in light of a "minimally adequate account" of P's developmental history, she would not be alienated from it.16 With reference to this kind of account, Christman diagnoses why, for instance, certain traits one has would not be autonomous (in the sense relevant to moral responsibility) even if one has never in fact repudiated them and indeed even if one positively endorses them as one's own. He gives the following case to lend support to this idea: PIANO: We can imagine a person who finds out she has been severely abused in her childhood by someone who is responsible for several of the proclivities and skills she has developed. In coming to grips with the memories of the abuse, she has repudiated many of those proclivities. 
But she doesn’t realize or remember that this person also taught her to play the piano, which she still loves to do and does well. If she were told that her piano-playing was also rooted in her time with this abuser, she would feel alienated from that part of herself also; but in her ignorance she plays on contentedly. Does this person count as autonomous? On my view she does not, since, were she to reflect on the trait in light of its origins she would be alienated from it. (2007, 22–23) There are some well-known worries for this kind of proposal – to both the necessity and sufficiency of satisfying the counterfactual condition.17 These objections won’t concern us here.
Rather, what will be of interest is what an "epistemic twist" on a Christman-style counterfactual externalism might look like, and whether it could offer a plausible account of epistemic autonomy (regardless of whether it is viable or not as an account of the kind of attitudinal autonomy relevant to moral responsibility). With this in mind, consider the following variation on the view: Counterfactual externalism about epistemic autonomy: Necessary and sufficient for a belief's epistemic autonomy, at a time, t, is that (i) the agent is able to adequately reflect on the belief P at t; where (ii) "adequate reflection" requires the possession of a representation of an alternative to that attitude, Q, one that would be necessary for the subject to realistically imagine an alternative to P; and (iii) were the agent at t to adequately reflect on P in light of a "minimally adequate account" of P's developmental history, she would not be alienated from it. For a case (broadly analogous to the PIANO case) that might intuitively motivate this kind of view, consider the following case of "therapy Truetemp": THERAPY TRUETEMP: Suppose that Mr. Truetemp18 is taken to a therapist, to help him make sense of why he thinks so often, seemingly inexplicably, about the temperature. Through this therapy, Truetemp uncovers suppressed memories of the operation. After some further digging, he comes to believe that many (but not all) of his temperature beliefs are the result of this operation and, as a result, feels alienated from them – such that he desires to give these beliefs up. But, unbeknownst to him, the belief he has presently that it's 104 is among those sourced in the operation; since he's unaware of this, he does not in fact feel alienated in any way from this belief. Counterfactual externalism about epistemic autonomy generates the prima facie intuitive result here that, in THERAPY TRUETEMP, the mere fact that Mr. 
Truetemp doesn’t actually appreciate the origin of the target belief (that it is 104 degrees) and repudiate it on that basis should count for naught if in fact he would have done so were he to have reflected on it while appreciating its origins. Nonetheless, counterfactual externalism about epistemic autonomy both doubly misses the mark – satisfying this kind of counterfactual condition is neither necessary nor sufficient for epistemic autonomy. Let’s consider the sufficiency leg of the view first. Suppose Truetemp reflects on a belief of his about the temperature (e.g., that it was 84 degrees yesterday at noon), which was in fact not the result of the Tempucomp at all, but instead a result of trusting an expert meteorologist. Now, suppose
Truetemp – because he was treated badly by a meteorologist as a child and has discovered the meteorologist to be the source of his belief – feels alienated from it. Counterfactual externalism about epistemic autonomy generates the implausible result that Truetemp's belief that it was 84 degrees yesterday at noon lacks the kind of autonomy that matters for knowledge. The necessity leg of the view also runs into problems. First, the view generates the wrong result in the case of mathematical knowledge. All mathematical knowledge (as well as logical knowledge) looks like it's going to be ruled as not epistemically autonomous with reference to counterfactual externalism simply because it will fail the "adequate reflection" clause, and regardless of the way one came to possess that knowledge in the first place. In paradigmatic cases of mathematical knowledge, we will not be in a position to "realistically imagine an alternative" to the known belief given the necessity of mathematical truths. And what goes for mathematical truths plausibly also goes for analytic truths. (When reflecting on your belief that bachelors are unmarried, can you "realistically imagine" an alternative?) A second problem for the necessity leg of counterfactual externalism about epistemic autonomy concerns the alienation condition that is central to it. Suppose Truetemp is a transhumanist – deeply influenced by the thinking of Ray Kurzweil.19 
Truetemp, having discovered the origins of his tempucomp-generated belief that it is 104 degrees, does not feel alienated from it in any way, owing to his philosophical view that he is (in the words of Andy Clark 2003) a "natural born cyborg." It's implausible that whether Truetemp's belief that it's 104 degrees has the kind of autonomy that matters for whether he knows that it's 104 degrees depends in any way whatsoever on whether he aligns himself philosophically with Kurzweil's and Clark's transhumanist thinking or, instead, rejects transhumanism for bioconservatism. 1.5.2 Here's a summary of where we've got to: (a) Propositional knowledge requires not just epistemically justified belief, but epistemically autonomous belief (conclusion of Section 1.2). (b) The kind of attitudinal autonomy (viz., of beliefs) that matters for knowledge is different from the kind of attitudinal autonomy that matters for moral responsibility (conclusion of Section 1.3). (c) We should reject internalism about knowledge-relevant (i.e., epistemic) autonomy, the view that the epistemic autonomy of a belief at a time, T, is determined entirely by the subject's present mental structure at T (conclusion from Section 1.4).
(d) Counterfactual externalism is implausible as a substantive form of externalism about epistemic autonomy (conclusion from Section 1.5.1). In this section, we'll look at a substantive form of externalism about epistemic autonomy that has real promise. The view I want to now defend takes as a starting point two conditions that form the backbone of Mele's (2001) history-sensitive externalism about the kind of attitudinal autonomy that matters for moral responsibility. These two key conditions are:

• a bypass condition – viz., a condition pertaining to whether the attitude in question was acquired in a way that "bypassed" the subject's cognitive faculties.
• an unsheddability condition – viz., a condition pertaining to whether the subject is able to give up, or at least attenuate the strength of, the relevant attitude.
According to Mele, an attitude is autonomous in the way that matters for moral responsibility only if it has a certain kind of history (and this regardless of whether one is aware of that history). In particular, the attitude has to have a history that is free from compulsion. And an attitude has a compulsion-free history only if it's not the case that the acquisition of the attitude satisfies both the bypass and unsheddability conditions, spelled out in a particular way. I will set aside entirely whether Mele's history-sensitive externalism is a viable account of attitudinal autonomy relevant to moral responsibility. Rather, what I want to show is that versions of these two key conditions can be used to frame a very plausible externalist account of epistemic autonomy. With this in mind, let's get a simple version of the view on the table. History-sensitive externalism about epistemic autonomy (HSEEA): S's belief that p is epistemically autonomous (viz., autonomous in the way that is necessary for propositional knowledge) at a time, t, if and only if her belief that p has a compulsion-free history at t; and this is a history it has if and only if it's not the case that S came to acquire her belief that p in a way that: (i) bypasses S's cognitive faculties, and (ii) the bypassing of such faculties issues in S's being unable to shed p. HSEEA looks strong from the start. For one thing, it looks, at least prima facie, as though it deals nicely with a range of cases we've considered so far. Beginning with PSYCHOLOGICAL TWINS: whereas internalism can't explain why Ann's belief is epistemically autonomous but Beth's is
Epistemic Autonomy and Externalism 33
not, HSEEA has a simple answer: Beth's belief has a compulsion history and Ann's doesn't. Likewise, HSEEA is not threatened by the kinds of cases that made trouble for the necessity and sufficiency legs of counterfactual externalism, and this is because the matter of how one would reflect on a given belief one has acquired is irrelevant to epistemic autonomy on HSEEA. Granted, the counterfactual condition of counterfactual externalism looked initially like it would be essential to getting the right result in cases like THERAPY TRUETEMP, the epistemic analogue to Christman's PIANO case. However, HSEEA gets the right result in that case as well. It follows from HSEEA that Therapy Truetemp's belief that it's 104 degrees is not epistemically autonomous. This is not, on HSEEA, because he would have repudiated it had he reflected on it in light of an accurate description of its origins. Rather, it's because that belief has a compulsion history, with reference to the bypass and unsheddability clauses of HSEEA. What's more, notice that HSEEA looks well suited to deal not only with the original Truetemp case, but also with the kinds of "epistemically improved" variations on the case we considered in Section 1.2, which were used to motivate an epistemic autonomy condition on propositional knowledge in the first place. What these cases exploited, after all, was the fact that (in imagined scenarios) the epistemic justification-related credentials of a belief could be attained in ways that bypass the subject's exercise of any of her cognitive faculties. A bypass condition of the sort that features in HSEEA is exactly the kind of condition that can rule out these cases in principle as cases where knowledge is present.

Question: given the work a bypass condition is able to do, isn't the addition of an unsheddability condition theoretically superfluous? The answer is that it's not.
To see why, consider a further twist on the Truetemp case, one where sheddability is present.

TRUETEMP-SHEDDABLE: In the original version of the case, it's not said explicitly, but it is implied, that Mr. Truetemp can't easily shed this belief (by eradicating it or attenuating its strength). He's at any rate stuck with it. Let's now suppose that, on the present variation of the case, this is explicitly not so. Mr. Truetemp can easily shed the belief, by simply judging the content to be false (e.g., in light of other things he believes) or otherwise attenuating its strength. Finally, let's suppose he elects not to revise this belief in any way, despite having the power to, after subjecting it to (non-compelled) rational scrutiny, including scrutiny by which he comes to find out that the mechanism he's using is a reliable one.

It's not at all evident that Truetemp doesn't know that it's 104 degrees in TRUETEMP-SHEDDABLE, even though the history of the belief includes
the fact that it was acquired in a way that clearly bypassed Truetemp's cognitive faculties. In TRUETEMP-SHEDDABLE (unlike in the original case, presumably) the power to revise or give up the belief is entirely Truetemp's, and the fact that he has not revised this particular belief (given the new evidence he has) has nothing whatsoever to do with the workings of the tempucomp. The situation would be different if, unlike in TRUETEMP-SHEDDABLE, we made explicit that whatever beliefs are caused by the tempucomp are unrevisable in any way, such that even if Truetemp acquired new evidence against such beliefs, this evidence would have no sway for him.

In light of the above, the initial presentation of HSEEA will include an unsheddability condition along with a bypass condition. That said – unfortunately – HSEEA cannot quite stand up to scrutiny without further refinements. In what follows, I want to describe three problems for HSEEA. None requires that we abandon the core HSEEA-style view, but each of the three problems requires a distinct improvement on HSEEA.

The first problem is a kind of preemption problem, which goes as follows: you can't "bypass" a cognitive faculty that isn't there to bypass. However, presumably, if a tempucomp was installed in infancy, prior to the development of any cognitive faculties, then provided a tempucomp-issued belief caused at such a point was unsheddable, it looks like it should fail an epistemic autonomy condition on knowledge no less than in a case where the cognitive faculties of a thinker were first developed and then bypassed by the tempucomp. Fortunately, there is an easy fix here, which will be to tweak the bypass component so that it is a disjunctive condition that includes either bypassing or preempting cognitive faculties.
Let's say that the subject comes to possess a belief in a way that preempts her cognitive faculties if and only if the process that issues the belief lacks (trivially) the opportunity to bypass the subject's cognitive faculties, because there are, as yet, no such faculties to bypass. The resulting tweaked version of HSEEA is as follows:

History-sensitive externalism about epistemic autonomy (HSEEA*): S's belief that p is epistemically autonomous (viz., autonomous in the way that is necessary for propositional knowledge) at a time, t, if and only if p has a compulsion-free history at t; and this is a history it has if and only if it's not the case that S came to acquire her belief that p in a way that: (i) bypasses or preempts S's cognitive faculties, and (ii) the bypassing or preemption of such faculties issues in S's being unable to shed p.

In short, HSEEA* can deal with "infant hardwiring" cases whereas the original HSEEA cannot. So far so good.
But there remain more substantive challenges. The second that I want to consider – the opportunity problem – concerns the "cognitive faculty" dimension of the bypass clause. To get a feel for this problem, let's consider two possible variations on the Truetemp backstory:

• Variation 1: The neurosurgeons, prior to implanting the tempucomp, allow Truetemp to play a role in the customization of the device prior to fitting – by giving Truetemp the option to prearrange exactly which beliefs (at a later time) will be compelled by the device. Truetemp, however, is very drunk, and agrees (capable only of muddled thought) to each belief the neurosurgeons propose that the tempucomp will later compel him to endorse.

• Variation 2: The neurosurgeons, prior to implanting the tempucomp, allow Truetemp to play a role in the customization of the device prior to fitting – by giving Truetemp the option to prearrange exactly which beliefs (at a later time) will be compelled by the device. Truetemp is sober and clear-headed; however – in a room with no books or computer – he is deprived of any information on the basis of which to assess, for any given belief the neurosurgeons propose the tempucomp will later compel him to endorse, whether it would be reasonable to agree to have it later so compelled. He accordingly makes each decision arbitrarily, by simply flipping a coin.
Without further qualification, it looks as though the bypass clause is not satisfied in either Variation 1 or Variation 2. But if that's right, then HSEEA* implies, counterintuitively, that – whereas Truetemp in the original case lacks knowledge because his belief is not epistemically autonomous – this is not the case if the backstories in either Variation 1 or Variation 2 are added. Bizarrely, then, what HSEEA* appears to imply is that your cognitive faculties haven't been "bypassed" (in the way that matters for the epistemic autonomy of a given belief) so long as they were in any manner whatsoever exercised in a way that made a difference to the acquisition of the target belief. A more plausible picture would hold that, even though Truetemp trivially exercised his faculties (while in poor shape, and poorly situated) in the course of acquiring the target belief(s), his doing so made no difference with respect to the epistemic autonomy of those beliefs. Put another way, a better formulation of the bypass clause in HSEEA* will allow us to say that Truetemp is in effectively the same position, vis-à-vis the epistemic autonomy of the beliefs he acquires in Variations 1 and 2, as he is in the original case.

A strategy for a fix, which will get Variations 1 and 2 right, is to incorporate the following idea: replace the generic "cognitive faculties" with
"cognitive competences," where – following a well-known idea in epistemology due to Ernest Sosa (2007, 2015) – a cognitive or epistemic competence is not merely a disposition to reliably attain epistemic ends (e.g., true beliefs), but a disposition to do so when (i) in proper shape; and (ii) properly situated.20 By way of analogy: it doesn't count against one's competence to drive a car if one would drive off the road when drugged or placed on abnormally slick roads. Likewise: it doesn't count against, e.g., your perceptual, reasoning, and memory-related competences if, in exercising them, you gain false beliefs when mentally compromised (i.e., in improper shape) or improperly situated (e.g., in a room with no possibility of acquiring the kind of evidence that would normally bear on whether to accept the target proposition). Whereas the acquisition of the relevant beliefs in Variations 1 and 2 above does not bypass Truetemp's cognitive faculties, it does bypass his relevant cognitive competences. Thus, what Variations 1 and 2 suggest is a transition from HSEEA* to HSEEA**:

History-sensitive externalism about epistemic autonomy (HSEEA**): S's belief that p is epistemically autonomous (viz., autonomous in the way that is necessary for propositional knowledge) at a time, t, if and only if p has a compulsion-free history at t; and this is a history it has if and only if it's not the case that S came to acquire her belief that p in a way that: (i) bypasses or preempts S's cognitive competences, and (ii) the bypassing or preemption of such competences issues in S's being unable to shed p.

While HSEEA** can handle all the cases we've discussed thus far, there remains one further important area where the condition needs a refinement, which this section will conclude by discussing. We can call this third issue the remote unsheddability problem. The crux of the remote unsheddability problem is illustrated in cases like the following.
Stipulate that Truetemp's acquisition of some belief, X, bypassed his cognitive competences, as per HSEEA**. But now suppose Truetemp's belief that X is not, strictly speaking, unsheddable, simply because it is remotely possible that it be shed, even though it would not be shed through any normal course of competent inquiry.

Remote sheddability cases like the above reveal the need for an amendment to the unsheddability clause in HSEEA** in order to avoid the unwanted result that almost no belief, in any circumstance, will be unsheddable. The natural fix here is to block an unrestricted modal reading of "unsheddability" by restricting the relevant class of worlds to nearby worlds – thus replacing "unable to shed" with unable to "easily enough" shed, as follows:
History-sensitive externalism about epistemic autonomy (HSEEA***): S's belief that p is epistemically autonomous (viz., autonomous in the way that is necessary for propositional knowledge) at a time, t, if and only if p has a compulsion-free history at t; and this is a history it has if and only if it's not the case that S came to acquire her belief that p in a way that: (i) bypasses or preempts S's cognitive competences, and (ii) the bypassing or preemption of such competences issues in S's being unable to easily enough shed p.

HSEEA***, unlike HSEEA**, can make sense of the fact that – for example – Truetemp's belief couldn't change from (i) lacking the kind of epistemic autonomy that's necessary for knowledge, to (ii) possessing that kind of autonomy, simply were we to alter the belief's sheddability profile such that there is a far-off world (e.g., one where he is struck just right by a bolt of lightning, remapping his cognitive architecture) in which the belief could be shed.
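For perspicuity, the nested biconditional structure of the final condition can be rendered schematically as follows. This is only a sketch; the predicate abbreviations are mine, not the author's:

```latex
% Schematic rendering of HSEEA*** (predicate abbreviations are mine):
%   Aut(S, p, t) : S's belief that p is epistemically autonomous at t
%   CF(p, t)     : p has a compulsion-free history at t
%   Byp(S, p)    : S acquired the belief that p in a way that bypasses
%                  S's cognitive competences
%   Pre(S, p)    : S acquired the belief that p in a way that preempts
%                  S's cognitive competences
%   Unshed(S, p) : the bypassing/preemption issues in S's being unable
%                  to easily enough (i.e., in nearby worlds) shed p
\begin{align*}
  \mathrm{Aut}(S, p, t) &\leftrightarrow \mathrm{CF}(p, t)\\
  \mathrm{CF}(p, t) &\leftrightarrow
    \neg\big[\,(\mathrm{Byp}(S, p) \lor \mathrm{Pre}(S, p))
      \land \mathrm{Unshed}(S, p)\,\big]
\end{align*}
```

So rendered, it is clear that a belief fails to be epistemically autonomous only when the bypass/preemption disjunction and the unsheddability condition are satisfied together, which is why neither condition alone is superfluous.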
1.6
Let's recap. Although a flat-footed history-sensitive externalist account of epistemic autonomy – i.e., HSEEA – outperforms its internalist and counterfactual externalist competitors, HSEEA nonetheless faced three distinct kinds of problems, discussed above: (i) the preemption problem, (ii) the opportunity problem, and (iii) the remote unsheddability problem. I've discussed these problems separately because each forces a different kind of revision to HSEEA. The version we finished with, HSEEA***, reflects all three revisions to the original HSEEA account, and so has the resources to handle all three kinds of problems. HSEEA*** might well benefit from further technical refinements, given that the view cobbles together different working parts. The aim here is not to make those further refinements, but rather to have made enough of the key refinements to illustrate what the view has the power to do in comparison with internalist and externalist alternatives.21
Notes
1 For further discussion of this distinction, see Mele (2001, 144–146).
2 See, e.g., Mele (2001, 2003); Fischer (2011); Fischer and Ravizza (2000); Weimer (2009); Cyr (2019); Cuypers (2006); and Levy (2011).
3 For a more detailed treatment of the case for history-sensitive externalism about epistemic autonomy, see Carter (2021, Ch. 2).
4 Cf. Nagel (2016).
5 We are continuing to hold fixed that the compelled belief results from the sophisticated causal mechanism Lehrer describes.
6 Suppose this is either a causal basis or a doxastic basis – or a combination of both – depending on what kind of account of the epistemic basing relation one favors. For discussion, see Korcz (2019) and Carter and Bondy (2019).
7 Note that Greco and Pritchard disagree about the extent to which the correctness of a true belief must be creditable to ability in order for one to know.
8 For some representative discussions, see Carter and Kallestrup (2019); Palermos (2011, 2014); Menary (2007); Andrada (2019); Clark (2015); Carter and Kallestrup (2017); and Carter and Palermos (2015a, 2015b).
9 The reason for (ii) is that, plausibly, for any proposed epistemic condition we might appeal to in an attempt to explain why the no-knowledge intuition in (i) holds, we can simply imagine a variation of that case where the following conjunction applies: that proposed epistemic condition is satisfied and it's the case that the acquisition of the target belief bypasses the thinker's own exercise of her cognitive faculties.
10 See Beebe (2004). For an alternative diagnosis that does not fall neatly into any of these categories, see Kaplan (2018), whose position is that the matter of whether Truetemp knows lacks any methodological import, and that the theorist should simply remain agnostic on the point.
11 For a detailed defense of pursuing this strategy, see Carter (2021, Ch. 1).
12 See, for discussion, Mele (2001, 176), Fischer and Ravizza (2000, 50), and Carter (2018).
13 This is not to say that it would be irrelevant to whether Truetemp knows if, in addition to self-arranging the procedure (e.g., by paying a superscientist), Truetemp also came to know the ins and outs of how the device works. (For related discussion on this point, see Carter 2018, 2021, Ch. 2.)
Rather, the idea here is just that the relevance of the fact of prior self-arrangement is clear when what's at issue is moral responsibility, and not when what's at issue is whether one possesses knowledge.
14 Examples of internalism about epistemic autonomy would be (epistemic variations on) internalist views of attitudinal autonomy, such as those defended by Frankfurt (1988) and Dworkin (1981).
15 See, e.g., Cohen (1984) and Lehrer and Cohen (1983) for presentations of the problem, and Littlejohn (2009) for an overview.
16 I borrow this succinct summation of Christman's view from Weimer (2009, 186).
17 See Weimer (2009), and particularly his discussion of the case of Dora and Cara (Weimer 2009, 190).
18 Mr. Truetemp from the original case – suppose – though any of the versions from §2.1 will do.
19 See, e.g., Kurzweil (2005).
20 See, e.g., Sosa (2010).
21 Thanks to Jonathan Matheson and Kirk Lougheed for helpful feedback. This chapter was written as part of the Leverhulme-funded 'A Virtue Epistemology of Trust' (#RPG-2019-302) project, which is hosted by the University of Glasgow's COGITO Epistemology Research Centre, and I'm grateful to the Leverhulme Trust for supporting this research.
References
Alston, W. P. (1988). An internalist externalism. Synthese, 74(3), 265–283.
Andrada, G. (2019). Mind the notebook. Synthese, 1–20. https://doi.org/10.1007/s11229-019-02365-9.
Beebe, J. R. (2004). Reliabilism, truetemp and new perceptual faculties. Synthese, 140(3), 307–329.
BonJour, L. (1980). Externalist theories of empirical knowledge. Midwest Studies in Philosophy, 5, 53–73.
Carter, J. A. (2018). Virtue epistemology, enhancement, and control. Metaphilosophy, 49(3), 283–304.
Carter, J. A. (2021). Autonomous knowledge: Radical enhancement, autonomy, and the future of knowing. Oxford: Oxford University Press.
Carter, J. A., & Bondy, P. (2019). Well-founded belief: New essays on the epistemic basing relation. London: Routledge.
Carter, J. A., & Kallestrup, J. (2017). Extended circularity. In J. A. Carter, A. Clark, J. Kallestrup, S. O. Palermos, & D. Pritchard (Eds.), Extended epistemology. Oxford: Oxford University Press.
Carter, J. A., & Kallestrup, J. (2019). Varieties of cognitive integration. Noûs. https://doi.org/10.1111/nous.12288.
Carter, J. A., Kallestrup, J., Palermos, S. O., & Pritchard, D. (2014). Varieties of externalism. Philosophical Issues, 24(1), 63–109.
Carter, J. A., & Palermos, S. O. (2015a). Active externalism and epistemic internalism. Erkenntnis, 80(4), 753–772.
Carter, J. A., & Palermos, S. O. (2015b). Active externalism and epistemology. Oxford Bibliographies. https://doi.org/10.1093/OBO/9780195396577-0285.
Carter, J. A., & Pritchard, D. (2020). Extended entitlement. In P. J. Graham & N. Pederson (Eds.), New essays on epistemic entitlement. Oxford: Oxford University Press.
Christman, J. (1991). Autonomy and personal history. Canadian Journal of Philosophy, 21(1), 1–24.
Christman, J. (2007). Autonomy, history, and the subject of justice. Social Theory and Practice, 33(1), 1–26.
Clark, A. (2003). Natural-born cyborgs: Minds, technologies, and the future of human intelligence. Oxford: Oxford University Press.
Clark, A. (2015). What 'extended me' knows. Synthese, 192(11), 3757–3775.
Cohen, S. (1984). Justification and truth.
Philosophical Studies, 46(3), 279–295.
Cuypers, S. E. (2006). The trouble with externalist compatibilist autonomy. Philosophical Studies, 129(2), 171–196.
Cyr, T. W. (2019). Why compatibilists must be internalists. The Journal of Ethics, 23(4), 473–484.
Dworkin, G. (1981). The concept of autonomy. Grazer Philosophische Studien, 12, 203–213.
Fischer, J. M. (2011). Deep control: Essays on free will and value. Oxford: Oxford University Press.
Fischer, J. M., & Ravizza, M. (2000). Responsibility and control: A theory of moral responsibility. Cambridge: Cambridge University Press.
Frankfurt, H. G. (1988). The importance of what we care about: Philosophical essays. Cambridge: Cambridge University Press.
Goldman, A. (1979). What is justified belief? In Justification and knowledge (pp. 1–23). Springer.
Goldman, A. (1999). Knowledge in a social world. Oxford: Oxford University Press.
Goldman, A. (2016). Reply to Nagel. In H. Kornblith & B. McLaughlin (Eds.), Alvin Goldman and his critics (pp. 253–256). Oxford: Blackwell.
Greco, J. (2010). Achieving knowledge: A virtue-theoretic account of epistemic normativity. Cambridge: Cambridge University Press.
Kaplan, M. (2018). Austin's way with skepticism: An essay on philosophical method. Oxford: Oxford University Press.
Kass, L. R. (2004). Life, liberty and the defense of dignity: The challenge for bioethics. New York: Encounter Books.
Korcz, K. A. (2019). The epistemic basing relation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2019 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2019/entries/basing-epistemic/.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. London: Penguin.
Lackey, J. (2007). Why we don't deserve credit for everything we know. Synthese, 158(3), 345–361.
Lehrer, K. (1990). Theory of knowledge. London: Routledge.
Lehrer, K., & Cohen, S. (1983). Justification, truth, and coherence. Synthese, 55(2), 191–207.
Levy, N. (2011). Hard luck: How luck undermines free will and moral responsibility. Oxford: Oxford University Press.
Littlejohn, C. (2009). The new evil demon problem. Internet Encyclopedia of Philosophy. https://iep.utm.edu/evil-new/.
Mele, A. R. (2001). Autonomous agents: From self-control to autonomy. Oxford: Oxford University Press.
Mele, A. R. (2003). Agents' abilities. Noûs, 37(3), 447–470.
Menary, R. (2007). Cognitive integration: Mind and cognition unbounded. Springer.
Nagel, J. (2016). Knowledge and reliability. In H. Kornblith & B. McLaughlin (Eds.), Alvin Goldman and his critics (pp. 237–256). Oxford: Blackwell.
Palermos, S. O. (2011). Belief-forming processes, extended. Review of Philosophy and Psychology, 2(4), 741–765.
Palermos, S. O. (2014). Knowledge and cognitive integration. Synthese, 191(8), 1931–1951.
Pritchard, D. (2010).
Cognitive ability and the extended cognition thesis. Synthese, 175(1), 133–151.
Pritchard, D. (2012). Anti-luck virtue epistemology. Journal of Philosophy, 109(3), 247–279.
Sandel, M. J. (2009). The case against perfection. Cambridge, MA: Harvard University Press.
Sosa, E. (1991). Knowledge in perspective: Selected essays in epistemology. Cambridge: Cambridge University Press.
Sosa, E. (2007). A virtue epistemology: Apt belief and reflective knowledge, volume 1. Oxford: Oxford University Press.
Sosa, E. (2010). How competence matters in epistemology. Philosophical Perspectives, 24(1), 465–475.
Sosa, E. (2015). Judgment and agency. Oxford: Oxford University Press.
Weimer, S. (2009). Externalist autonomy and availability of alternatives. Social Theory and Practice, 35(2), 169–200.
2 Autonomy, Reflection, and Education
Shane Ryan
2.1 Introduction
I argue that if we accept the promotion of autonomy as an aim of education, then we should accept the promotion of skillful reflection as an aim of education. I set out the Dual Process Hypothesis of Reflection (DPHR), according to which both Type 1 and Type 2 cognitive processes play a role in an agent's reflection. I discuss how an agent's reflection may be skillful, and how such reflection contributes to superior autonomy. I argue, however, that, on the DPHR, skillful reflection is not cultivated by staying neutral on the good. The cultivation of reflection requires training in what is worthy of reflection. This has implications for the view that autonomy can be promoted while remaining neutral on the good.
2.2 Autonomy and Education: An Overview
For now, let us simply stipulate that an autonomous agent is a self-governing agent (Dryden 2020).1 A central aim of education in contemporary liberal societies is the promotion of autonomy, which involves attempting to develop the autonomy of those being educated.2 The promotion of autonomy as an aim of education has been supported by numerous theorists (Brighouse, 1998, 2005; Norman, 1994; White, 1991; Siegel, 1988), although there are also some dissenters (Swaine, 2012; Hand, 2006).3

Why promote autonomy? Autonomy may be regarded as an end in itself or as instrumental to achieving valuable ends, either of which could provide a justification for its promotion as an aim of education. If autonomy is regarded as an end in itself, then a view about what is good for human beings is being adopted.4 On such a view, autonomy might be regarded as a necessary part of living well. Such views tend to be based on further views regarding human nature – views about how it's best for beings like us to live or perfect our nature. Views with such commitments are perfectionist views.5 Perfectionist theorists defend accounts of human good that they take to be objective (Wall, 2008). A perfectionist's particular account of human
good shapes her view of the politics that we should have. Likewise, such an account has implications for the aims we set for education. More specifically, with such an account of human good at hand, some aims for education, or the fulfillment of such aims, could serve to promote or hinder human good.6

An alternative approach is to defend the cultivation of autonomy as an educational aim without making the same substantial commitments that the perfectionist makes regarding the good. By doing so, one could try to stay neutral on questions about human nature and objective human goods, and, therefore, not commit to a state education policy based on particular answers to these questions. Clayton (2009: 95) argues, for example, that, with regard to children, "educational decisions must be defended in an anti-perfectionist manner." The promotion of autonomy may instead be defended on the basis that autonomy puts individual agents in a position to determine their own ends, and as such has instrumental value in a liberal, pluralistic society.7 While this chapter provides an argument that we should educate for skillful reflection assuming we should educate for autonomy, a discussed implication of the argument is that we can't promote skillful reflection while staying neutral on the good.
2.3 The Argument
This chapter argues that we should educate for skillful reflection, or reflection as a virtue.8 The argument rests on a premise, previously discussed, that we should educate for autonomy. The argument is as follows:

P1. We should educate for autonomy.
P2. If we should educate for autonomy, then we should educate for reflection.
C1. We should educate for reflection.
P3. If we should educate for reflection, then we should educate for either skillful or unskillful reflection.
P4. We shouldn't educate for unskillful reflection.
C2. We should educate for skillful reflection.

Key points made in support of the premises are that autonomy requires reflection (P2), that reflection can be of better or worse quality, and that we should accept the Dual Process Hypothesis of Reflection (P3). The chapter doesn't stop at (C2); rather, when this point is reached in the argument, the chapter sets out the DPHR as a basis for accounting for what educating for skillful reflection would entail. An interesting offshoot of the argument, which is flagged for future consideration, is that autonomy may be of better or worse quality on the basis that reflection, a requirement and perhaps constitutive part of autonomy, can be of better or
worse quality. If it's right that there may be low-quality autonomy, then this raises the question as to the sort of respect we should show for such autonomy.
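The core argument of Section 2.3 is deductively valid, as can be checked mechanically. Here is a minimal sketch in Lean; the proposition letters are my abbreviations, not the chapter's:

```lean
-- Propositional sketch of the argument in Section 2.3.
-- Abbreviations (mine): A = "we should educate for autonomy",
-- R = "we should educate for reflection",
-- S = "we should educate for skillful reflection",
-- U = "we should educate for unskillful reflection".
variable (A R S U : Prop)

-- P1 : A,  P2 : A → R,  P3 : R → S ∨ U,  P4 : ¬U  ⊢  C2 : S
example (p1 : A) (p2 : A → R) (p3 : R → S ∨ U) (p4 : ¬U) : S :=
  (p3 (p2 p1)).elim             -- p2 p1 yields C1 (R); p3 then yields S ∨ U
    (fun hs => hs)              -- left disjunct is C2 directly
    (fun hu => absurd hu p4)    -- right disjunct contradicts P4
```

The philosophical work, of course, lies in defending P2 and P4, not in the validity of the inference.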
2.4 Accounts of Autonomy
Let's return now to how autonomy should be understood. Roughly, autonomy is a capacity for self-determination or self-governance (Dryden, 2020; Christman, 2020). There are different sorts of accounts of autonomy: content-neutral, or procedural, accounts (Frankfurt, 1988; Dworkin, 1988), and substantive accounts (Wolf, 1987; Benson, 1991). Substantive accounts don't necessarily deny proposed procedural requirements; they merely go further than mere procedural accounts.9 As such, we'll focus our attention on procedural accounts.

Procedural accounts defend a conception of autonomy according to which autonomy is constituted by certain capacities. Autonomous action is action that involves the appropriate exercise of capacities constitutive of autonomy. According to Frankfurt (1988), there must be second-order endorsement of first-order volitions in order for there to be autonomous action. It's natural to think that reflection will play a crucial role in such second-order endorsements. Dworkin (1988) endorses the same sort of view. In fact, for Dworkin the true self is constituted by higher-order preferences. If the self is so constituted, then it's understandable that reflection is thought crucial for self-governance.

In fact, accounts like these that give reflection a central role are typical. Dryden (2020) writes that on a procedural account, action is "autonomous if it has been endorsed by a process of critical reflection" and that such accounts represent the majority of accounts of personal autonomy. In other words, procedural accounts of autonomy generally attribute an important role to reflection. It might seem that accepting reflection as a condition for autonomy risks over-intellectualizing autonomy, and so yields an overly demanding account of autonomy.
Nevertheless, if we identify the intellect in some strong sense with the self, and if autonomy is self-governance or something relevantly similar, then perhaps autonomy is necessarily intellectual.10

If we want to promote autonomy, then given the significance of reflection for autonomy, we have reason to promote reflection. Of course, this conclusion is in line with a vast literature on philosophy of education that accepts that we should promote autonomy. As has been discussed, on various accounts of autonomy, autonomy is constituted by certain capacities, and reflection plays a central role.11 The question as to what promoting reflection might involve, however, remains. In order to develop an answer, we need to know more about the nature of reflection. I do this first by considering Dewey's characterization of reflection, after which I make the case that reflection may be of
better or worse quality. Having argued that we should promote skillful reflection, I set out the DPHR as a basis for providing detail as to what good-quality reflection looks like.
2.5 Characterizing Reflection
John Dewey (1933, p. 9) plausibly characterizes reflection as "active, persistent, and careful consideration" (Mi and Ryan, 2016). It seems right that a wandering mind moving quickly without direction from one thing to another shouldn't be thought of as engaged in reflection. Such a person's thoughts seem neither active nor persistent. The "careful" part of the characterization is less straightforward. Still, it also seems right that the daydreamer who is enjoying her thoughts and not engaged in careful consideration has not passed a necessary threshold to be counted as being engaged in reflection. Such an agent seems to lack the level of critical attentiveness to her own thoughts needed to be counted as having engaged in reflection.

Reflection is a cognitive capacity that can be directed at oneself or at some matter, and as such has a broad scope (Mi and Ryan, 2016; Sosa, 2014). Our everyday use of "reflection" bears out this description. We sometimes say that one should reflect on oneself or one's actions if one behaves badly. Similarly, we sometimes say that some matter, say a job offer or business proposal, should be reflected upon prior to a decision.
2.6 Skillful and Unskillful Reflection
While reflection is a capacity that requires active, persistent, and careful consideration, there's more to reflection than this. Reflection often produces judgments and beliefs. As such, reflection may be better or worse epistemically. When one reflects on, say, what lesson one should draw from an experience, one may make a judgment that's good or bad epistemically. Depending on a final account of epistemically relevant factors, good epistemically might mean, for example, sensitive to evidence. The person whose reflection is epistemically bad might be an agent who is subject to epistemic vices, wishful thinking, poor reasoning, and so on.

If we accept that reflection plays a central role in autonomy, and that on that basis we should promote reflection as a way of promoting autonomy, then it makes sense to promote (epistemically) good or skillful reflection. After all, why promote the act of reflecting regardless of the quality of that reflection? If it's epistemically bad or unskillful reflection, then it may be error-strewn, muddled, and so on. It's hard to see how one's reflection, if it is very poor epistemically, enables autonomy. Should we say that an agent whose reflection is of an epistemic quality such that they will make lots of mistakes in their thinking is acting autonomously on the basis of such reflection? If autonomy is
Autonomy, Reflection, and Education 45

constituted by capacities and an agent's capacity of reflection may be better or worse, it seems natural to wonder what bearing such variation has on autonomy. Even if it is the case that the use of reflection regardless of quality can enable autonomy, we'll merely have displaced our problem. We'll rightly wonder about the quality of autonomy that is informed by low-quality or unskillful reflection. The agent whose choice to A comes from confused and irrational reflection about whether to A, if they have autonomy at all, seems to have a sort of defective or compromised autonomy. If their self-reflection is such that the agent has a significantly mistaken view of themselves, then again their self-governance seems compromised. Their autonomy, such as it is, would come from a capacity, reflection, that isn't working well. Perhaps then it is natural to think that an agent with a low quality of reflection will have a lower quality of autonomy. Autonomy that arises from epistemically skillful reflection certainly seems preferable.12 If a child learns to reflect in a way such that she's capable of considering various relevant alternatives when faced with a decision, of reasoning logically, and so on, she seems better placed to intelligently examine and choose actions to realize her ends, and to choose those ends in the first place. If she's bad at reflecting, all that seems less likely. Ultimately, if promoting autonomy educationally requires promoting reflection educationally, and reflection can be skillful or unskillful, good or bad, then we have reason to promote skillful reflection. If we want to educate for skillful reflection, then we need a more detailed account of the nature of reflection. In the section that follows I set out just such an account.
2.7 The Dual Process Hypothesis of Reflection

As has been said, some agents are better at reflecting than others. An agent skilled at reflection might be thought to enjoy autonomy of a higher quality than an agent who is, say, clumsy in his reflection. The agent who is clumsy in his reflection may be so in that he, say, often fails to reflect in a logical way. An account of reflection would give us a basis to understand the various ways reflection can go wrong, as well as right. The Dual Process Hypothesis of Reflection (DPHR) is a hypothesis about how reflection works (Mi and Ryan, 2020; Ryan and Mi, 2018; Mi and Ryan, 2016). To understand the hypothesis, we should understand dual process theory. According to dual process theory, there are two types of cognitive processes, Type 1 (or System 1) and Type 2 (or System 2) processes. Type 1 processes are "fast, automatic, high capacity," while Type 2 processes are "slow, controlled, low capacity" (Evans, 2014, p. 130; also see Kahneman, 2011, pp. 20–21). There is also widespread agreement that Type 2 processes are effortful, while Type 1 processes aren't (Evans, 2008, p. 270; Kahneman,
46 Shane Ryan

2011, p. 21). Evans (2014, p. 130) labels Type 1 processes "intuitive" and Type 2 processes "reflective."13 Kahneman (2011, pp. 21–22) offers the following examples of the workings of Type 1 and Type 2 cognitive processes:

Type 1 examples:
Detecting that one object is more distant than another.
Orienting to the source of a sudden sound.
Completing the phrase "bread and …"
Making a "disgust face" when shown a horrible picture.
Detecting hostility in a voice.
Answering 2 + 2 = ?
Reading words on large billboards.
Driving a car on an empty road.
Finding a strong move in chess (if you are a chess master).
Understanding simple sentences.

Type 2 examples:
Bracing for the starter gun in a race.
Focusing attention on the clowns in the circus.
Focusing on the voice of a particular person in a crowded and noisy room.
Looking for a woman with white hair.
Searching memory to identify a surprising sound.
Counting the occurrences of the letter a in a page of text.
Comparing two washing machines for overall value.
Checking the validity of a complex logical argument.

DPHR is a hypothesis about how the cognitive processes constitutive of reflection operate within a dual process model of human cognition. The characterization of reflection as active, persistent, and careful consideration suggests a type of cognitive process that is effortful, slow, and controlled – a Type 2 process. Certainly, this seems consistent with how reflection is often experienced. There is deliberation, effort, and sequential reasoning involved when reflection takes place. On the dual process model of human cognition, it might, therefore, be natural to think of reflection as solely a Type 2 process. Nonetheless, it would be a mistake to claim that it is solely Type 2 processes that are involved in reflection. The when and how of our reflection are plausibly accounted for by Type 1 processes.
This, as we shall see, is significant for educating for reflection and so for the promotion of autonomy as an aim of education. The initiation of reflection is often the result of Type 1 processes.14 Sometimes, something seems odd about a situation or statement and we
find ourselves prompted to reflect. Sometimes something goes wrong, say, the table we were assembling doesn't seem to be turning out the way it was supposed to. Again, an automatic response is to reflect on where we have gone wrong. On other occasions, nothing is odd and nothing has gone wrong, but we may nevertheless be in a sort of situation that prompts a reflective response. We may, for example, face important choices, such as whether or not we should accept a job offer. Of course, many of us start to reflect in situations in which we're not faced with a particularly important choice; we might simply be faced with a choice about which juice to buy. Simply thinking that something is important might on some occasions be enough to initiate reflective thought. Regarding my brother as important, I might simply reflect on his life and how he became the person he became. Sometimes, however, one's reflection may be prompted not by oneself, but by another – for example, by a teacher's instruction.15 Type 1 processes also shape the how of our reflection with regard to the substance of our reflection. As an agent engages in reflection, memories, associations, and ideas may feed her reflection (Mi and Ryan, 2016). If, for example, one reflects on the current political situation in the Middle East, say about the prospects of the region becoming more peaceful, then memories of past events in the region, and ideas one has about the relevant dynamics of the conflict, as well as conflict resolution, may come to mind and inform one's reflection on the matter. Type 1 processes also provide the reflecting agent with intuitions, hunches, and so on as she reflects (Pelaccia et al., 2011; Mi and Ryan, 2016). Type 1 processes are often shaped by learnt as well as native responses (Hogarth, 2001, 2005; Mi and Ryan, 2016). Via such learnt responses, Type 2 processes may ultimately shape Type 1 processes.
Such responses may be inculcated through systems of reward and punishment, though they may also be inculcated in other ways. One such way is reflection and repetition (Bortolotti, 2011). A procedure, say in a new job, that an agent has to carefully reflect on in order to follow can be expected to be followed with ease after sufficient repetitions of that procedure (Mi and Ryan, 2016). An example of the far-reaching influence of our reflection can be seen in relation to scholarly work. Reflection allows us to question the world as it is presented to us by others and to think independently. Reflection is a process by which we may construct or conceive of alternatives to the way things appear to us. Good scientific, philosophical, and religious hypotheses, and many of our theories, are the result of skillful reflection. Such hypotheses and theories plausibly contribute greatly to our understanding of the world around us and furnish us with alternative models that facilitate independent action. How things appear will often be the product of a background theory. Skillful reflection can positively influence Type 1 processes epistemically;
skillful reflective thinking can, and often does, play a role in the training of Type 1 processes. Kahneman's example of the master chess player is a case in point. Such a player can be expected to have had to reflect on moves that she can later simply "see." Similarly, the morally virtuous agent, who on many accounts will also have trained and reflected, comes to see the world in moral color (Fricker, 2007). Reflection can thus play a role not only in scrutinizing appearances but also in generating appearances. Skillful reflection is likely to generate appearances that promote truth and understanding, which enables a superior quality of autonomy to one fueled by, say, ignorance or epistemically vicious thinking. Accurate and relevant thinking feeding reflection, as well as careful and logical thinking regarding those thoughts, together make for skillful reflection. A further noteworthy aspect of reflection on the Dual Process Hypothesis, however, is the impossibility of an end-neutral cultivation of skillful reflection. On the model presented here, Type 1 and Type 2 processes work in tandem to produce reflection. An education that seeks to cultivate skillful reflection for the end of autonomy will, however, have to cultivate reflective responses to certain situations. We need Type 1 processes to function – we encounter lots of information in our environment and lots of stimuli that may induce reflection. However, as finite cognitive creatures, we can't reflect on everything. This raises a question for those interested in educating for reflection: what is worthy of reflection?
2.8 Education and Reflection

DPHR has implications for educating for reflection and, ultimately, for promoting autonomy. If we accept DPHR, then we accept a hypothesis regarding how reflection works as a process. Depending on a number of variables, reflection in particular individuals on certain sorts of matters may work well or badly. Working well involves, amongst other things, working well epistemically.16 Whether reflection works well epistemically will depend on the form and content of both Type 1 and Type 2 processes. Type 1 processes, as previously mentioned, feed Type 2 processes with memories, perceptions, associations, and so on. Whether these are accurate can be expected to affect the epistemic quality of the resulting reflection. If the active, persistent, and careful consideration draws on misrememberings, irrelevances, or misleading associations as its content, then it is less likely that reflecting will result in getting things right. Similarly, if the steps that Type 1 processes prompt in reflection are illogical, then, generally, we should be less optimistic about getting things right than if the steps prompted were logical.17 Type 2 processes can be expected to be negatively affected in a like way if the patterns of reasoning that we're prone
to are intellectually vicious – say, narrow-minded, bigoted, intellectually cowardly, and so on. Furthermore, our cognitive capacity to retain and manipulate information in active memory may differ, as may our capacity to sustain concentration on some matter. Education can contribute to making the sorts of variables discussed above more optimal, such that when agents engage in reflection, that reflection is more likely to work well epistemically. When we reflect is also a matter of epistemic significance. As mentioned in the previous section, we can't reflect all the time about all matters. Sometimes the results are better if only Type 1 cognitive processes are employed (Kahneman, 2011). There is a developing literature on the circumstances in which reflection is particularly prone to going wrong or in which it is simply the wrong cognitive tool to employ (Kornblith, 2012; Kahneman, 2011). An education concerned with reflection – and an education should be concerned with reflection if autonomy is an aim of education – should be concerned with educating for when reflection should be employed. The occasions on which an agent typically reflects have been discussed previously. One of the occasions flagged was when a person faces an important decision. This is important for the autonomy of the agent (see Frankfurt, 1988; Dworkin, 1988) and, perhaps, is also important epistemically. What is considered an important decision will, of course, vary from person to person and from context to context. Educating for reflection when faced with important decisions seems appropriate, although educating children to engage in reflection at certain points will inevitably involve educating for particular values. Does accepting a job that leaves one with very little time to spend with one's family warrant reflection? Does the offer to join a trade union warrant reflection? Does the impact of a consumer choice on the environment warrant reflection?
Perhaps children should be educated to treat each of these decisions as important and so worthy of reflection. Regarding these choices as important is, however, done on the basis of particular values. Nevertheless, if we are to educate for reflection when faced with important decisions, then inevitably we have to take a stand on the sorts of things that are important. This point doesn't just apply to important choices. As mentioned previously, we sometimes reflect when something goes wrong. When I find that my bicycle brakes aren't working, I reflect on why that is the case. Again, however, what counts as going wrong often presupposes certain values. When a friend makes a borderline racist joke, should one reflect on one's friendship with that friend? Should one reflect on one's romantic relationship when one's partner flirts with someone else? Part of an education for reflection will have to include educating for when to reflect. It doesn't seem that this can be done, however, without taking substantive positions on values and hence, ultimately, on the good.
One response might be to argue that autonomy may be promoted without educating as to when one should engage in reflection. Reflection might then be treated as a tool that can be educated for, and this tool can be interfaced with various value systems. When an agent experiences a decision as important, or, say, has a bad experience, reflection, as a result of education, may be prompted. Educating for reflection, though, seems to make such neutrality impossible. One way or another, there will be explicit or implicit suggestions as to when to reflect over the course of educating for reflection.18 So while, as was pointed out near the beginning of the chapter, a central aim of education in contemporary liberal societies is the promotion of autonomy, it's one that isn't achieved by adopting neutrality on the good.
2.9 Conclusion

As discussed, there is a large body of work in the literature that supports promoting autonomy in education and that holds reflection to be central to autonomy. This chapter presents an argument that we should promote reflection in education if we want to promote autonomy in education. The nature of reflection is discussed and the case is made that reflection may work better or worse. This discussion involves an explication of DPHR to further our understanding of reflection. According to this approach, reflection depends on both Type 1 and Type 2 cognitive processes. On the basis of DPHR, detail is provided as to how reflection might be educated for, but also how doing so would involve making value commitments that prevent education from remaining neutral on the good.
Notes

1 Zagzebski (2015) prefers "rational self-governing," which is in line with a Platonic formulation of autonomy. For criticism of Zagzebski's own particular conception of rationality, see Pritchard and Ryan (2014).
2 The argument in this chapter depends on the promotion of autonomy not merely being the promotion of the value of autonomy. If it were only the value of autonomy that concerned us, then there would not need to be any related concern with reflection. It is not, however, the promotion of the mere value of autonomy that concerns us.
3 Swaine (2012) rejects the promotion of the ideal of autonomy as it's typically construed. He argues that that ideal is neither sound nor worthy of promotion. Swaine offers an alternative ideal that involves the development of critical capacities and skills and which is character-based. He writes that his approach would facilitate "astute rational reflection." Also see Schinkel (2010), who rejects justifications provided for the compulsory promotion of autonomy.
4 See, for example, Callan (2002). Morgan (2013) provides a discussion of autonomy as instrumentally valuable for reaching Nirvana, rather than autonomy itself being regarded as constitutive of the good life.
5 Levinson (1999) endorses a weak perfectionist view in her account of a liberal education.
6 Of course, a perfectionist justification for the promotion of autonomy is not uncontroversial. Value pluralists, such as Isaiah Berlin, hold that there is no appropriate way for us to set out values and their rankings relative to one another. This falls out of Berlin's (n.d., p. 217) claim that the goods we seek are sometimes incommensurable. We can't, for example, justify to another agent the good of autonomy over, say, the good of pleasure (Gaus and Courtland, 2011).
7 Such a view wouldn't commit one to saying that what one chooses is valuable for human flourishing, or even that one's choosing is valuable for human flourishing. The thought rather is that in a liberal society different individuals have different ends, not a commitment that those ends are conducive to human flourishing.
8 Skilful reflection is conceived of as a reliabilist virtue (Mi and Ryan, 2016, 2020). In other words, it is a virtue that produces a preponderance of true beliefs over false beliefs. Skilful reflection as a reliabilist virtue, then, isn't reflection that is understood to be procedurally good independent of the outcomes it produces. See Mi and Ryan (2020; 2016) for more on how "skilful" reflection is understood.
9 Substantive accounts go further than mere procedural accounts. The ways they do so differ, although they may place a normative constraint on autonomy, such as requiring, with Wolf (1990), that an autonomous agent can distinguish right from wrong (Piper, 2020).
10 I take it that the case for a reflection condition is all the stronger for intellectual autonomy.
11 There is a challenge in the literature to the claim that reflection or "critical reflection" is necessary for autonomy; see Noggle (1997). My point here isn't to argue that it's conceptually impossible to have autonomy without reflection.
My aim isn’t to provide an analysis of the necessary conditions of autonomy. My point is that reflection plays an important role in the sort of autonomy with which we are familiar and that without reflection an agent’s autonomy would likely be of lower quality. Noggle (1997: 507) grants that critical reflection seems to contribute to a more efficient autonomy. 12 A related though distinct point is that autonomy that is produced by bad thinking also seems less valuable than it could be. 13 Not too much should be made of the reflective label here. The label is intended as a guide to the process type, rather than an analysis of the process type. Type 2 processes have also been labelled analytic, while Type 1 processes have been labelled heuristic (Evans 2008, p. 257). 14 Also see Evans and Stanovich (2013, pp. 236–237) for the claim that the when of our reflection is often down to Type 1 processes. 15 Type 1 processes will still play a crucial role in the treatment of such instructions (Mi and Ryan, 2016; Ryan, 2014). 16 Reflection may also work well or badly prudentially and whether it is doing so may depend on an evaluation independent of whether it’s working well epistemically. Conceptually it’s possible that on some occasions that reflection, even epistemically good reflection, may sometimes be detrimental to the prudential value of an agent. Think, for example, of an agent who regularly reflects on her greatest failings. 17 Doing well epistemically need not only involve the quality of beliefs produced but also the quantity of beliefs produced. If this is right, then following certain heuristics, though less accurate than following logical steps, may be
desirable even if it results in more false beliefs, as it would also result in significantly more true beliefs.
18 If one accepts that students should be educated to reflect, there is a separate question as to the range of matters on which they should be educated to reflect. Aside from differences in what might be included in such a range of matters, one could advocate reflecting in response to a narrower or broader range of matters.
References

Benson, P. (1991). Autonomy and oppressive socialization. Social Theory and Practice, 17(3), 385–408.
Berlin, I. (n.d.). Two concepts of liberty. In Liberty. Retrieved Mar. 22, 2012, from www.oxfordscholarship.com/view/10.1093/019924989X.001.0001/acprof-9780199249893-chapter-4
Bortolotti, L. (2011). Does reflection lead to wise choices? Philosophical Explorations, 14(3), 297–313.
Brighouse, H. (1998). Civic education and liberal legitimacy. Ethics, 108(4), 719–745.
Brighouse, H. (2005). On education. London: Routledge.
Callan, E. (2002). Autonomy, child-rearing, and good lives. In D. Archard & C. M. Macleod (Eds.), The moral and political status of children (pp. 118–141). Oxford: Oxford University Press.
Cherniss, J., & Hardy, H. (2010). Isaiah Berlin. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/fall2010/entries/berlin/
Christman, J. (2020). Autonomy in moral and political philosophy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/fall2020/entries/autonomy-moral/
Clayton, M. (2009). Reply to Morgan. Studies in Philosophy and Education, 28(1), 91–100.
Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. Lexington, MA: Heath.
Dryden, J. (2020). Autonomy. The Internet encyclopedia of philosophy. https://iep.utm.edu/autonomy/
Dworkin, G. (1988). The theory and practice of autonomy. New York: Cambridge University Press.
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59, 255–278.
Evans, J. S. B. T. (2014). Two minds rationality. Thinking & Reasoning, 20(2), 129–146.
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8, 223–241.
Frankfurt, H. (1988). Freedom of the will and the concept of a person.
In The importance of what we care about (pp. 11–25). Cambridge: Cambridge University Press.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.
Gaus, G., & Courtland, S. D. (2011). Liberalism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/spr2011/entries/liberalism/
Hand, M. (2006). Against autonomy as an educational aim. Oxford Review of Education, 32(4), 535–550.
Hogarth, R. M. (2001). Educating intuition. Chicago: University of Chicago Press.
Hogarth, R. M. (2005). Deciding analytically or trusting your intuition? The advantages and disadvantages of analytic and intuitive thought. In T. Betsch & S. Haberstroh (Eds.), The routines of decision making (pp. 67–82). Lawrence Erlbaum Associates.
Kahneman, D. (2011). Thinking, fast and slow. New York: Macmillan.
Kornblith, H. (2012). On reflection. Oxford: Oxford University Press.
Levinson, M. (1999). The demands of liberal education. Oxford: Oxford University Press.
Mi, C., & Ryan, S. (2016). Skilful reflection as an epistemic virtue. In C. Mi, M. Slote, & E. Sosa (Eds.), Moral and intellectual virtues in western and Chinese philosophy (pp. 34–48). New York: Routledge.
Mi, C., & Ryan, S. (2020). Skilful reflection as a master virtue. Synthese, 197, 2295–2308.
Morgan, J. (2013). Buddhism and autonomy-facilitating education. Journal of Philosophy of Education, 47(4), 509–523.
Noggle, R. (1997). The public conception of autonomy and critical self-reflection. The Southern Journal of Philosophy, 35(4), 495–515.
Norman, R. (1994). 'I did it my way': Some thoughts on autonomy. Journal of Philosophy of Education, 28(1), 25–34.
Pelaccia, T., Tardif, J., Triby, E., & Charlin, B. (2011). An analysis of clinical reasoning through a recent and comprehensive approach: The dual-process theory. Medical Education Online.
Piper, M. (2020). Autonomy: Normative. The Internet encyclopedia of philosophy. https://iep.utm.edu/aut-norm/
Pritchard, D., & Ryan, S. (2014). Zagzebski on rationality. European Journal for Philosophy of Religion, 6(4), 39–46.
Ryan, S. (2014).
A human account of testimonial justification. Logos and Episteme, 5(2), 209–219.
Ryan, S., & Mi, C. (2018). Reflective knowledge: Knowledge extended. In J. Adam Carter, A. Clark, J. Kallestrup, D. Pritchard, & S. Orestis Palermos (Eds.), Extended epistemology (pp. 162–176). Oxford: Oxford University Press.
Schinkel, A. (2010). Compulsory autonomy-promoting education. Educational Theory, 60(1), 97–116.
Siegel, H. (1988). Educating reason: Rationality, critical thinking, and education. New York: Routledge.
Sosa, E. (2014). Reflective knowledge and its importance. Universitas: Monthly Review of Philosophy and Culture, 41(3), 7–16.
Stanovich, K. E., & Toplak, M. E. (2012). Defining features versus incidental correlates of Type 1 and Type 2 processing. Mind and Society, 11(1), 3–13.
Swaine, L. (2012). The false right to autonomy in education. Educational Theory, 62(1), 107–124.
Wall, S. (2008). Perfectionism in moral and political philosophy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/fall2008/entries/perfectionism-moral/
White, J. (1991). Education and the good life: Autonomy, altruism, and the national curriculum. New York: Teachers College Press.
Wolf, S. (1987). Sanity and the metaphysics of responsibility. In F. Schoeman (Ed.), Responsibility, character and the emotions (pp. 46–62). New York: Cambridge University Press.
Wolf, S. (1990). Freedom within reason. New York: Oxford University Press.
Zagzebski, L. T. (2015). Epistemic authority: A theory of trust, authority, and autonomy in belief. Oxford: Oxford University Press.
3 The Realm of Epistemic Ends

Catherine Elgin
Autonomy and interdependence seem antithetical. An autonomous agent thinks for himself and behaves as he sees fit. He is, according to a prevailing stereotype, a rugged individualist. Nobody tells him what to think or what to do! Interdependent agents rely on one another. They regularly confer with one another, eventually achieve consensus, then jointly act on that consensus. They correct, refine, and extend one another’s thinking. On the basis of shared beliefs, they work together to achieve common goals. Although much of the philosophical discussion of autonomy occurs in ethics and action theory, the tension between autonomy and interdependence also arises in epistemology. An individualist stance frames the epistemological problematic we inherited from Descartes (see Code, 1991). In its quest to secure knowledge, the Cartesian ego fends for itself. It purposely, even ruthlessly, distances itself from other minds. But if autonomy requires individualism, we are in a bind. For as has become increasingly evident, epistemic agents are ineluctably interdependent. We cannot survive on our own (see Grasswick, 2018). Jettisoning autonomy is not an option, though. Thinkers are blamed for sloppy reasoning, jumping to conclusions, ignoring base rates, and a host of other epistemic sins and indiscretions. They are praised for rigorous arguments, creative ideas, fruitful conceptual innovations. Such praise and blame is warranted only if their cognitive condition is under their control. To be praiseworthy or blameworthy, they must be responsible. An agent is responsible only for what she freely does or freely refrains from doing. She is not responsible for things she is powerless to affect. Responsibility requires autonomy. Is there a way to reconcile epistemic autonomy with interdependence? I will argue that there is. Rather than being antithetical, I will urge, epistemic autonomy and interdependence are mutually reinforcing.
3.1 Individualism

By tradition, epistemology is individualistic. The isolated thinker is supposed to be solely and wholly responsible for her epistemic condition. Relying only on her personal resources, she either knows or does not.
Descartes was the champion of rugged epistemic individualism. The Cartesian ego credited only deliverances that it was, on its own, at that very instant, in a position to validate. Not only testimony, but long-term memory, perception, even extended inferences were initially rejected as sources of knowledge because their deliverances might be wrong. The hope was to ground all genuine knowledge on a base of convictions that an epistemically isolated ego could not doubt. Unfortunately, by the end of Meditation II, the game was up (Descartes, 1641/1979). Since the ego could not prove that God exists and is not a deceiver, it ended up knowing very little. Skepticism being an unattractive option, less rugged forms of individualism emerged. A certain amount of scaffolding was needed to get beyond the Cartesian framework. Not all putatively perceptual deliverances, apparent memories, outputs of introspection, or seemingly plausible inferences are epistemically creditable. A faint glimpse out of the corner of one's eye, a hasty mental calculation while emerging from anesthesia, a dim recall of a trivial event (such as the third-place finisher in the fifth-grade sack race), the heartfelt conviction that one is in love with a person one has never met – none of these is creditable. Conditions on epistemic acceptability had to be framed to exclude such deliverances. Still, many mundane deliverances were held to pass muster. The sighting of a familiar, middle-sized object in good light in the center of one's visual field, a careful, double-checked, not too difficult calculation, a clear memory of a significant event came to be acknowledged to be pro tanto creditable. Deliverances of perception, introspection, memory, and calculation answer to conditions that an agent can satisfy on her own. They are, let us say, individualistically grounded.
If individualistically grounded deliverances exhausted our epistemic resources, there would be no difficulty in construing thinkers as epistemically autonomous. Each would be solely and wholly responsible for her beliefs’ meeting or failing to meet the conditions on epistemic acceptability.
3.2 Interdependence

But individualistically grounded factors do not come close to exhausting our epistemic resources, satisfying our epistemic needs, or accounting for our epistemic successes. We depend heavily on one another to supply information we do not ourselves possess and could not – or not easily – get. A vast number of our beliefs are based on testimony: the shortest route to Porter Square, the date of the Norman Conquest, the atomic number of gold, the departure time of the train to Bedford. The problem is that testimony seems inherently precarious, or at least considerably more precarious than other grounds. Informants are fallible; they are prey to the same vulnerabilities in perception, memory, introspective acuity, and inference as we are. In accepting testimony, we apparently
inherit their vulnerabilities on top of our own. Informants may, moreover, be careless, misinformed, or dishonest. As the length of a testimonial chain extends, vulnerability increases. At every link in the chain, warrant wanes. Moreover, although each person may be able to recognize whether she herself is being scrupulous, we typically have no way to check on the moral and epistemic bona fides of everyone whose word we rely on. Often we do not even know who they are. It might seem that we could easily diminish our reliance on informants. Rather than asking a passerby for directions, we could pull out a map. Instead of checking with a lab partner, we could consult the periodic table of elements. Rather than querying the ticket agent, we could look at the timetable. And so forth. Although the documents we draw on may be more reliable than an arbitrarily chosen passerby, they do not eliminate our dependence on others. For such checking just substitutes one sort of testimony for another. Someone made the map, devised the table of elements, wrote the timetable. In trusting the documents we implicitly trust the agents who made and vetted them. Besides relying on others for bits of information, we depend on them for instruction. If you know how to calibrate an instrument, calculate an integral, conjugate a verb, tune a piano, or judge a dive, it is likely that someone taught you. The knowledge you glean from such activities is derivative from that teaching. If your instructor was incompetent or slipshod, you might, through no fault of your own, regularly and systematically misinterpret the sources you appeal to and thus misjudge. Even so, one might think that reliance on instruction is not so serious a threat to autonomy as our dependence on testimony is. For once a subject is suitably trained, she can conduct the requisite investigation for herself.
She can, in Wittgenstein’s terms, kick the ladder away once she has climbed it (1922/1947, §6.54). This is not quite right, though. The current user did not design the measuring device or figure out how to calibrate it (Chang, 2004). The current judge did not set the standards for assessing a dive or specify the respects in which and degrees to which deviating from those standards constitutes a flaw (Elgin, 2017). The current student did not construct the table of logarithms. Nor did she invent calculus (Klein, 1972). The epistemic standing of the beliefs an agent forms by deploying such techniques and devices depends on the adequacy of the techniques and devices themselves. That depends on how they were designed, created, and validated. If a measuring device is poorly calibrated, if the table of logarithms is inconsistent, if the standards for assessing a dive are jointly unsatisfiable, the results will be unreliable. I’ve chosen examples that are well entrenched. Others have used them effectively in the past. This is evidence of their adequacy. The examples are collective achievements. Even if a single individual invented a particular technique or device, in order for its products to be epistemically estimable, others had to validate it. Users typically have no idea what
justifies many of the techniques and devices they rely on. They have no clue, for example, how logarithms are generated or gas gauges are calibrated. People who insist “Nobody tells me what to think” are wrong. That’s lucky for them. Human beings are epistemically interdependent. It would be ludicrous to contend that an agent does not know that her gas tank is about half full, or that the atomic number of gold is 79, or that π is an irrational number, simply because she did not figure it out for herself. If epistemic autonomy is incompatible with interdependence, we are not autonomous. But the objections I have raised are directed against epistemic individualism, not against epistemic autonomy. We go too fast in identifying the two.
3.3 Heteronomy

So far, I have construed autonomy as freedom. What sort of freedom is at issue here? To be free in the relevant sense is not to be utterly unconstrained, bouncing about like a gas particle in random motion. Nor is it to be uninfluenced by others. Personal autonomy is a matter of individual self-governance. There is an analogy to the political realm. An independent nation is autonomous in that it is self-governing. It makes and enforces its own laws. No other nation has dominion over it. In the Grounding for the Metaphysics of Morals, Kant distinguishes between heteronomy and autonomy (1785/1981). The autonomous agent acts on maxims she reflectively endorses, and does so because she reflectively endorses them. In invoking the Categorical Imperative, Kant maintains, she makes the laws that bind her. The heteronomous actor acts on the basis of inclination. He desires p and believes that he will gain or improve his prospects of gaining p if he does q. His desire for p then causes him to do q. For now, let us focus on heteronomy. And rather than speaking of heteronomous acts, as Kant does, let us speak of heteronomous actors – those who behave heteronomously. It might seem that because he is driven by his desires, a heteronomous actor does not deliberate. He is some sort of stimulus-response machine. As soon as a desire presents itself, he acts to satisfy it.1 Some heteronomous acts and some heteronomous actors may fit this profile. It is plausible to think that non-human animals act heteronomously. On seeing a mouse, a barn owl desires it and immediately strikes. But not all animals – indeed, not all predatory animals – are like that. A clever cat holds back. Instead of immediately pouncing, she bides her time. She waits until the mouse is far enough from its hole that it cannot escape. Then she pounces. Something akin to deliberation seems involved. Other animals are even more sophisticated. Prides of lions work together to stalk and catch their prey.
They apparently co-ordinate their behavior, learning from experience what strategies are effective. They teach their cubs how to hunt – that is, how to work together to catch their prey. They may be
heteronomous – driven to act by their desire for the gazelle. But even if their behavior is instinctive, they are evidently not impulsive. Like the impulsive owl, heteronomous humans sometimes act unthinkingly to get what they desire. The bonbon appears on his plate, and Max immediately, unthinkingly, pops it into his mouth. Much heteronomous human behavior is more complex. Human beings have conflicting desires. Sometimes we are aware of the conflict. In that case let us say that we have manifestly conflicting desires – desires that the desirer recognizes cannot all be satisfied. Confronted with such desires, we can deliberate about which is strongest. Considerable thought may go into weighing alternatives. It may involve assessing not just the attractiveness of our conflicting desiderata, but also the probability of achieving them. Let us say that the product of such deliberation, if the deliberation is done rigorously, yields the actor’s strongest desire. What makes the actor heteronomous is not that he is impulsive; it is that his action is not guided by any assessment of final ends. His ends are fixed by whatever he happens to take himself, on balance, to most strongly desire. They are given rather than chosen. A heteronomous actor, human or not, thus seems to be a victim of circumstances. He finds himself with certain desires, perhaps thinks about which desires are strongest, perhaps thinks about how to achieve his desires. But he is bound to attempt to achieve whatever he takes to be his strongest desire. Although he may be able to figure out which desire is strongest, being heteronomous, he cannot determine whether that desire, or indeed any of his desires, is worthy of satisfaction. It might seem that from a subjective perspective, this is no problem. Given that he desires p more strongly than he desires any competing alternative, he deems p worthy of satisfaction. No other desire overrides it.
Frankfurt’s unwilling addict belies this (1971). Although the unwilling addict strongly desires the drug she is addicted to, she deeply regrets her desire. She cannot overcome her addiction, but she desires to not desire the drug. She is overpowered by the desires she finds herself having. They dictate the actions she engages in.
3.4 Heteronomous Belief

Discussions of heteronomy typically focus on inclination or desire. The heteronomous agent is in an important sense unfree because his actions are determined by desires he simply finds himself with. But action is a joint product of belief and desire (see Davidson, 1963/1980). So an actor is equally heteronomous if he is driven to act on the basis of beliefs he simply finds himself with. If there are such beliefs, heteronomy becomes an issue for epistemology. Phobias seem to fit the bill. Phobias are irrational fears with specific targets. Aerophobia is an irrational fear of flying; arachnophobia, an irrational fear of spiders. Fear is an emotion, though, so it might seem that
phobias are irrelevant to our problem. But emotions are not just spontaneous upwellings of feeling. They embed beliefs (see Elgin, 1996). To fear something involves believing that it is dangerous. Aerophobia thus embeds the belief that flying is dangerous. If one cannot help but believe that flying is dangerous, one’s unreasonable avoidance of flying, being driven by an inescapable belief, is heteronomous.2 Besides diagnosable phobias, entrenched stereotypes, deep-seated prejudices, and products of wishful thinking are plausible candidates for heteronomous belief. Despite the dismal prognosis, Amy, a cockeyed optimist, convinces herself that she will recover. She willfully blinds herself to the extent to which her test results indicate otherwise. Despite no-hidden-variable theorems, David, a diehard determinist, believes that the Heisenberg indeterminacy principle is false; the grip of the causal law on his reasoning is too tight for him to escape. Such attitudes are largely impervious to evidence. When someone is under the sway of such an attitude, it seems to make sense to say that he can’t help but believe what he does. If this is a literal description rather than a rhetorical trope, such a person is doxastically heteronomous. Whether or not a doxastically heteronomous actor is blameworthy, his state is epistemically defective.3 Phobias, prejudices, inaccurate but entrenched stereotypes, and products of self-deception are failures of rationality. Although they are serious impediments, it may seem that they have little bearing on epistemology. The problem is that the locution “x can’t help but believe what she does” seems equally true of ordinary believers vis-à-vis their ordinary beliefs. If Brenda sees a blue car in broad daylight in the center of her visual field, and has no reason to think that she is perceptually impaired or that the setting is abnormal, she can’t help but believe that there is a blue car in front of her.
If Jen tells Jon that Kenmore Square is three blocks to the right, and Jon has no reason to doubt Jen’s word or his interpretation of her utterance, he can’t help but believe what Jen tells him. Many of our beliefs are products of our culture, our education, our encounters with a motley crew of informants. Many were unsought. Many were unvetted. Some may be unwelcome. We just find ourselves with them, having been brought up in a particular society and having had a certain range of encounters. They are not individualistically grounded, and for all we know, may not be grounded at all. Inasmuch as we live in an information-rich (and largely unfiltered) environment, much testimony is likely to “influence us unawares” in the way that Plato feared the arts would influence the denizens of the Republic (1974, Book III). We have no idea how we came upon or what would justify some of the views that are common currency in our society. Moreover, our experiences are parochial. Beliefs formed on the basis of them may well be skewed. Our education is spotty. We may know how to take square roots, but have no idea why the method works. We may know the date of the Magna Carta but have no idea how that date was established. To be sure, some beliefs
that are responsive to evidence, and some products of testimony, are well-founded. The agent can’t help but believe them given that support. Still, no more than the suspect beliefs are they up to her. They are not products of free choice. A cat might have a first-order belief about where a mouse is. Wanting the mouse, she might act on that belief. But the cat is not responsible for the belief, any more than she is responsible for the desire. Both arose unprompted, stimulated by that distinctive, mouth-watering mouse aroma. She cannot deliberate about whether that smell is actually a good way of telling whether there is a mouse behind the bookcase. If, because of its etiology, a person’s belief is not under his control, then like the cat, he is not responsible for it. Beliefs just happen to him. This creates a problem. Belief aims at truth. To find oneself believing that p is to find oneself taking p to be true. On learning that p is false, one deems believing p to be a mistake. A believer, as such, wants to believe only what is true. This is so, regardless of the etiology of belief. But just as we can have manifestly conflicting desires, we can have manifestly conflicting beliefs. Ned finds himself believing that p and believing that q, although he recognizes that p and q conflict. p and q cannot both be true. Perhaps he believes that the ethics seminar meets on Monday and believes that it meets immediately after the logic lecture which meets, not on Monday, but on Tuesday. Something is amiss. Taking a cue from the way he handles manifestly conflicting desires, he might ask himself which one he believes most strongly. He can answer that question. He believes, let us say, p more strongly than he believes q. The issue here is how strongly he believes that p, how confident he is that p. It is not what subjective probability or credence he attaches to p.
In the absence of a reason to think that the strength of his belief correlates with the likelihood of its being true, he has no grounds for thinking that preserving the stronger belief and abandoning the weaker will promote the aim of believing only what is true. To reason in a way that promotes that aim he must critically reflect on the supports for his beliefs. To figure out which of the conflicting beliefs he ought to retain, he needs to assess the two beliefs, and decide whether they and their supports stand up to scrutiny. He thus must adopt a second-order stance. But if his second-order stance is heteronomous – if, that is, he just finds himself harboring beliefs about his various first-order beliefs – he is still in a bind. For the problem will simply recur. Should he discover a conflict between his second-order beliefs, or between a second-order belief and a first-order one, he will have to adopt a third-order stance to determine which he believes most strongly. A regress threatens. It won’t go far. Before long, he is apt to have no views about which of his beliefs about his beliefs about his beliefs … is stronger. Moving up the hierarchy is thus not likely to help. Nor is that
the only problem. If he finds himself with a higher-order belief that casts doubt on a lower-order one, he still has no basis for preferring one to the other. They are just two beliefs that he finds himself with – one, curiously enough, being about the other. Even if his higher-order beliefs ratify his original strength of belief, he is not out of the woods. For although he more strongly believes p and more strongly believes he more strongly believes p, and more strongly believes he more strongly believes … that gives him no reason to think that retaining p and abandoning q promotes his goal of believing only what is true. If he is subject to confirmation bias, this self-reinforcing, ill-advised hierarchy of beliefs is exactly what one would expect. Merely adopting a higher-order stance is not enough. To resolve the difficulty, Ned must adopt a critical stance – one from which he can assess his beliefs. He needs to be able to identify and evaluate his reasons – the considerations he takes to bear on whether p is true. Distorting factors such as differences in salience, proximity bias, and confirmation bias strengthen beliefs illicitly. He thus needs to be able to exclude irrelevant influences from his thinking. This means that his stance must be more than merely a standpoint from which to find faults. A sportscaster doing a play-by-play can describe the events on the field, identify good and bad moves, characterize the strengths and weaknesses of the various players, discern the teams’ strategies and tactics, contextualize the current game in the season, and so on. Still, his perspective is third-personal. Although he can recognize opportunities, he cannot capitalize on them; although he can discern flaws, he cannot correct them. He embodies and can convey a rich understanding of the game as it unfolds, but in his current role, he is an onlooker. His critical stance is spectatorial.
If Ned is just a spectatorial critic of his own mental life, he is not in a position to promote his truth-seeking goal. He can only recognize how he falls short. What he needs is a way to use his critical stance to improve his epistemic lot. He has to be able to correct the flaws. To do that, his critical stance must be agential. Earlier I characterized beliefs as largely involuntary. Given our circumstances, we cannot help but believe what we do. So it might seem that Ned’s predicament is irresolvable.
3.5 From Belief to Acceptance

To address the difficulty, let us distinguish between belief and acceptance. In articulating the distinction, I draw heavily on the work of L. Jonathan Cohen (1992). To believe that p is to feel that p is so. This is largely involuntary. To accept that p is to be willing and able to use p as a basis for inference and action when one’s ends are cognitive.4 One’s ends are cognitive when one attempts to know, understand, comprehend, or otherwise grasp how things stand in a given domain.
An autonomous epistemic agent’s reflective endorsement – be it of a proposition, a principle, a norm, or a method – is, I suggest, a matter of acceptance rather than belief. Acceptance is voluntary. Thus the agent can exercise epistemic self-discipline; she can accept or reject considerations as she sees fit (see Weatherson, 2008). Someone who, because of her phobia, cannot help but feel – that is, in Cohen’s sense, believe – that flying is dangerous, may nevertheless rely on the relevant research and accept that flying is safe. She is then willing (if, she admits, unreasonably reluctant) to fly, and willing to use “flying is a safe mode of travel” as a premise in her inferences. Acceptance neither requires nor always eventuates in belief. A scientist might accept h as a working hypothesis, and devote years to teasing out its consequences, without ever believing that it is true. Alternatively, it might be an idealized model that does not even purport to be true. If so, regardless of the support, it is not a candidate for truth. Or it might be a promising hypothesis for which he never is able to garner sufficient evidence. In that case, although it may be well supported, it remains somewhat speculative (see Elgin, 2017). The cat’s inability to escape heteronomy is not a problem for her. Even if she harbors conflicting beliefs, she is unaware of the conflict. But because we humans recognize that our beliefs can conflict, and recognize that belief aims at truth, our situation is different. Nor is this just a problem that emerges with logically inconsistent beliefs. The real threat is not inconsistency; it is non-co-tenability. Belief contents that are not logically at odds may fail to be co-tenable. Meg may believe that it will take several hours of focused attention to finish the project that is due tomorrow and believe that she has plenty of time to go out for drinks with friends, postponing the project until she gets back.
Either belief might be plausible, but their conjunction is not promising. If p sufficiently lowers the probability that q, then an agent who finds herself believing both p and q ought not accept both. In light of one, the other does not reach the threshold of acceptability. The problem ramifies. We cannot resolve it simply by assessing beliefs pairwise. We need to evaluate how tenable entire networks of belief are. Individual commitments are acceptable only if they belong to systems of mutually supportive commitments – commitments that are reasonable in light of one another.
3.6 Condemned to Freedom

Because belief aims at truth and Ned has (or might easily have) conflicting beliefs, if he is to promote his goal as a believer, he must be in a position to endorse or repudiate some of his beliefs. He must be able to do so on the basis of considerations that he reasonably takes to advance his truth-seeking agenda. And he must be able to act on his decision – not merely applaud or rue his findings. Arguably, he can’t help but believe what he does. But he can accept or reject considerations, depending on whether
he takes them to promote his cognitive objectives; and he can live his life accordingly. So he is an agent. As such, he can be held responsible for the way he conducts his cognitive life. This raises the question of how he ought to conduct his cognitive life. Ned should accept only considerations he reflectively endorses, for these are the only ones he has reason to think his epistemic principles vindicate. The worry is that this just pushes the problem up a level. For the issue of acceptability arises for principles as well. This is so, but it does not lead to an infinite regress. Since Ned is an agent, he can start and end a justificatory chain as he sees fit.5 He can, that is, decide on reflection that the type and degree of justification he has are sufficient for his cognitive purposes. This may lead him to seek justification for his second-order principles, but it need not (and should not) lead him to relentlessly demand ever higher orders of justification. He is responsible for his commitments because he takes responsibility for them. At whatever level he stops, he concludes that the considerations he accepts provide a good basis for inference and action when his ends are cognitive. Ned’s stance is autonomous. Forming an opinion by deciding whether a consideration is worthy of reflective endorsement is something he does, not just something that happens to him. Worries about provenance get no purchase. Ned may and often should consider how he came by a hypothesis. But in the end, what matters is not where it came from, but whether he is prepared to stand behind it. When Jan wakes up in the morning and sees puddles, she reasons that it rained last night. She does so neither because she saw the rain fall nor because she was so informed, but because the best way to accommodate the evidence was to infer that it rained.
Similarly, physicists appeal to symmetry principles to justify introducing particles for which they have no direct evidence. In both cases, the justification lies in a commitment to the idea that the best explanation for the phenomenon in question – one that meets the threshold for being a good enough explanation – lies in its meshing with a system of epistemically tenable commitments that sustain it. Sometimes considerations about provenance provide prima facie reason for reflective endorsement; but, ultimately, the capacity to integrate into and strengthen a network of antecedently acceptable commitments is what matters (see Elgin, 1996). The requirement of reflective endorsement does not leave Ned free to decide however he likes. Minimally, he should constrain himself to accept only lower-order considerations that, as far as he can tell, are vindicated by higher-order principles that he accepts, and accept only higher-order principles that mesh with and provide a rationale for the lower-order considerations he accepts, while providing grounds for repudiating the lower-order considerations he rejects (see Goodman, 1954/1983). Adjudication may be required to bring his various commitments into accord. There are then strong consistency and coherence conditions on his acceptances.
They require that, as far as he can tell, like cases be treated alike, that the considerations he accepts be jointly, not just individually, acceptable, and that the methods and standards he uses actually vindicate the conclusions he draws from them. So they block such epistemic indiscretions as making snap judgments, appealing to false equivalences, jumping to conclusions, and special pleading. This is all to the good. Nevertheless, the process may still seem woefully subjective. In accepting considerations he reflectively endorses, Ned satisfies his own standards. But he still seems vulnerable to systematic errors that stem from lax or skewed standards, such as confirmation bias, tunnel vision, neglect of base rates, and so forth. If epistemic autonomy leaves him in such a sorry state, it is not much of an asset.
3.7 Kant Redux

To escape this predicament, it pays to turn again to Kant. Kant characterizes an autonomous agent as someone who acts on laws she makes for herself. But from the fact that she makes the laws for herself, it does not follow that she makes them by herself. She does not. Although Kant takes the Categorical Imperative to apply exclusively to ethics, by extrapolation we can secure the basis for epistemic normativity (Elgin, 2017). Kant provides multiple versions of the Categorical Imperative, three of which concern us here. Under extrapolation, each reveals something important about epistemic acceptability. First, a point about terminology. A maxim for Kant is a principle on the basis of which an agent acts (1785/1981, §400). Since accepting is acting, the principle according to which an epistemic agent accepts qualifies as a maxim. The initial formulation of the Categorical Imperative (CI1) is the principle of universalizability: act only on a maxim that you could simultaneously will to be a universal law (1785/1981, §421). It precludes making an exception for your own case. By extrapolation, an epistemic agent should accept a consideration only if it would be universally acceptable. For now, let us bracket the question of how wide the relevant universe needs to be. I will return to it below. Whatever the answer, if the evidence is good enough to provide sufficient reason for Ned to accept p, it is good enough to provide sufficient reason for his compatriots to accept p as well. The responsible epistemic agent thus recognizes that the principles on the basis of which he accepts or rejects a consideration are ones that other members of his community ought to countenance too. It might seem that this brings us no further. If Ned reflectively endorses p, he accepts p and thinks that his reasons for accepting p are sufficient. That being so, he naturally thinks that other epistemic agents should accept p. After all, he thinks, he is right!
Because p is correct, his compatriots ought to accept it. No real appeal to them is necessary.
Universalizability apparently comes for free. (CI1) may seem to invite this conclusion. The second formulation of the Categorical Imperative (CI2), the principle of humanity, blocks it. “Act in such a way that you treat humanity, whether in your own person or in the person of another, always as an end, never merely as a means” (1785/1981, §429). Here the focus is on agents rather than maxims. To treat others as ends in themselves is to treat their perspectives as worthy of respect – as worthy of respect as one’s own. So Ned cannot simply conclude that his compatriots ought to accept p because he has established to his own satisfaction that it is right. Rather, to satisfy the epistemic analog of (CI2), he should ascertain whether, from their perspectives, p appears acceptable. Only if it does ought he accept it. Still, there is a problem. (CI2) seems to say nothing about the basis of unanimity. Should Ned accept a contention just because his compatriots happen to agree with him, regardless of why? The agreement might just be evidence that the contention is popular. The third formulation (CI3), integrating the previous two, answers that question. An agent should act only on a maxim that she can endorse as a legislating member of a realm of ends (1785/1981, §431). This requires explication. A realm, Kant maintains, is “a systematic union of different rational beings through common laws” (1785/1981, §433). Because members of a realm of ends make the laws they are subject to, and consider themselves subject to the laws precisely because they have made them, the satisfaction of the Categorical Imperative is a manifestation of autonomy.6 The acceptability of a consideration is dependent on the capacity of each member to endorse it. Kant characterizes the agent as a legislator, not an autocrat. Enacting a law is different from issuing an edict.
There are procedures that must be followed – procedures that the legislators collectively contrive and reflectively endorse because they believe that those procedures will further their legislative purposes. Not being an autocrat, Ned cannot secure his conclusion simply by decreeing: “Because this is what I think, everyone else should think it too! That settles it!” A realm of ends has multiple members. Ned is a legislator, not the legislator. An absolute monarch can issue edicts on his own. Legislators enact laws collectively. So, if he has any hope of succeeding, the consideration that an individual legislator advances should be acceptable from the points of view of the other legislators. That is what universalizability requires. To enact a law that he favors, a legislator has to convince other lawmakers that the proposed legislation is acceptable from their points of view. And he has to do so by appealing to considerations that are acceptable from their points of view. He cannot bully or bribe or befuddle his compatriots into submission. It is not enough that he thinks it is a good
idea. Nor would it be enough if each of them independently happened to consider it a good idea. The fact that they agree and the reasons why they agree with one another should be factors in their reflective endorsement. Each should accept it at least in part because it is acceptable from the points of view of the others. Each should treat the others not just as sources of evidence, but as co-assessors who deem the evidence acceptable from their own perspectives.7 Ned thus has to engage with his co-legislators, and appreciate how things look from their perspectives. To make his case, he needs to articulate reasons that they can understand and accept. Only if the consideration Ned advances stands up to scrutiny from other perspectives will it pass muster. It follows that only if it stands up to scrutiny from other perspectives should Ned hold that it stands up to scrutiny from his own. If he wrongly believes that it is acceptable from their points of view, he is in error about what he ought to accept. A factor in its being acceptable to Ned is that it is acceptable to his compatriots. So far, I’ve at best shown how we might construct a conception of epistemic acceptability by extrapolating from Kant. But why should we accept it? By stipulation, Ned’s relevant ends are cognitive. He wants to know, understand, or grasp how things actually are, not just how they seem to him. He recognizes some of his limitations – some of the ways his evidence, methods, or standards of assessment might systematically lead him astray. And he appreciates that if they systematically lead him astray, he may lack the resources on his own to identify or rectify them. He recognizes that he is not alone in being so limited. Many of his limitations afflict any individual epistemic agent. So it is reasonable for him to attempt to shore up his commitments – even those that seem unobjectionable from his point of view.
And it is reasonable for him to do so in a way that simultaneously shores up the commitments of his compatriots. Counterparts to the several formulations of the Categorical Imperative afford resources for doing so. A universalizability principle akin to (CI1) holds that whatever considerations justify him in accepting p equally justify others in accepting p. And whatever considerations justify him in rejecting q equally justify others in rejecting q. An analog to (CI2) maintains that it is not enough that Ned, from his perspective, think that p is justified. Others, from their own perspectives, should think so too. That is, it is not enough that were others to adopt his perspective, they too would think that p. They must be able to accept it from diverse perspectives. And an analog of (CI3) insists that the grounds for their agreement should consist of considerations that they could also reflectively endorse on the basis of considerations that they individually and collectively consider to set reasonable constraints on epistemic acceptability. This yields an epistemic imperative. An epistemic agent should accept only considerations that she could reflectively endorse as a legislating member of a realm of epistemic ends (see Elgin, 2017).
68 Catherine Elgin

I began by saying that epistemic autonomy and interdependence are mutually supportive. It might seem that I've mainly argued that the autonomous agent gains from the support of an epistemic community. Not only does he gain access to additional information via testimony and additional skills via instruction; community support also stabilizes his commitments by giving him reason to think that they can be sustained from other points of view. But the benefits must go both ways. A community supports epistemic autonomy by underwriting principles that are tenable only if generated, ratified, and endorsed by autonomous agents – agents who would and could withhold acceptance if, from their perspectives, the principles seemed untenable. This will only work if the individual actors are ready, willing, and able to withhold their acceptance. Unless an actor is autonomous, he has nothing to contribute. Perhaps, desiring their approval, the heteronomous actor just reiterates the commitments of his compatriots. He goes along to get along. In that case he is an epistemic parasite. If they cannot be confident that he would raise objections if a consideration were not supported from his perspective, his agreement counts for nothing. Because he is just a yes-man, the community gains nothing from his affirmation. He would have agreed with anything the powers that be maintained. Perhaps, on the other hand, he goes his own way, ignoring his compatriots' views completely. Again, he contributes nothing. Either way, the community's epistemic position is weakened. A perspective that might in principle have something to contribute is left without an advocate. Actors must be epistemically responsible to constitute a realm of epistemic ends. No doubt, an epistemic community can get by with a few free riders. But unless the majority of its members are autonomous, consensus counts for little.
Indeed, epistemic interdependence is disastrous if the community is cognitively corrupt. If its members merely parrot one another's convictions, their yea-saying does not supply independent support. Unless the individual autonomous agent draws on the support of a realm of epistemic ends, he is epistemically vulnerable. He may be subjectively secure in that he satisfies standards he sets for himself. But he has no way to ensure that he is not prey to weaknesses that his parochial perspective cannot disclose. Unless the epistemic community is composed of autonomous epistemic agents, consensus affords no basis for confidence. The grounds for agreement may be spurious. Epistemic autonomy and interdependence are thus mutually supportive. Each is too weak to sustain acceptability without the other.
3.8 Epistemic Communities Although Kant takes the moral realm to include every rational agent throughout history, his characterization allows realms to be more restricted. This is mandatory if the extrapolation to epistemology is
to be remotely plausible. Few if any epistemic commitments would be acceptable to all rational agents, regardless of time, place, or education. Nevertheless, more restricted communities qualify as systematic unions of different rational beings through common laws (Kant, 1785/1981, §433). We speak of, for example, the community of plasma physicists or the community of economic historians. It is plausible to construe the members of a discipline as composing a self-constituted realm, bound together by cognitive commitments that they take to govern and circumscribe their professional lives. These are their common laws. There are more informal realms as well – for example, the guys down at the bar who deem one another's opinions about football worth listening to, even when those opinions diverge. They tacitly accept standards for what sorts of considerations deserve to be taken seriously. The critical point is that the ends the members of an epistemic realm collectively endorse, whether formal or informal, must be cognitive. They must be grounded in the goal of grasping how things are. I have argued that epistemic autonomy and epistemic interdependence go hand in hand. Autonomy without interdependence leaves an agent in a precarious position; she has no assurance that her convictions are free from biases that are invisible from her point of view. Interdependence without autonomy leaves the community in an equally precarious position. It has no assurance that consensus is not a product of epistemically irrelevant or even epistemically pernicious features. But a community comprised of autonomous agents, functioning as such, provides grounds for epistemic acceptability. There is no assurance that their conclusions are true, but there is assurance that they are reasonable in the epistemic circumstances.8
Notes
1 Kant speaks of inclinations rather than desires. An inclination, for Kant, is the desire that a heteronomous actor acts on. It is the desire that she considers strongest. Here I will use the term "desire," as that is standard in contemporary action theory.
2 Indeed, it is worth noticing that if one believes that flying is dangerous, there is nothing unreasonable or irrational about desiring not to fly.
3 I thank Jonathan Matheson for helping me clarify this point.
4 According to Cohen, to believe that p is to feel that p is so. To accept that p is to be willing to use p as a premise in inferences or as a basis for action when one's ends are cognitive (1992, p. 2). Although it draws heavily on Cohen, my conception of acceptance is both broader and narrower than his. It is narrower in that I restrict it to assertoric inferences. One can, using Cohen's criterion, accept a premise for reductio. Since reductios play no role in my argument, I set them (and the extended conception of acceptance) aside. My conception is broader in that it holds that an agent must not only be willing, but also able to use p. Moreover, I eliminate "as a premise" because I think we also accept rules, standards, and methods when our ends are cognitive. In my
terms, to be willing to use modus ponens in cognitively serious inferences is to accept it (see Elgin, 2017).
5 I am grateful to Jonathan Adler for this point.
6 The realm of ends is an ideal. The legislators are idealized agents. There is no suggestion that it is realized in any actual legislature.
7 I thank Jonathan Matheson for helping me articulate this point.
8 I am grateful to Jonathan Matheson and Kirk Lougheed for constructive comments on an earlier version of this chapter.
References
Chang, H. (2004). Inventing temperature. Oxford: Oxford University Press.
Code, L. (1991). What can she know? Feminist theory and the construction of knowledge. Ithaca: Cornell University Press.
Cohen, L. J. (1992). An essay on belief and acceptance. Oxford: Clarendon Press.
Davidson, D. (1963/1980). Actions, reasons, and causes. In Essays on actions and events. Oxford: Oxford University Press.
Descartes, R. (1641/1979). Meditations on first philosophy. Indianapolis: Hackett.
Elgin, C. (1996). Considered judgment. Princeton: Princeton University Press.
Elgin, C. (2017). True enough. Cambridge, MA: MIT Press.
Frankfurt, H. (1971). Freedom of the will and the concept of a person. Journal of Philosophy, 68, 5–20.
Goodman, N. (1954/1983). Fact, fiction, and forecast. Cambridge, MA: Harvard University Press.
Grasswick, H. (2018). Epistemic autonomy in a social world of knowing. In H. Battaly (Ed.), The Routledge handbook of virtue epistemology (pp. 196–201). London: Routledge.
Kant, I. (1785/1981). Grounding of the metaphysics of morals. Indianapolis: Hackett.
Klein, M. (1972). Mathematics through the ages. Oxford: Oxford University Press.
Plato (380 BC/1974). The republic. Indianapolis: Hackett.
Weatherson, B. (2008). Deontology and Descartes' demon. Journal of Philosophy, 105, 540–569.
Wittgenstein, L. (1922/1947). Tractatus logico-philosophicus. London: Kegan Paul, Trench, Trubner & Co.
4 Professional Philosophy Has an Epistemic Autonomy Problem Maura Priest
4.1 Introduction

4.1.1 Thesis

This chapter argues that contemporary professional philosophy has some epistemic autonomy problems. "Problems" are understood in an epistemic sense, i.e., there are social and institutional features of professional philosophy that (1) place many barriers in the way of epistemic autonomy and (2), because of (1), leave both the philosophy profession and individual philosophers worse off epistemically. Due to these barriers, the average professional philosopher leads an intellectual life that is far less autonomous than it could be without them. Moreover, the profession as a whole, and to some extent the world at large, is worse off epistemically because of these autonomy problems. The chapter will start by arguing in favor of a self-governance conception of autonomy. This section is mostly designed to clarify the concept under consideration, rather than to argue that this is the right or best way to understand the concept. That being said, arguments are made to support the claim that the self-governance conception of autonomy is a reasonable one, one that matches how the concept is used in many contexts. Utilizing the conception of autonomy explained in Section 4.1, Section 4.2 argues that this type of autonomy can fall along a spectrum, and that there is a meaningful gap between what it takes to earn the label "epistemically autonomous" and the degree of epistemic autonomy expected from professional philosophers. Given the nature of their trade, philosophers can and should be expected to fall further toward the epistemically autonomous edge of the spectrum than others. Said differently: philosophers ought (epistemically ought) to land closer to the side of excess autonomy than most other persons. So what might be an excess of epistemic autonomy for the average person is what we should normatively expect from philosophers.
One way this "excess" of autonomy might manifest is that philosophers ought to "delegate away" fewer epistemic tasks than the average non-philosopher. While delegating epistemic tasks to others is compatible with leading an epistemically autonomous life, it
is often incompatible with the philosophical life. The take-away is that we can justifiably hold philosophers to a distinct standard that demands an especially autonomous epistemic existence. Despite the reasonable expectation that professional philosophers lead epistemic lives that are especially autonomous, social and institutional features of the philosophy profession impose serious barriers that work against self-governance. These barriers are (1) bad in themselves, and (2) lead to bad epistemic consequences. To preview, insufficient epistemic autonomy is correlated with: weaker philosophical arguments; less interesting and less valuable scholarship selection; increased odds that a meaningful segment of philosophically talented students will forgo the philosophy profession; and professional philosophers who are intellectually unsatisfied, or less satisfied than they would have been otherwise. After arguing that we have reason to be concerned about (1) the likelihood of these consequences and (2) the negative epistemic impact of those consequences, the chapter proceeds to explore potential remedies for, and preventative measures against, the profession's epistemic autonomy problems.

4.1.2 Epistemic Autonomy in General

Within everyday discourse, it seems that "autonomy" is used in more than one way, suggesting more than one possible conception (i.e., Wittgensteinian family resemblance). This chapter does not argue that one conception of epistemic autonomy gets closer to the "core," "real," "true," or "best" concept, nor that one conception more accurately grasps the most important conceptual contours of autonomy. Instead, the chapter narrows its focus to one (among several possible) conceptions of epistemic autonomy, for reasons other than an appeal to what is correct or best.
The hope is that what I will refer to as the self-governance conception of epistemic autonomy grasps onto one plausible type of autonomy, or at the very least, one important *aspect* of autonomy (take your pick).1 Moreover, this circumscribed definition helps with clarity; after all, using "epistemic autonomy" in several competing ways makes evaluating the truth of our central claims difficult (i.e., the claim that professional philosophy has a worrisome epistemic autonomy problem). Hence, in what follows, I define the relevant notion of epistemic autonomy, argue that the philosophy profession suffers from an insufficiency of it, and then consider solutions to the mentioned problems.

4.1.3 Epistemic Self-governance: What It Is and What It Is Not

As hinted at, throughout this chapter epistemic autonomy will be understood as a type of epistemic self-governance. It is uncontroversial that autonomous nations, as opposed to non-autonomous ones, are nations that govern themselves (or that govern themselves to some meaningful
extent). Contrastingly, nations are considered non-autonomous when they are governed or controlled by other nations. Hence, epistemically autonomous agents are agents who govern their own intellectual lives. Conversely, the less epistemically autonomous the agent, the more that agent is intellectually "governed" (i.e., influenced, controlled, pressured, etc.) by external forces. Applying this conception to individuals, then, autonomous persons govern themselves while non-autonomous persons are governed by others; perhaps these others are persons, but they might also be social forces and pressures that cannot be attached to any given individual. Sociologist Georg Simmel said that, "The deepest problems of modern life derive from the claim of the individual to preserve the autonomy and individuality of his existence in the face of overwhelming social forces, of historical heritage, of external culture, and of the technique of life" (2006, p. 174). Simmel connects autonomy with individuality, which again speaks to the importance of an agent possessing a certain type of control over themselves and their lives, as opposed to just being a member of a group, i.e., lost in the crowd. Simmel sees threats to autonomy as including social forces and external culture, both of which clearly come from outside the agent and threaten to take over internal control. To cite a more recent source, a Medium article claims that "Autonomy increases employee loyalty." An excerpt from the article states,

Autonomy at work is about bestowing employees with discretion and independence to schedule their work and to regulate how it is to be done on their own terms. Hire people who can naturally engage without the need for commands, controls, or rigid structures.
(Siragusa, 2020)

In the above, autonomy is associated with employees who are given authority to schedule their working hours, to manage their own productivity, and who possess the skills to regulate their work lives without regulations and rules from supervisors. In other words, the autonomous employee governs themselves rather than being governed by their employer. Something similar occurs with epistemic self-governance: autonomous epistemic agents monitor their own epistemic lives and can do this without a rule book or epistemic expert to guide them. Autonomous epistemic agents choose which epistemic activities to undertake and which to forego; they choose how to go about those activities and are comfortable searching for their own epistemic justifications. Like the autonomous employee, the autonomous epistemic agent can utilize their own sense of (epistemic) discretion in difficult situations.2 Again, we are not making an argument that epistemic autonomy as epistemic self-governance is either the only, or the best, conception of
epistemic autonomy. The claim is that this is one reasonable conception that fits the term to some degree, in some cases, to some extent, etc. And for our purposes, it might not even matter if you agree that epistemic autonomy can be understood as epistemic self-governance. What is more important is answering this question: does the contemporary philosophy profession impose roadblocks to epistemic self-governance? Moreover, if such interferences exist (or could exist), would this pose epistemic threats to the profession and/or to professional philosophers themselves? You might disagree with how we define epistemic autonomy, and still agree that social and institutional features of professional philosophy are epistemically restrictive. You can also agree that such restrictions are epistemically worrisome and threaten unfortunate epistemic results. Imagine that Professor Mill is a philosophy professor, and for nearly every paper, his central thesis begins as a suggestion from his wife, Dr. Mill. When Professor Mill is invited to write an article for a journal's special issue titled "animal rights," he is at a loss for fitting ideas. Until, that is, he talks to his wife: Dr. Mill suggests dolphin abuse in the entertainment industry. Having decided to write about "show dolphins," Professor Mill panics, realizing he has no idea how to frame the argument. But after a conversation with his wife, his anxiety eases. His wife has plenty to say about his paper; she especially laments that amusement-park dolphins are used as mere means to an end. Breathing a sigh of relief, Professor Mill puts pen to paper, framing his argument along Kantian lines. It seems uncontroversial that Professor Mill has key epistemic self-governance deficiencies. Rather than govern his own scholarly life, he hands off that task to his spouse.
Another example demonstrating the same epistemic shortcoming runs as follows: Tanner, a first-year university student, is unsure of what to think about COVID-19 and associated regulations. Nor does he have much of an opinion on social justice, the decline in educational funding, or the opiate crisis. However, Tanner joins a fraternity where many of his fellow fraternity members do have these opinions, and the group itself takes a stance on some of them too. Days after joining, Tanner arrives at newfound "convictions" about COVID-19 legislation, social justice policies, educational funding, and the opiate epidemic. Coincidentally (or not coincidentally), Tanner's views on the aforementioned mirror the general consensus of the fraternity. Once again, this seems an uncontroversial case of an agent falling short in epistemic self-governance (assuming the case is straightforward, without hidden or unexpected information). If neither Tanner nor Professor Mill is a model of epistemic self-governance, who is? Imagine a scientific researcher, Dr. Smith, who is part of an elite vaccine research group, "Team-19." The team suggests a research path that Dr. Smith finds unpromising, and she voices her disapproval. But she is overruled by her research mates. However, while continuing to do her part in the project, Dr. Smith is
always looking out for evidence supporting her unpursued idea. She even devotes some evenings to solitary research along these lines. Dr. Smith is one (among many possible) models of epistemic self-governance, insofar as she follows a path that her own intellect recommends, despite social pressures to do otherwise.

4.1.4 Self-governance Is Not Self-Reliance

The Dr. Smith example can serve as a starting point to fend off the following sort of criticism. Let us call it the self-reliance criticism:

Self-reliance criticism: epistemic self-governance is arrogant; self-governing agents are prone toward failures of proper deference to other epistemic agents, especially to experts.

Contrary to the sentiment above, epistemic self-governance precludes neither working with others, nor taking expert advice, nor trusting the opinion of others more than one's personal opinion. In short, the self-reliance criticism shows a fundamental misunderstanding of self-governance. Trusting expert opinion is sometimes the best epistemic move according to the self-governing agent's own epistemic evaluation.3 Self-governing agents can trust both other agents and agencies, insofar as the agent themselves plays a role in recognizing others as trustworthy. For example, self-governing agents might trust the United Society of Surgeons (USS), and their trust in the USS might be grounded in awareness of the organization's track record and/or credentials. Not only can it be an act of epistemic autonomy to identify experts, but it also can be an act of epistemic autonomy to identify experts who identify experts. Indeed, many or even most instances of epistemic delegation fit this description. We can even take our example of trusting the USS. For most, this type of trust is not grounded in a personal investigation of each member of this society, e.g., an investigation that dug into the educational and cognitive qualifications of each USS member.
Instead, the trust is more likely grounded in sources that go several layers deeper and branch out in multifaceted directions. Trust in the USS might be founded on trust in a variety of news and academic sources, and the trust in news and academic sources is in turn grounded in trust in a variety of testimonial sources; moreover, trust in such testimonial sources is grounded in various types of observed experiences. What epistemically justifies appropriate “epistemic delegation trust” is quite complex, when it comes down to it. But many other sorts of epistemic justification, even with non-delegated tasks, are also complex.4 Trust that arises from autonomous agency can be distinguished from trust that arises from non-autonomous agency: the former is grounded in good epistemic reasons or reliable epistemic methods (when reliability cannot be attributed entirely to luck), while the latter might be grounded in no reasons at all or irrelevant reasons, or the agent might use
non-epistemic methods. Trust grounded in autonomous epistemic agency did not just arise by luck. If Tanner trusts his fraternity out of convenience and social expediency, he indeed has his reasons, but they are not epistemic reasons, and they are likely unreliable (and if they were reliable, they would only be that way by luck). Tanner might be governing his life in some sense, just not in an epistemic sense. Rather, Tanner hands over the steering wheel of his epistemic life to others. He does this not because he has grounds for thinking they will steer with accuracy and professionalism, but because the driver happens to be within arm's reach and might even improve his social status. Tanner's desire to rule his non-epistemic life, i.e., to have control over his popularity and his pleasure, motivates what we might call his "epistemic sacrifices." Persons need not be scholars to be self-governing epistemic agents. Some persons put comparatively higher priority on non-epistemic aspects of life, and, hence, they spend less time on epistemic endeavors. These agents might delegate epistemic tasks more frequently than others who place epistemic concerns at the center of their identity. Yet even agents who delegate key epistemic tasks can still be epistemically autonomous, as long as the delegating is done in an epistemically responsible fashion.

4.1.5 Self-governance Is a Spectrum

If it is possible for autonomous agents to delegate epistemic responsibilities, we might wonder if epistemic autonomy as self-governance is missing something important. Agents who focus on non-epistemic matters, i.e., agents who delegate a wide array of epistemic tasks, seem to govern their lives less than those who don't delegate.
After all, if one agent makes more self-originated epistemic decisions in comparison to another agent, it seems intuitive that the former is more self-governing.5 One way to differentiate between "delegating autonomous agents" and "self-sufficient epistemic agents" is via an Aristotelian spectrum. On one side of the spectrum, we have the negative extreme of never passing off epistemic tasks to others, even when doing so makes epistemic sense. Few if any of us are excellent at all things epistemic, so sometimes handing the intellectual controls to others is the best we can epistemically do. Persons who refuse to delegate fall short in this respect: yes, they are self-governing, but irresponsibly so. On the other side of the spectrum is the agent who happily hands over the epistemic reins to anyone and everyone, leaving few to no epistemic areas within their control. Imagine a municipality, let's call it Athens, that takes some legislative orders from superseding governments, but still maintains enough of a local hold to have its own legislature. Let us compare Athens to a different municipality, "Damascus." Like Athens, Damascus is also self-governing. But unlike Athens, Damascus rarely takes orders from higher
legislative bodies. Instead, the overwhelming majority of its municipal policies are designed and instituted by Damascus itself. Damascus seems self-governing to a greater extent than Athens, even though both are independent cities. A far-reaching spectrum of legislative independence is compatible with political self-governance. This does not mean all such city-states govern themselves to the same extent, but only that it makes sense to understand them each as independent enough for their political recognition as a stand-alone entity. Likewise, individual agents might manifest a wide range of intellectual self-governance, yet despite the range, all of these agents have enough self-governance to be aptly recognized as epistemically autonomous.6
4.2 How Much Autonomy?

If epistemic autonomy is a spectrum, what part of the spectrum should agents aim for? This section argues that while everyone should aim for a base level of epistemic autonomy, some of us should aim to be further along the spectrum than others. "Should" is again understood as an epistemic norm, i.e., what is best insofar as the agent seeks to acquire (and share) epistemic goods like knowledge and understanding, and also seeks to avoid the epistemic detriments on the other side of the coin. Just as it might be appropriate for some municipalities to have more independent legislatures, individual variance in talents, preferences, and lifestyles can motivate justified variance in degrees of epistemic autonomy. For some, epistemic ends are best served with less epistemic autonomy than for others. Let us finally turn the spotlight to ourselves: where on the autonomy spectrum should professional philosophers land, supposing we understand "should" epistemically? We can also put aside variance between philosophers: while individual variance might justify some autonomy variance, we can focus on the ceteris paribus aspect of this issue. Insofar as professional philosophers have a career steeped in intellectual advantages and years of intellectual training, how much epistemic autonomy is best (ceteris paribus)? There are several reasons to think that philosophers ought to aim to fall rather far toward the epistemically autonomous side of the spectrum. For one, philosophers are trained in reasoning. Because of this, and arguably because of abilities that contributed to graduate school admittance, we can assume that most professional philosophers have comparatively advanced skills in reasoning, argumentation, and creative thought. When a class of persons has unique skills, experience, and training, it often benefits the community for these agents to utilize the relevant talents to a greater degree and frequency than others.
Persons trained in auto mechanics, music, or surgery benefit the community by exercising their skills more often, and in especially advanced ways. Communities place special trust in mechanics to fix vehicles and in surgeons to perform surgery.
Given that we generally put special trust in those with special skills, it might be expected that we have higher epistemic expectations of those trained in reasoning, truth-seeking, and creative thought. Likewise, just as I delegate my oil change to my mechanic, it might make sense for some to delegate epistemic tasks to persons with special epistemic skills (which can be philosophers, among others). All of this suggests that persons like philosophers have reason to manifest greater epistemic self-governance than others, i.e., we might expect philosophers to fall further to the epistemically autonomous end of the spectrum than others. Taking on special epistemic responsibilities increases the odds of falling toward the far end of the epistemic autonomy spectrum; however, there is another reason to think that professional philosophers ought to trend toward the far end of the relevant spectrum. Let us return to Professor Mill, the philosopher who places overwhelming epistemic reliance on his spouse. Not only does Professor Mill fail to manifest epistemic autonomy in his approach to his research; we might also wonder if he is even "doing philosophy" in the first place. Suppose you learned that a colleague (another professional philosopher) had been employing a ghostwriter most of his career, and indeed all of his publications were ghostwritten. This wouldn't merely be surprising; many would interpret it as intellectual fraud, and as an egregious ethics violation. But why? Ghostwriting as a practice need not raise eyebrows nor qualms: politicians hire ghostwriters as a matter of practice, and many celebrities publish autobiographies via known ghostwriters. Rarely are politicians or actors blamed for this ghostwriting; some might even contend that using ghostwriters was the best epistemic option available.
If the president wants to convey a certain message, a ghostwriter might be the most effective way to do this, and, hence, epistemically best. Likewise, if an actor isn't great at communicating in writing, refusing to use a ghostwriter might be epistemically blameworthy. While ghostwriting and other forms of delegating intellectual tasks are often unproblematic, even praiseworthy, professional philosophy seems an exceptional case. Professional philosophers delegating essay writing is not merely controversial; it is unacceptable, even if the person delegated to was chosen with good reason. Why is this? Are ghostwriters in philosophy only problematic in an ethical sense, perhaps? After all, the ghostwriter might produce work as good as or better than the philosopher's; this would seem an epistemic boon, not a problem. We must concede this point to an extent: if we learned that Hume or Kant used a ghostwriter, this would have little bearing on the praised philosophical arguments associated with their names. It would be surprising and unfortunate if intellectual credit went to the undeserving; however, "Hume's" critique of inductive reasoning and "Kant's" categorical imperative would carry the same epistemic weight.
Professional Philosophy 79

Even though we would appreciate The Critique of Pure Reason to the same extent and in the same way after the surprising ghostwriter revelation, we would not appreciate Kant himself to the same degree and in the same way. Indeed, philosophers might be expected to quickly reevaluate Kant's philosophical greatness, likely stripping him of such honors altogether. The irrelevance of an author's identity to the quality of the arguments is one thing; the relevance of the author's identity to assessing the quality of the philosopher is another. Learning that our colleague, or god forbid, Kant himself, used ghostwriters fundamentally changes our evaluation of their merit qua philosopher. We do not merely reassess negatively; we question whether this revelation undermines their very status as a philosopher. Hence, ways of delegating autonomy that are acceptable for other epistemic agents in other epistemic circumstances are not acceptable for philosophers. This suggests that philosophers who perform their job even minimally well must govern their own epistemic life to a special extent and degree. This is because in outsourcing intellectual tasks, philosophers are often outsourcing their very trade.
4.3 Thwarted Autonomy

We have suggested that professional philosophers ought to err further toward the epistemically autonomous end of the spectrum than others. Doing so is not only an epistemic boon to the community but is also demanded by the nature of the philosophical trade. This section delineates particular ways that professional philosophy has gone awry with respect to this normative spot on the spectrum. The arguments that follow are not deductive. Many of the points below might be true even while the profession exists without any paucity of epistemic autonomy. However, the argument aims to show that this paucity is likely to occur given (1) institutional features of the profession, (2) cultural features of the profession, and (3) what we know about human behavior from fields like psychology, anthropology, and sociology. Because of (1) through (3), we can surmise that (4) epistemic autonomy problems within professional philosophy are likely to arise or to have already arisen.

4.3.1 Early Career Philosophers and Pressure to Transfer Agency

As noted, one way that agents weaken the epistemic grasp over their intellectual lives is by handing over that control to others. This is what Dr. Smith did in our previous example: he handed, or perhaps pushed, the reins of his intellectual life into the hands of his wife. And this is also what Tanner did: he gave his fraternity authority over critical epistemic parts of his existence. Some social and institutional features of
professional philosophy might motivate its members to let go of the hold on their intellectual life so that others might instead carry that weight. This might be done out of various motivations, but plausibly what occurs is that philosophers' epistemic motivations are overpowered by other motivations, e.g., professional and social ones. When this happens, philosophers might give others control over their intellectual life not because they believe others are better suited to the task, but because they believe that doing so will advance professional, social, and other non-epistemic aims. Here are some potential justifications that philosophers might have for handing the governance of their scholarly life to others:

1. Pleasing other philosophers can advance professional ends.
2. Pleasing university administrators can advance professional ends.
3. Pleasing external groups that bestow grants and awards can advance professional ends.
4. Pleasing other philosophers might advance one's combined social and professional ends, i.e., it might help a philosopher make their way up the profession's social and prestige hierarchy.

Let us discuss each of the above in turn. To begin with (1), we should start with the caveat that human motivation is complicated, and humans often have multiple motivations for any individuated action. At times, philosophers might be motivated in part by the desire to please other professional philosophers, and also in part by the pursuit of truth. But even with multifaceted motivations, motivation arising from the desire to please other philosophers poses a threat to epistemic autonomy, insofar as it creates the risk of the philosopher abdicating epistemic control. This happens in various ways; here is one example: a grad student experiences professional pressure to select a dissertation topic/focus/area of interest that pleases their committee.
This desire to please can be so strong that the student doesn't hesitate to select research focus areas that they themselves find uninteresting. The pressure to please their committee is then compounded by the pressure to please employment search committees. For instance, the student might choose a trendy topic in the hope that this choice will increase the odds of impressing a search committee, which can ultimately result in a job offer. The non-epistemic motivation fueled by the desire for employment can plausibly overtake intellectual governance to such an extent that the student fails even to reflect on epistemic justifications for their area of focus. The more competitive the market, the more likely that philosopher job seekers will push aside their intellectual judgment to increase their odds of professional success. There is something deeply and sadly ironic about this series of events, as the trade of philosophy is fundamentally epistemic. Philosophers are thus tossing aside the very thing that makes them philosophers in the hope that doing so will secure philosophical employment.
While early career philosophers might not be "forced" to give up intellectual autonomy (although this depends on the nuances of the concept), it is less controversial to suggest that many young philosophers are institutionally and socially pressured. The pressures the grad student faces above seem a blend of the social and the institutional: pressures arising from the job market are largely institutional, i.e., they come from the institution of professional philosophy, which is itself part of the institution of academia more generally. The social aspect concerns the way in which "established" professional philosophers are bestowed with a type of trust, respect, and authority that generally evades their average early career counterparts. While young philosophers might misestimate how their committee would react to students maintaining intellectual self-governance, fear of critical responses must be explained in part by the social milieu in which the student is immersed. The mere existence of students who worry about committee members displaying disapproval toward disagreement speaks poorly of the profession's place on the autonomy spectrum. A professional environment that unquestionably valued epistemic autonomy, one in which intellectual self-governance was both taught and encouraged, seems unlikely to breed students who give up epistemic autonomy in the face of counteracting professional pressures. Ideally, professional pressures should align with, rather than push against, intellectual self-governance. If the profession of philosophy ought to aim for its members not merely to meet the minimum threshold of epistemic autonomy but to supersede it, then professional pressures should motivate in the opposite way to that just described. Professional pressures should motivate philosophy graduate students to maintain epistemic autonomy because doing so increases their odds of professional success.
The point is this: if the story of the young graduate student pressured to give up epistemic autonomy in the way I describe is something that many philosophers find plausible, then the presence of an autonomy problem is overwhelmingly likely. If the professional environment encouraged epistemic autonomy as it should, such a grad student would appear wildly implausible.

4.3.2 Autonomy Impediment at Any Career Stage

At this point I want to pause the argument and reflect on the following: our focus of concern is not how intellectual autonomy is impeded, but rather the likelihood that such systematic impediments occur. I bring this up because the pressures faced by early career philosophers often seem to arouse ethical qualms; hence, it makes sense to clarify that this chapter is concerned with the epistemic aspects of weakened epistemic autonomy, not the ethical ones. We are considering whether cultural and institutional features of professional philosophy might plausibly
impede epistemic autonomy, and if so, what epistemic conclusions follow. Even if professional philosophers hand over epistemic autonomy "willingly," we can still worry that this phenomenon has a negative epistemic impact on the profession. And given the central role of epistemology in the trade of philosophy, negative epistemic impact is an especially detrimental kind of impact (perhaps the most detrimental of them all). Digression completed. Let us return to non-epistemic pressures that might result in philosophers with diminished governance over their intellectual lives. While the non-epistemic pressures faced by early career philosophers are especially salient, neither finding a job nor securing tenure guarantees that these non-epistemic pressures subside, nor even that they stop mounting.7 After tenure, philosophers might come face to face with new professional (and non-epistemic) pressures, e.g., the pressure to do research in areas that increase the odds of receiving research grants, or of advancing their reputation either within their subfield or the profession at large, or that make it more likely their work will be published in prestigious journals, or that simply increase the odds of publication in general. While post-tenure professional pressures are eased insofar as a philosopher's place in the profession is relatively secure, a strange blend of social and professional pressures might end up just as forceful. For instance, while a tenured philosopher does not need to worry about publishing for the sake of tenure, they might worry about publishing for the sake of perceived professional success, or for the sake of social admiration and/or social inclusion. Having differentiated between pressures faced by early career vs. post-tenure philosophers, the following worries are ones that might affect philosophers at any stage of their career.
Given what was said earlier, these pressures are often especially forceful at the pre-tenure stage, but they might nonetheless exist after it. The timing and the extent of these pressures will of course vary not only according to career stage, but also according to institution, country, and the personal traits of any given philosopher.

4.3.3 Publishing, Threats to Self-Governance, and Bad Epistemic Consequences

For many philosophers, the pressure to publish is an especially impactful aspect of professional life. The desire to publish might make philosophers eager to please relevant authorities, e.g., referees, editors, and any philosopher who seems socially and professionally positioned to extend invitations to special issues, edited volumes, and invite-only journals. Pleasing these authority figures might not always align with producing scholarship in epistemically autonomous ways. There might be pressure to make changes to an argument to please a referee, even if the author does not believe that these changes are epistemically best. I have done this myself: I responded to a referee comment, changing my paper in a way to please
the referee even though I believed that these changes made the paper worse off. In this instance, I handed over some of my intellectual governance for the sake of professional expediency. At the time, I cared more about the ability to influence my professional success than I cared about the ability to control my intellectual scholarship. I knew that this scholarship would be presented to the community as mine. And I preferred that this representation have a particular type of content, content that the referee deemed unworthy. But it appears that I wanted that less than I wanted the chance to participate at all. Said differently, I and other philosophers might calculate that inclusion, even inclusion that comes at the cost of sincerity, is better than exclusion. Hence, fear of exclusion from the philosophical community can motivate epistemic insincerity. Not all referee comments pressure agents to forfeit autonomy. Referee feedback might include genuine questions without implicit suggestions of a right answer. Referees can make a recommendation in such a way that the writer does not think that publication is contingent on accepting it. Referees might also simply point out flaws or inconsistencies that the author had overlooked and is thus happy to accommodate. This type of feedback need not impede autonomy. However, there are all sorts of ways that referee comments can impede autonomy, and some are more harmful than others. Not every impediment to autonomy is necessarily "bad," epistemically speaking. If the philosophical community is seen as a joint epistemic enterprise, at times it might be fine if one member of the community impedes the epistemic autonomy of another for the good of the larger group. This impediment might be small enough that the agent who was impeded remains, overall, an autonomous epistemic agent.
Suppose, for instance, that a referee requests that an author cite a certain conversation in the literature. The author, suppose, does not think that this citation is relevant. But also suppose that the author is objectively mistaken (from an epistemic perspective). In cases like this, the philosophical community might epistemically benefit. Having admitted that not all refereeing impedes epistemic autonomy, and having admitted that impediment from a referee is sometimes an epistemic boon, there remains room to worry. There are various ways that, speaking generally, we have reason to worry that referees might be prone to a harmful type of epistemic impediment, a type of impediment that threatens two worrisome epistemic consequences. Each of these two consequences threatens the community at large, and also the philosopher individually. First, referees might impede to such an extent that the work is no longer the author's own. When chatting with philosophers in hallways, online, and at conferences, I have noticed that philosophers often complain about referees who are upset that the author "didn't write another paper." The takeaway seems to be this: the referee tries to force the author to write about issues that are irrelevant to the paper's thesis, and that are best brought up in a different paper, if at all. Notwithstanding, the author
might abide (for the sake of career advancement). In this type of case the author's epistemic autonomy is detrimentally infringed because their scholarship never comes to fruition; rather, some other type of scholarship does, scholarship that is not really the author's. Thus, epistemic efforts that had been a central part of their epistemic self-governance (for most, writing a paper is difficult) seem meaningfully damaged. Besides damaging a philosopher's intellectual project and weakening their self-governance, there is also a more general epistemic harm from publishing philosophy of diminished epistemic quality. All else being equal, we can reasonably believe that work produced by dedicated and engaged agents will be better than work produced by agents who are neither dedicated nor engaged. When referees request that authors make significant changes based purely on the referee's own epistemic assessment, two things might happen. First, the author might bring in an entirely fresh discussion, one that the author is neither dedicated to nor engaged in. After all, in this instance the author is making changes because the referee suggested as much, not because the author believes the changes are best. And in many cases the author makes these changes not because they respect the referee's intellect and opinion, but for the sake of professional advancement, i.e., for the sake of getting published. In other words, the author is very much not motivated by epistemic ends. Second, the referee is imposing their own intellect on the author's paper. Because the referee will never be associated with the paper, and because the referee probably has not thought as long and as hard about the paper as the author, it seems unlikely that their suggestions (when the suggestions significantly impact content) are epistemically superior to the author's own assessment.
Besides suggesting that the author make fundamental and significant changes to their argument, referees might also dangerously impede epistemic autonomy in a variety of other ways. It is not only the author who stands to be motivated by professional rather than epistemic ends. Referees might be motivated by professional ends, e.g., the referee might desire that the author cite the referee's publications. Indeed, this is again something I have heard discussed among professional philosophers, i.e., the way that referees fudge the need to cite certain articles (because these citations offer the referee professional gains). Such overheard testimony has come from both the author (in the form of a complaint) and also the referee (in the form of a rationalized justification: "My paper might not be the most relevant, but it is more relevant than other papers the author cited").

4.3.4 A Tentative Assessment

At this point, some readers might be critical of my examples because they are anecdotes, not data. Here is why this criticism misses the mark. The relevance of pointing to an instance of referees and authors acknowledging the relevant autonomy infringement is not to prove that the
problem is widespread, but rather to point out that the current systemic organization of the profession creates opportunity for these situations. Even if I had been making up my examples out of thin air (I wasn't), the institutional structure of the profession would still render the examples plausible. If they are plausible, and if it follows that the profession is prone toward certain epistemic shortcomings, we all (professional philosophers) have reason for concern. If philosophers ought to err toward the more autonomous end of the self-governance spectrum, the profession is netting embarrassing results. This is so even if we accept (against better arguments and evidence) that philosophers regain intellectual autonomy after tenure. For one, many philosophers are not on the tenure track. Contingent employment defines the entire career of a meaningful part of the profession. For those who do enter the tenure track, all of the following pose high risk to epistemic autonomy, resulting in minimal likelihood that the philosopher's epistemic life is sufficiently self-governed: time spent in graduate school, followed by the market search, possibly followed by temporary employment, followed by years as an assistant (non-tenured) professor. The period just mentioned is not only lengthy; it is the foundation of an intellectual career. By the time philosophers achieve tenure and then supposed intellectual freedom, habits of autonomy abdication might be ingrained. Moreover, at this point, a philosopher's specialty/focus area has long been established. While it is possible to change specialties, doing so can have a high cost, especially for philosophers working at institutions with high teaching, service, or administration loads.
The point is twofold: first, it is doubtful that achieving tenure results in the newfound intellectual autonomy sometimes alluded to, and second, even if it did, the paucity of epistemic autonomy pre-tenure is worrisome enough. An autonomy problem within professional philosophy does not imply that this problem is present all the time, for all philosophers.
4.4 Can Things Change?

4.4.1 Pie in the Sky and Welcome to the Real World

If it is true that many features of the contemporary philosophy profession are barriers to epistemic autonomy, we might wonder whether there is anything that can be done to change things. I will argue that there are things that can be done, but also that doing them in the current environment might prove especially difficult. Some philosophers might object that this is just "life," and that professional philosophy cannot be immune from all the pressures of the imperfect world. In some sense this is certainly true. There will always be non-epistemic pressures that interfere with any philosophical utopia, and it would be naïve, and perhaps harmful, to think otherwise.
Knowing that some problem will always exist to some extent does not justify ignoring it, nor does it imply that nothing can be done about it. Even from institution to institution, significant differences in these external pressures are common. Sometimes these pressures are much more profound and interfere more with self-governance, and sometimes the interference is less profound and less likely to occur. Hence, it is possible to take action that mitigates threats to epistemic autonomy, even if there will always be external pressures that threaten our intellectual self-governance. The fact that we can mitigate, but not eliminate, the stifling of epistemic autonomy likely applies to all forms of professional shortcomings. Take, for instance, sexual harassment, racism and biases against women, advantages to persons from certain socio-economic backgrounds, the unfair treatment of adjuncts and others without secure positions, the extent to which philosophers talk past each other, the extent to which some subdisciplines of philosophy overlook other subdisciplines, or the extent to which philosophers overlook work done outside of philosophy altogether. The list, of course, can go on. The point is that it is unrealistic to expect to eliminate most problems in the philosophy profession, but that is not a good reason for giving up on mitigating a problem, nor does it follow that attempts to mitigate it are far-fetched, unreasonable, pie in the sky. It is epistemically worrisome if professional philosophers take this type of objection seriously, because the same argument structure clearly doesn't hold water in non-philosophical situations. The objection is similar to something you might hear when shooting the breeze at a crowded bar on a Friday night.
One person complains, for instance, that the rich don't pay nearly enough in taxes, and another person shrugs and says, "Look, the rich always have advantages; that is just the way of the world." But, of course, even if the rich will always have advantages, it is possible for social, political, and institutional changes to mitigate the unfairness of these advantages, even if not eliminate them. The same can be true of the epistemic autonomy problem within the contemporary philosophy profession.

4.4.2 A Worthy Cause?

Professional philosophy has limited social and institutional resources. It is true that we cannot fight every problem, and we must pick and choose our battles. So someone might wonder whether the philosophy autonomy problem is a battle worth fighting. Even if the problem can be mitigated, would it be worth the resources to do so? We can begin answering this question with some general thoughts on what criteria justify putting scarce resources toward a goal or project. One uncontroversial factor is the odds of success. For even if it is technically possible to achieve some goal, if the odds of achieving it are especially low, it might be better to use the resources toward goals that are more achievable. That being
said, a separate consideration is the importance of the aim. Some aims are arguably worth working toward even though achievement would be very difficult to attain. The importance might be viewed as intrinsic, i.e., that the cause itself is good, just, honorable, etc. Or it might be viewed as consequential, i.e., that if the goal is achieved, the results that follow are especially desirable. One more consideration, distinct both from the odds of success and from the importance of the aim, is the amount of resources required. All things equal, it would seem that an aim that requires significant resources should demand higher justificatory standards. With the above in mind, a case can be made for committing meaningful professional resources to the philosophical autonomy problem. This is not a claim that philosophy's autonomy problem is more serious than any other mentioned or unmentioned cause. Nor is it necessarily, clearly, or obviously true that limited resources should be devoted to fighting it. The case to be made is weaker but still important: the autonomy problem is a "worthwhile cause" that the profession should take seriously. Said differently, there are sufficient prima facie grounds for concern that it makes sense for the profession to reflect on the autonomy problem and consider making it a professional priority. Perhaps, after reflection, this problem is not worth the resources. Or maybe we should sacrifice some resources, but a comparatively small amount. Let us start off by agreeing that, ceteris paribus, the philosophy profession is better off if it has better philosophers. Next, all things again held equal, it seems plausible that philosophers who (for the most part) govern their own intellectual life are better philosophers than those who (for the most part) have their intellectual life governed by others. The latter claim might even be true by definition.
The very act of “doing philosophy” seems to require some degree of epistemic self-governance: if a professional philosopher had a ghostwriter, never penning their own essays, this “philosopher” would not be doing philosophy at all. Less extreme but still problematic is if a philosopher arrives at most of their claims and arguments through the advice of their spouse. Whether, and to what extent, this philosopher might be doing philosophy is murky at best (if they are doing philosophy at all). Hence, the nature of “philosophy” requires some degree of intellectual autonomy, i.e., some degree of governing your own intellectual life. It follows that one reason to care about the autonomy problem is that if it were to creep up on the profession, slowly tightening its grasp, the profession might deteriorate until it is a “philosophy” profession in name only.
Notes

1 While autonomy as self-governance is by no means the only conception of autonomy (e.g., Feinberg, 2009, and Arpaly, 2004, argue for other conceptions), neither is it a controversial one. The very first sentence of the Stanford Encyclopedia's entry on "Personal Autonomy" reads, "Autonomous agents
are self-governing agents" (Buss & Westlund, 2018). And in the Stanford Encyclopedia's entry titled "Autonomy in Moral and Political Philosophy," the first paragraph under section 1 states, "In the western tradition, the view that individual autonomy is a basic moral and political value is very much a modern development. Putting moral weight on an individual's ability to govern herself, independent of her place in a metaphysical order or her role in social structures and political institutions is very much the product of the modernist humanism" (Christman, 2020). While this second quote is skeptical of the value of self-governance, it does not question that autonomy is a form of self-governance; indeed, it seems to assume exactly that. Of course, what it means to self-govern is a hotly debated issue with an entire literature unto itself. Some argue that self-governance fundamentally involves the ability to be responsive to reasons in a particular sort of way (Christman, 2001, 2014; Mele, 1993, 1997, 2010), while others contend that self-governance involves an alignment of motivation, action, and cognitive state (Frankfurt, 1988; Watson, 1975; Jaworska, 2007a, 2007b; Shoemaker, 2003; Bratman, 1979). And there are several other proposals not mentioned. However, since the point of this chapter isn't to uncover the exact nature of autonomy, but rather to point out a particular way in which the profession is epistemically lacking, the paper will not dwell on the differences. Instead, we will proceed assuming nothing more than the uncontroversial, i.e., that autonomy is a form of self-governance, and we can define self-governance only by the very basic way in which we can distinguish persons governed by external sources from those who are instead governed by the self. Interestingly, there does not seem to be much disagreement about which sorts of agents govern themselves and which sorts do not.
The disagreement comes in when philosophers attempt to construct a conceptual illustration of what is going on in those instances in which most agree agents are self-governing.
2 See Coady's (2002) account of "independence" and "self-creation" for related ideas.
3 In her 2012 book, Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief, Linda Zagzebski makes arguments along similar lines, i.e., that epistemic autonomy is compatible with trust in epistemic authorities.
4 In the philosophical literature on trust, several philosophers have argued that the ability to trust benefits society by increasing the quantity of societal cooperation, its efficiency, and the overall social benefits (Dimock, 2020; Gambetta, 1998; Hardin, 2002). There are obvious connections between this type of societal trust and what I am calling "epistemic delegation." Trust is a critical aspect of *appropriate* epistemic delegation, or at least *warranted* trust is. If we left no room for warranted epistemic trust, either everyone would know a lot less (because they would lack the cognitive skills and the time to figure everything out on their own) or society as a whole would miss out on important aspects of modern everyday life, as all citizens would be overly focused on epistemic needs. If we were all so focused on epistemic needs, we would be deprived of experts contributing via other non-epistemic talents and skills.
5 I am not the first to suggest autonomy might be something other than an "all or nothing" or "have it or not" concept. See Mackenzie (2014) and Killmister (2020) for other discussions along these lines.
6 There are various manifestations of what I am calling the "autonomous enough" point on the autonomy spectrum. In businesses and non-profits that use project managers, the project manager might have the most autonomy
on the team, i.e., they not only make more decisions, but also have the authority to make various decisions that might override the opinions of other team members. Notwithstanding, the team members are still given a fair amount of autonomy; they are not merely at the beck and call of the project manager.
7 There is no shortage of articles and empirical support connecting intense social and reputational pressures with perceived and/or actual academic career success, especially in relation to networking and publishing. See, for instance, Gendron, 2008; Labianca et al., 2001; Haddock-Fraser, 2020; Faria, 2002; Haeussler, 2011; McKay et al., 2008; Muhs et al., 2012; Ovadia, 2014; Barnes et al., 1998; Nicholas et al., 2015; Kim et al., 2018; Sutherland, 2017; Liu & Yang, 2003; Osterloh & Frey, 2020; and Faria & Goel, 2010. While these articles are not specifically focused on philosophy, in many of them it is fair to assume the pattern would hold.
References

Arpaly, N. (2004). Unprincipled virtue: An inquiry into moral agency. Oxford, UK: Oxford University Press.
Barnes, L. L., Agago, M. O., & Coombs, W. T. (1998). Effects of job-related stress on faculty intention to leave academia. Research in Higher Education, 39(4), 457–469.
Bratman, M. (1979). Practical reasoning and weakness of the will. Noûs, 153–171.
Buss, S., & Westlund, A. (2018). Personal autonomy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/spr2018/entries/personal-autonomy.
Christman, J. (2001). Liberalism, autonomy, and self-transformation. Social Theory and Practice, 27(2), 185–206.
Christman, J. (2014). Relational autonomy and the social dynamics of paternalism. Ethical Theory and Moral Practice, 17(3), 369–382.
Christman, J. (2020). Autonomy in moral and political philosophy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy.
Coady, C. A. J. (2002). Testimony and intellectual autonomy. Studies in History and Philosophy of Science Part A, 33(2), 355–372.
Dimock, S. (2020). Trust and cooperation. In J. Simon (Ed.), The Routledge handbook of trust and philosophy (pp. 160–174). New York, NY: Routledge.
Faria, J. R. (2002). Scientific, business and political networks in academia. Research in Economics, 56(2), 187–198.
Faria, J. R., & Goel, R. K. (2010). Returns to networking in academia. NETNOMICS: Economic Research and Electronic Networking, 11(2), 103–117.
Feinberg, J. (2009). Autonomy. In J. P. Christman & J. Anderson (Eds.), Autonomy and the challenges to liberalism: New essays (pp. 27–53). Cambridge, UK: Cambridge University Press.
Frankfurt, H. G. (1988). The importance of what we care about: Philosophical essays. Cambridge, UK: Cambridge University Press.
Gambetta, D. (1998). Trust: Making and breaking cooperative relations. New York, NY: Blackwell.
90 Maura Priest

Gendron, Y. (2008). Constituting the academic performer: The spectre of superficiality and stagnation in academia. European Accounting Review, 17(1), 97–127.
Haddock-Fraser, J. (2020). The unseen pressures of academia. In M. Antoniadou & M. Crowder (Eds.), Modern day challenges in academia: Time for a change (pp. 211–225). Cheltenham, UK: Edward Elgar Publishing.
Haeussler, C. (2011). Information-sharing in academia and the industry: A comparative study. Research Policy, 40(1), 105–122.
Hardin, R. (2002). Trust and trustworthiness. New York, NY: Russell Sage Foundation.
Jaworska, A. (2007a). Caring and internality. Philosophy and Phenomenological Research, 74(3), 529–568.
Jaworska, A. (2007b). Caring and full moral standing. Ethics, 117(3), 460–497.
Killmister, S. (2020). Taking the measure of autonomy: A four-dimensional theory of self-governance. London, UK: Routledge.
Kim, E., Benson, S., & Alhaddab, T. A. (2018). A career in academia? Determinants of academic career aspirations among PhD students in one research university in the US. Asia Pacific Education Review, 19(2), 273–283.
Labianca, G., Fairbank, J. F., Thomas, J. B., Gioia, D. A., & Umphress, E. E. (2001). Emulation in academia: Balancing structure and identity. Organization Science, 12(3), 312–330.
Liu, Y., & Yang, Y. R. (2003, March). Reputation propagation and agreement in mobile ad-hoc networks. In 2003 IEEE Wireless Communications and Networking, WCNC 2003 (Vol. 3, pp. 1510–1515). IEEE.
Mackenzie, C. (2014). Three dimensions of autonomy: A relational analysis. In A. Veltman (Ed.), Autonomy, oppression, and gender (pp. 15–41). New York, NY: Oxford University Press.
McKay, R., Arnold, D. H., Fratzl, J., & Thomas, R. (2008). Workplace bullying in academia: A Canadian study. Employee Responsibilities and Rights Journal, 20(2), 77–100.
Mele, A. (1993). History and personal autonomy. Canadian Journal of Philosophy, 23(2), 271–280.
Mele, A. (1997).
Strength of motivation and being in control: Learning from Libet. American Philosophical Quarterly, 34(3), 319–332.
Mele, A. (2010). Moral responsibility for actions: Epistemic and freedom conditions. Philosophical Explorations, 13(2), 101–111. doi:10.1080/13869790903494556.
Muhs, G., Niemann, Y., Harris, A., & Gonzalez, C. (Eds.). (2012). Presumed incompetent: The intersections of race and class for women in academia. Boulder, CO: University Press of Colorado.
Nicholas, D., Herman, E., Jamali, H., Rodríguez-Bravo, B., Boukacem-Zeghmouri, C., Dobrowolski, T., & Pouchot, S. (2015). New ways of building, showcasing, and measuring scholarly reputation. Learned Publishing, 28(3), 169–183.
Osterloh, M., & Frey, B. S. (2020). How to avoid borrowed plumes in academia. Research Policy, 49(1), 103831.
Ovadia, S. (2014). ResearchGate and Academia.edu: Academic social networks. Behavioral & Social Sciences Librarian, 33(3), 165–169.
Shoemaker, D. W. (2003). Caring, identification, and agency. Ethics, 114(1), 88–118.
Simmel, G. (2006). The metropolis and mental life. In M. Featherstone & D. Frisby (Eds.), Simmel on culture: Selected writings (pp. 174–186). London, UK: Sage Publications.
Siragusa, T. (2020, January 14). Autonomy increases employee loyalty. Retrieved January 10, 2021, from https://medium.com/radical-culture/autonomy-increases-employee-loyalty-c72a0e31eb31
Sutherland, K. A. (2017). Constructions of success in academia: An early career perspective. Studies in Higher Education, 42(4), 743–759.
Watson, G. (1975). Free agency. The Journal of Philosophy, 72(8), 205–220.
Zagzebski, L. T. (2015). Epistemic authority: A theory of trust, authority, and autonomy in belief. New York, NY: Oxford University Press.
Part II
Epistemic Autonomy and Paternalism
5 Norms of Inquiry, Student-Led Learning, and Epistemic Paternalism

Robert Mark Simpson
5.1 Introduction

There is a compelling case for epistemic paternalism – roughly: interfering with people's inquiry in order to make it go better – in certain narrowly defined scenarios. Evidence control in law is one example that's routinely cited in work on this topic. We set things up in legal trials so that jurors can't access all the evidence that's relevant to a case. We don't let jurors themselves decide whether to review hearsay evidence or information about the defendant's prior convictions, because having access to that evidence makes it less likely that jurors will accurately judge the facts of the case. Similar logic applies to fact-finding situations in everyday life. Suppose I'm asking for my colleague's opinion on whether a student might have paid someone else to write his essay. I have a few bits of evidence that lead me to suspect as much, so I ask my colleague for her take on what that evidence shows. But I may decide to withhold one specific piece of (potentially) relevant evidence, e.g. the defensive reaction my student had when I raised my worries with him. I may think that this piece of evidence is easily misinterpreted, and that my colleague's take on the situation will be more reliable if she isn't trying to factor it in. I take the in-principle justifiability of epistemic paternalism in cases like these to be a settled conclusion (see Goldman, 1991, pp. 115–121; Ahlstrom-Vij, 2013, pp. 138–153). But it is an open question whether we should use information control and other epistemically paternalistic measures outside of the specific contexts in which their justifiability appears to be clear-cut. There are lots of people today believing lots of falsehoods (e.g. related to public health, or the climate crisis) that have the potential to result in lots of harm. It seems possible that a well-designed system of information control, e.g.
involving legal penalties for spreading misinformation online, could reduce the prevalence of these false beliefs and mitigate the harms. And a similar rationale to the one that we saw in the fact-finding cases seemingly speaks in favor of this. If we let people see all the notionally relevant evidence, they’re liable to be led astray by
misleading bits of evidence. So we should filter the evidence we make available to people based on what evidence is most likely to result in people forming true beliefs.1 I want to identify some factors that speak against any such scaled-up, further-reaching program of epistemic paternalism. There are good reasons to worry about practices of non-consultative information control being misused or abused.2 There are also reasons to worry about how a widespread program of information control could lead to longer-term epistemic disutility, by pushing us towards a broadly dysfunctional social-epistemic culture. Respectful consultation between experts and their clients encourages a culture where people listen, trust, venture ideas, and deepen their understanding of the world through dialogue. Widespread information control may be incompatible with this sort of culture. The worries I have in mind are related, but they run in a slightly different direction. Epistemically benefiting people is about helping them to get true beliefs, knowledge, understanding, etc. But as we know from our own experience as inquirers, acquiring and retaining these epistemic goods isn't a linear, forward march. We forget things. Information goes in one ear and out the other. We learn things that mean nothing to us, or we fail to learn things because we can't get interested in them in the right way. At every turn, our interests and temperament affect our ability to learn, as well as our retention and continued understanding of what we have already learned. The information controller must take himself to have a good idea as to what information is likely to lead other people towards accurate beliefs, and what is likely to confuse or mislead them. But the epistemic benefits of information control in a given case don't just depend on the "raw" tendencies of certain bits of information.
They also depend on how an inquirer's interests, temperament, and overall state of mind orient them toward the range of possible inquiries and other epistemic "tasks" that they might pursue at any given moment. A non-consultative information controller will rarely be in a position to know how all these other factors are in play: which inquiries are likely to be most fruitful, which learning is likely to be retained, and which epistemic tasks are most likely to yield epistemic benefits, given the inquirer's state of mind at a particular moment. To use a broad simile: a non-consultative information controller is a bit like an inflexible teacher with a rigid mindset about the content of his syllabus. He aims to transmit the optimal package of information, given his expert insight into what his students, taken as a cohort, ought to know. But this simile gives us a sense of the limitations of non-consultative information control. It tries to be epistemically beneficent, but it doesn't avail itself of insights into the learner-specific factors that bear on its effectiveness. This may not be a problem in narrow fact-finding cases, where the "learning agenda" is settled in advance. But it is an issue if we intend to "go paternalistic" in a wider range of scenarios.3
I will develop the argument sketched above across four sections. In Section 5.2 I discuss how the open-ended nature of people's interests makes it hard to epistemically benefit people non-consultatively. In Section 5.3 I consider how people's individual temperaments affect the success of their inquiries, and why this also makes it hard to epistemically benefit people non-consultatively. In Section 5.4 I expand on the suggestions outlined above about educational practices, and consider how a student-led approach to teaching and learning may be justified on epistemic grounds. And in Section 5.5 I discuss how we should think about epistemic autonomy, having recognized the importance of individuals' interests and temperaments vis-à-vis the success of their inquiry.
5.2 Open-ended Inquiry Let’s get a sharper definition on the table. Epistemic paternalism means . Interfering with the inquiry of some agent, A 1 2. Without consulting A 3. In order to epistemically benefit A.4 I see information control – i.e. withholding information from people, or making them attend to information they don’t yet have – to be one straightforward way of interfering with people’s inquiry. The problems I will be exploring all tie into the non-consultation part of our definition. I want to illuminate a cluster of challenges that arise in attempts to nonconsultatively bestow epistemic benefits on others. One key assumption I am making, in explaining these problems, is that beliefs must be of some interest to their recipient in order for their acquisition to be epistemically beneficial. In this assumption I’m following the two most prominent defenders of epistemic paternalism, Goldman and Ahlstrom-Vij, who both think the epistemic benefit of an agent gaining a true belief depends on the extent to which she is interested in it. It is “a function of whether the … agent is interested in the questions to which the belief pertains” (Ahlstrom-Vij, 2013, p. 53; see also Goldman, 2000, p. 321). A certain kind of epistemological purist might baulk at this assumption. But for most of us it accords with our intuitions about the nature of epistemic value.5 By itself this assumption doesn’t create a huge problem for defenders of epistemic paternalism. There are lots of cases where you have a good sense of what a person is interested in, and where you can see whether the epistemic goods you’re trying to bestow tie into the recipient’s interests in such a way as to constitute a benefit. Suppose you overhear someone at the library saying she’s there to look for books on the Vietnam War, a topic that she says she’s previously only learned about in dribs and drabs. You happen to be standing by the shelf that houses the library’s small
collection on the Vietnam War. You are well-read on the topic, and you notice a book there that is a less reliable source than all the others. It misrepresents unsourced anecdotes as hard facts, and proposes a wildly tendentious interpretation of the historical record. Before the would-be reader gets to the shelves to start browsing, you remove this book from its spot, hiding it in your bag until she's come and gone. In this situation you may think of yourself as doing the inquirer a favor, by preventing her, as a novice, from unwittingly choosing a book that would have disserved her pursuit of knowledge. But the example also alerts us to the difficulties that come with trying to beneficently guide other people's inquiry via information control. Things seem fairly simple in the library case because you know what knowledge the inquirer is trying to gain, and you're just helping her take more effective means to her ends. But often, including in cases just like this, people's inquisitive interests are open-ended. The aims of their inquiries evolve as those inquiries proceed. Someone starts out wanting to learn about the Vietnam War, but along the way she gets interested in something broader (e.g. cold war politics, south-east Asian history), or narrower (e.g. the culture of wartime Saigon, the battles that her father fought in), or something tangential (e.g. war photography, French colonial architecture). Granted, not all inquiries are like this. Sometimes you want to know where your keys are, so you check your pocket and there they are. Or you ask your partner, and she says the keys are on the table, and that's that.
But even in situations that involve off-the-cuff observational or conversational queries, as opposed to more planned-out sequences of investigative research, it’s a common enough experience to inquire into P, only to find that the evidence that settles the question of whether P opens up further questions about Q and R, which you’re just as interested in as you were in P – or indeed, which lead you to think that the real underlying reason why you were interested in P was because of P’s connection to these further matters, Q and R. The worry that’s lurking, for the epistemic paternalist, is that if you aren’t being consultative, it’s hard to make a person’s inquiry more successful, because the aims of her inquiry, and thus what would qualify as a success in it, evolve as the inquiry develops, in a way that the paternalizer isn’t well placed to keep track of. Whether you are epistemically benefiting someone depends on whether you’re helping them gain interest-relevant beliefs, knowledge, or understanding. And even if you have a good initial sense of what your beneficiary is interested in, her interests will have nuances that surface as the inquiry proceeds, and which alter its aims. If you consult as you go, to get a more responsive understanding of the inquirer’s interests, you can generally do a better job at helping her inquiry succeed. But then your intervention will be collaborative or lateral assistance, as opposed to a non-consultative interference.
In the library case, hiding the dubious book may be epistemically beneficial. But you could also ask the inquirer more about what she's interested in. It may turn out that the book is, despite its failings, a good starting point for learning about some adjacent aspects of the Vietnam War that specially pique the inquirer's interest. In short, there are cases in which you will be more effective in your epistemic beneficence by being consultative, and thus tuned into the inquirer's dynamic curiosities, and indeed, cases where declining to consult with an intended beneficiary could be downright epistemically harmful. Far-reaching programs of epistemic paternalism seem questionable at least until we have a sense of how prevalent such cases are, in comparison to cases where the "agenda of inquiry" is fixed in a way that nullifies the worry.
5.3 Norms of Inquiry and Finite Brainpower

In the above I'm assuming that people are entitled to have their own interests, and hence that there is some amount of agent-relativity in judgements about the comparative interestingness of different inquiries. If we are asking "should A be interested in P?", or "should A be more interested in P than Q?", we will often need to know some things about A – her ideals, longer-term goals, personal fascinations, sense of identity, etc. – and how they all add up to a particular schedule of interests. Granted, we may think that what matters isn't only people's active interests, i.e. the interests they consciously recognize in themselves, but also the interests they would have if they were fully informed and reflective (Ahlstrom-Vij, 2013, p. 55; Goldman, 1999, p. 95). But this still leaves room for the type of agent-relativity I am adverting to. Permissible variations in people's interests and curiosities aren't entirely due to variations in how informed and reflective people are. A group of people who are similarly reflective and informed are still likely to have a great deal of diversity in their interests. Even granting all this, one might think that informed, reflective people are obliged to be interested in certain things. And this gives rise to a natural reply to my points above about why the open-ended nature of people's interests calls for a consultative approach to epistemic beneficence. Whatever agent-relativity applies to judgements about the interestingness of different questions, presumably the information controller is sometimes able to think "this person, A, should be interested in P, and so I'll be benefiting A in helping them learn about P, regardless of whether they evince an active interest in P." And consultation doesn't always make sense in such cases. If A is actively interested in P, or ready to have an active interest awakened, then consultation is unnecessary.
And if A is resistant to taking an active interest in P, then consultation may be counterproductive. "It will be better to help A learn about P on the sly," the paternalist may think, "instead of raising the issue with her head-on."
100 Robert Mark Simpson I don’t think this reasoning will get us to the conclusion that, for the epistemically beneficent actor, it is in general prudent to eschew consultation with a would-be beneficiary. For one thing, this rationale only applies in a narrow set of cases, i.e. those in which you’re sharing information related to things that the beneficiary is obliged to be interested in. But even in these cases there are complications in how we understand the relative weight and priority of putatively obligatory interests, which create subtle obstacles for any attempt at non-consultative epistemic beneficence. For example, suppose we think people should be interested in questions about the nature and effects of Covid-19. The issue is what level of priority should be assigned to that interest, given people’s finite cognitive resources, and their presumptive entitlement to allocate some of those resources to other activities. An extreme view is that people should prioritize learning about Covid-19 over and above all other activities, including recreation, socializing, or anything else that isn’t strictly necessary for survival. A very slightly more moderate view would be that people may spend some of their brainpower on things besides pursuing epistemic goods, but that with the part of their cognitive resources given over epistemic goods, finding out about Covid19 should take priority over everything else. Both views seem totally implausible. The somewhat plausible thesis that lies in the rough vicinity of these extreme views is that people should allocate some brainpower to pursuing epistemic aims, and should have learning about Covid-19 as one of their epistemic aims. Once we retreat to that position, though, it seems like there will be lots of wiggle room vis-à-vis what degree of priority a given agent should assign to learning about Covid-19, or any other topic they are obliged to have an interest in. 
Besides the fact that there are endless other topics to seek out new information on, there are also other types of epistemic “work,” besides seeking out new evidence and information, to which the agent may allocate some of her cognitive resources. One kind of work is keeping track of things that she already knows: reviewing her knowledge, recalling its evidentiary bases, and engaging in other sorts of mental exercises that recommit her knowledge to memory. Another kind of work is expanding her stock of knowledge not through seeking out new information, but by reflectively figuring out what further distinct and non-trivial pieces of knowledge might be inferable on the basis of her current stock of knowledge. Jane Friedman’s work on the norms of inquiry – what she calls zetetic norms – offers a rich analysis of the kinds of resource-allocation tradeoffs that I’m pointing to here. One key idea Friedman wants to emphasize is that there is no obvious way to arbitrate between norms that tell you to acquire new information, as part of an inquiry, and norms that
tell you to engage in reflections and inferences drawing on your current stock of knowledge. You have limited brainpower and time, and these two different classes of norms – while they are both ultimately aiming at the same sort of epistemic goods: true belief, knowledge, understanding – offer divergent prescriptions as to your resource-allocation at any given moment. To paraphrase Friedman: There is a lot that a subject, S, can do at some time, t, and this "do" doesn't only range over bodily actions, like talking to people, or looking around, or searching online. It also ranges over mental actions like drawing inferences, making judgments, and searching memory. Let's say that A is the set of all the things S is in a position to do at t. The strategic norms for inquiry are going to render a verdict about which acts in A S is allowed to perform and which they are not. If we assume that S should at least take some means for trying to answer their question at t, then there may be many acts in A that S isn't permitted to do, at t, from the perspective of these strategic norms. This is because many of the things S can do at a given moment aren't going to be means to them answering their question. (Friedman, 2020a, pp. 18–19, paraphrased) In essence, the idea is that retaining and deepening your current stock of knowledge can get in the way of actively inquiring after new information, and vice versa. There is also another way that the two classes of norms can come into conflict, which relates to suspending judgement. Friedman convincingly argues that treating a proposition P as an object of inquiry requires you to suspend judgement about P.
But then it seems like there are cases in which the available evidence justifies you in having a settled belief that P, but where it is also permissible to open an inquiry into whether P, on the off chance that the evidence is misleading, or liable to be outweighed by as-yet-unavailable evidence indicating P’s falsity. In cases like these, the zetetic norms and doxastic epistemic norms generate different prescriptions. One calls for you to believe P, while the other permits you to suspend judgment about P (Ibid., pp. 8–12).6 These aren’t just meant to be abstruse technical claims about the structure of epistemic norms. These kinds of tensions and trade-offs are a pervasive feature of our epistemic lives. Most of us have forgotten a huge amount of things we once knew. And for just about all of us, the story of this forgetting is partly a story about opting to prioritize the acquisition of new information over retention and reflective deepening of what we already knew. When we decide to undertake some inquiry, we suspend judgement about various propositions (some of which we may
have formerly held as settled beliefs), and we allocate some of our finite cognitive resources towards the acquisition of new information (that illuminates the truth of those propositions), and away from any number of other cognitive tasks that would serve some other part of our all-things-considered epistemic aims, via some other sort of epistemic work. There are cases in which it seems obvious that an agent is negotiating these trade-offs badly. If you lose track of important knowledge because you get swept up in learning about pointless trivia, you're managing things badly. If you fail to learn about something you really need to learn about, because you decided to re-read a book of facts that you already memorized (just to further reinforce your memory), you're managing things badly. But most of the time it isn't obvious what epistemic conscientiousness per se calls for in negotiating these trade-offs.7 And so, again, agent-relative factors seem to have a bearing on our judgements about this. If a person has a burning appetite for a particular inquiry, that's a good reason for her to allocate some of her cognitive resources to the acquisition of information relevant to that inquiry. If another person is disposed to retain and reflectively deepen her knowledge of a certain topic, rather than pursuing new inquiries, that is a good reason for her to allocate her finite cognitive resources accordingly. The underlying reasoning here is just simple, run-of-the-mill epistemic consequentialism. "Outward" investigation and "inward" ratiocination can both deliver epistemic goods, but more epistemic goods are likely to be realized if, at any given time, people are striking a balance between investigation and ratiocination that is responsive to their own temperamental leanings. We are now in a position to see a further way in which attempts at epistemic paternalism can go awry, despite beneficent intentions.
In Section 5.2 I argued that you can do better at assisting someone's inquiry by being consultative, and thus attuned to the inquirer's dynamic interests. But epistemic beneficence also calls for you to help the inquirer strike a balance between investigation and ratiocination that is optimal for their attainment of epistemic goods. And this factor also speaks in favor of a consultative approach, insofar as the inquirer herself is typically better placed to interpret her own temperamental leanings, and in light of these to decide whether more or less active inquiry is likely to help her attain epistemic goods. Information control isn't only a way of guiding inquiry that is already underway. It can also be a way of instigating or terminating inquiry. Placing some information under a person's nose, without asking them if they're interested, is a way of nudging them towards an inquiry that makes use of that information. Removing information from someone's reach, without first checking with them, is a way of nudging them away from inquiry that makes use of that information. Information
controllers affect how the balance between inquiry and ratiocination is struck, for those agents whose information they are controlling. If you consult as you go, you can help strike a better balance on this front, by being sensitive to the agent's own temperamental inclinations, and thus you can generally do a better job of helping the inquirer attain epistemic goods. But again, your beneficent intervention in that case will be a collaborative form of assistance, not a top-down, non-consultative interference.
5.4 Student-led Learning

This picture of consultative epistemic beneficence that I am sketching will resonate with those who teach, and who take pedagogy seriously. Of course, in most teaching situations some learning outcomes are decided upon in advance, and to that extent are not led by students' interests or temperaments. But good teachers look for ways to tap into students' interests and temperaments in trying to bring about learning outcomes. They let students focus on what fascinates and excites them within the syllabus. They allow lines of inquiry to open up in a way that's student-led, even if that sometimes means digressing, or jumping ahead a little, or going right back to basics. ("There are no stupid questions.") And they allow other lines of inquiry to trail off if students aren't latching on in the right way. Some teachers may have an ethical story to tell about these pedagogical practices. They may say that having an adaptable, student-led approach to teaching is about respect, fairness, or promoting welfare. But these practices can be defended on purely epistemic grounds. You can favor a student-led approach to teaching precisely because you want your teaching to effectively promote the epistemic good, and you think this approach is the best way to do so. This picture stands in striking contrast to how education is portrayed by proponents of epistemic paternalism. What they portray is something more like what Paulo Freire, in his classic work on education, Pedagogy of the Oppressed (2017), characterizes as a "banking" model of education. Educators use their expertise to override students' neophytic attitudes about what it's worth being interested in, and what it's worth paying attention to in order to learn about those interests. When Goldman speaks of the exclusion of indefensible views from a curriculum, for example, he points to health education classes.
Health classes “do not give equal time to drug pushers to defend the safety of drug use,” he says, “or to quacks to present and defend their cures.” And such omissions, he suggests, have veritistically good consequences (1991, p. 121). Goldman thus regards information control in education as one of the well-established social practices that conflicts with a principle that would prohibit information control by epistemic authorities (Ibid., p.
114). Picking up on the same subject, Ahlstrom-Vij considers an extension of this sort of rationale. We may even imagine that teachers could justifiably withhold true and perfectly accurate theories, on the grounds that those theories … have a tendency to confuse the students, and have them draw the wrong conclusions. (Ahlstrom-Vij 2013, p. 31) His thought here is that there is a great deal of complexity in explaining how much of a health risk different drugs pose, given facts about addictiveness, dosage strength, etc., and moreover, that some legal drugs, like alcohol, or over-the-counter medication, can potentially have worse health effects than some illegal drugs. And so: A completely accurate account of the risks and benefits of drugs would have to be fairly complex, on account of having to make several distinctions and qualifications … such an account may also be more likely to lead students to draw inaccurate conclusions than a less sophisticated … account, such as one on which it is maintained, say, that all drugs are bad. (Ahlstrom-Vij, 2013, p. 31) As Ahlstrom-Vij rightly goes on to say, it is an empirical question whether suppressing information and espousing simplified – and strictly speaking, false – summaries will result in better epistemic outcomes for students. I don't pretend to have an overview of the data, but it strikes me that advocates of more transparent, consultative, student-led pedagogy might well want to cite drug education as a prime example of the folly of information control in teaching. Who knows how many false beliefs about the health risks of marijuana, alcohol, and prescription pain-killers are the result of paternalistically mis- or under-informative drug education programs, which simplistically portray the first of these as an unsafe drug, and the other two as presumptively safe refreshments/medicine?
However beneficent one’s underlying motives might be, any responsible educator needs to reckon with the possibility that suppressing the facts or propounding half-truths can lead to totally dire veritistic outcomes. More to the point for our purposes, though, education that isn’t responsive to students’ interests and temperaments seems to run a serious risk of failing to properly engage the student’s mind. Ahlstrom-Vij seems to me too sanguine about this.
Norms of Inquiry 105

Many of us have at one point or another in the course of our schooling felt that we are being taught thoroughly uninteresting and irrelevant things. But as we look back years later with more informed eyes, we see that in many cases we were actually being taught things that are relevant and interesting, although we were not able to see this at the time. (Ahlstrom-Vij, 2013, p. 55)

Many people do have such an experience. But we can’t infer much from this until we consider the informative comparisons. First, in the wake of paternalistic educational experiences, how common is retrospective appreciation, like what Ahlstrom-Vij describes, compared to retrospective frustration, which, with the benefit of hindsight, sees the preoccupations of one’s education as being just as pointless as they seemed at the time? Second, do we find greater retrospective appreciation about the topical preoccupations of one’s education in the wake of paternalistic educational experiences, or do we see more of this appreciation in the wake of less paternalistic, more student-led education? These are empirical questions. But for purposes of hypothesis selection, it’s notable that in pretty much all mainstream storytelling about education, from Hard Times to Mona Lisa Smile, rigidity about the syllabus is portrayed as a mark of pedagogical incompetence, while adaptability and sensitivity to students’ individual interests and temperaments is portrayed as the mark of a true teacher.8 I am only trying to make a modest point here. I’m not sure whether we can use arguments about the veritistic merits of different educational philosophies to draw far-reaching conclusions about the general justifiability of non-consultative information control. I’m really just trying to problematize the appeal to education, as a supposedly congenial example and reference point for advocates of information control.
It’s true that part of what occurs in education is that epistemic authorities decide on behalf of others what they should and shouldn’t pay attention to. But this cannot be seen as lending support to the idea that non-consultative information control is a reliable, tried-and-tested way for authorities to do their work. Why? Because the authorities-deciding-what-students-pay-attention-to part of educational practice is necessarily married up with another part of education, which is about figuring out how to let the contents of a syllabus take hold in students’ minds. And it’s doubtful that a non-consultative approach is the optimal one vis-à-vis this other part of educational practice. This other area of education is all about being sensitive to the interests and temperaments of students, and working together with students to figure out
when and where new inquiries should take off, and where, instead, energy should be spent on retention and reflective deepening of what has already been learned.9
5.5 Epistemic Autonomy and Foreign Wills

Let’s take stock. Conferring epistemic benefits on a person is easy in some cases. Tell her something you know, that she doesn’t, and you’ve imparted a bit of knowledge. But conferring epistemic benefits via non-consultative information control is harder. In order to do it effectively you need to have a good understanding of what informational provisions will, at the point of intervention, best conduce to the inquirer gaining true beliefs etc. And this depends on facts about the inquirer’s state of mind that aren’t readily ascertainable without consultation. What are her interests? How are they evolving, or liable to evolve, in the course of an inquiry? Will she make greater epistemic gains by taking on new information at the moment, or by working on retention and reflective deepening of things she’s already learned? If you want to epistemically benefit someone then you should consult with them to get a better understanding of such matters. This is what skilled teachers do by making their teaching practice student-led in various ways. As a would-be epistemic benefactor, you should only eschew consultation with a beneficiary if you think consultation is likely to totally derail her learning, e.g. if she has an appetite for misleading information, which consultation is somehow going to fuel. But outside of such unusual cases, consultation with the inquirer looks like a key part of epistemically beneficent practice.10 This suggests a different way of thinking about epistemic autonomy than what we often find in debates around epistemic paternalism.
Some commonly cited definitions of epistemic autonomy unhelpfully conceive of it as a rather eccentric state of being – something like a total unwillingness to learn from or with the help of others.11 Of course, epistemic autonomy thus defined gives us little reason to refrain from information control, because on this definition no one has much epistemic autonomy, and few of us would really want it.12 The fact that we all learn things from and with others should just be taken as part of the descriptive scenery that forms the backdrop against which theorizing about the nature of epistemic autonomy occurs. Being epistemically autonomous cannot, in light of this, be about being disconnected from others in one’s epistemic life. It must be about being connected to others in the right ways – ways that facilitate learning, not indoctrination or excessive deference.13 Being able to conduct inquiry in a way that’s guided by one’s own interests and temperamental leanings (vis-à-vis the trade-off between investigation and ratiocination), and, correspondingly, being free from the interference of foreign wills in respect of these things,14 is one significant aspect of epistemic autonomy, properly understood.15
Epistemic autonomy thus conceived of needn’t be taken as a deontic side-constraint on the pursuit of epistemic goods. It isn’t that you should temper your pursuit of epistemic goods out of respect for people’s epistemic autonomy. The idea, rather, is that you should respect epistemic autonomy because this is how the effective pursuit of epistemic goods works for beings like us. In the introduction to this volume, Matheson and Lougheed ask whether it is better for someone to be autonomous in their attainment of epistemic goods. The answer suggested by my account isn’t that it’s better for people to attain epistemic goods autonomously, but that people are more likely to attain epistemic goods when their autonomy is respected. On one level this flies in the face of the empirical findings that drive defenses of epistemic paternalism. We know that inquirers make all sorts of errors if left to their own devices. In an unregulated informational environment, some people will gravitate towards sources that are utterly inimical to the attainment of epistemic goods, for the inquirer and others. This is a crucial part of what motivates the notion that people need to be helped in conducting their inquiries.16 But to say that people need help isn’t to say people’s autonomy needs to be overridden. Cooperative, autonomy-respecting assistance to someone’s inquiry is a veritistically desirable middle-ground between leaving a person to inquire alone, and paternalistically taking over her inquiry. The beneficent inquiry-assistant can let the inquirer’s interests and temperament direct the course of the inquiry, while sharing her expertise in a way that helps with the attainment of the inquirer’s aims. The argument for approaching things like this is that the inquiry is better for being approached in an autonomy-respecting way, precisely because it is thereby more effective at realizing the sought-after goods.
Most people who want to become chess grandmasters need help from a coach. The aspiring grandmaster will need to extend her knowledge of opening theory. She will also need to learn more endgame theory. In any given coaching session, her coach could focus on opening theory or on endgame theory. Or, if the trainee isn’t in a good headspace for taking in new information, her coach might give her activities – playing through classic games, solving puzzles – aimed at bedding down her current knowledge. If the aspiring grandmaster is smart, she will welcome her coach’s guidance about what to work on when. Her coach has a wider range of knowledge about how to get where she’s trying to go. But the trainee shouldn’t automatically defer to her coach’s advice, and her coach shouldn’t pressure her to do so. Sometimes the trainee will know best where epistemic gains are in the offing, based on her state of mind on a given day, and what’s sparking her interest. The beneficent coach can try to bully or cajole the trainee into doing exactly what he wants to prescribe, in each session, or he can act as a collaborative consultant offering expert insight. Both approaches recognize that the trainee needs
outside help. But the latter approach – the one I am recommending – is premised on the idea that help can be more effectively rendered, in general, if the recipient is able to guide the direction of her learning, and if her knowledge of her state of mind is self-consciously factored into this.
5.6 Conclusion

The most comprehensive defense of epistemic paternalism, from Ahlstrom-Vij, has a built-in response to the kinds of worries I have been presenting here. One of his conditions for justified epistemic paternalism is a burden-of-proof constraint: you need to think it’s likely that everyone interfered with will indeed be epistemically benefited, thanks to your interference, or else it isn’t justified (2013, p. 118). If it turns out that it’s extremely hard to benefit people via non-consultative information control, given the kinds of issues I am highlighting, then this account already tells us to proceed carefully, and only use such measures when we’re confident they will work. As I said at the outset, the justifiability of epistemic paternalism in specific situations isn’t in doubt. The burden-of-proof condition is a sensible caveat on how we extend the logic of those situations to other cases. I’m not attacking epistemic paternalism, then, so much as trying to keep it in its place. We should think more about the wider repertoire of non-paternalistic approaches that can be used by epistemic authorities trying to make positive social-epistemic interventions. We want to promote true belief in relation to controversies in public health, climate science, and other high-stakes issues. And we know that a laissez-faire approach – hoping that truth will win out, in some kind of Millian marketplace – is a vain hope in a world that has accepted global media empires and social network companies as its information-brokers (if, indeed, it was ever anything more than a vain hope). The question, for anyone engaged in the kind of ameliorative social epistemology that debates about epistemic paternalism are situated within, is which methods are most likely to do the most good, once we eschew laissez-faire optimism and start proactively intervening.
My view is that we should be cautious about using information control in legal fact-finding as a model for epistemically beneficent action more generally, and that we would do well, instead, to seek inspiration from student-led learning practices. The immediate practical obstacle with this, though, is in how we offer consultative input to people we are hoping to educate, guide, and de-indoctrinate, in relation to issues like climate change. These people aren’t just sitting in a classroom somewhere, waiting to embark on a voyage of epistemic self-improvement aided by a beneficent expert. We can’t go through the whole online world, one social media-user at a time, in order to find out about everyone’s interests
and temperaments, and then use this information to provide individualized epistemic assistance to those we are trying to epistemically benefit. Nevertheless, there are still ways of consulting with potential epistemic beneficiaries. We can do more to try to understand the interests and temperaments of the kinds of people who characteristically fall prey to misinformation, conspiracy theories, and the like. Instead of thinking: “these people can’t be trusted to reason intelligently, so we had better control the information they have access to,” we could do more work to understand what these epistemically disenfranchised people are looking for, in making sense of the world, and how to package the information we are hoping they will take on in a way that meets them half-way. No doubt that will strike some readers as wildly optimistic. Perhaps it is. The main take-away here is just that information control isn’t a magic bullet for promoting the good epistemic outcomes we are aiming for, or for ameliorating our social epistemic problems. And this is because really effective epistemically beneficent interventions need to take into account the interests and temperaments of the would-be beneficiaries. Non-consultative information control doesn’t do this.17
Notes

1 For an argument in defence of this approach to science communication, see e.g. John (2017).
2 Bullock (2018) and Basham (2020) discuss some worries along these lines. Of course, no advocate of epistemic paternalism will want to defend abuses of epistemically paternalistic techniques, which aim to deceive, hurt, or dominate people. However, the concern I am adverting to is much like that which surrounds calls for apparently beneficent uses of state censorship. If we give state agencies the power to censor, it seems likely that maleficent actors will seek control of that power in order to abuse it. There is a kind of Field of Dreams logic behind this worry: “if you build it (an apparatus of state censorship or information control), they (people who want to put that apparatus to tyrannical use) will come.”
3 My argument loosely resembles an old-fashioned argument against paternalism, roughly, that the paternalizer is rarely in a position to know enough about the paternalisee’s interests or preferences to effectively confer the benefits they’re seeking to confer. See Sugden (2008) for a discussion and defence of these kinds of arguments against paternalism, in the face of challenges from proponents of nudging.
4 This definition follows e.g. Ahlstrom-Vij (2013: 4), Bullock (2018: 434), and Croce (2018: 305).
5 There were 60,817 people at the MCG when the Melbourne Demons defeated the Adelaide Crows in the AFL’s second qualifying final on September 5, 1998. Because these informational titbits mean nothing to you, the reader, I haven’t epistemically benefited you by conveying them, irrespective of their truth, accuracy, etc. That is the key intuition behind the assumption I am introducing here.
6 For discussion of Friedman’s claims about inquiry and suspension of judgment, see (2017). For discussion of her claims about the divergent prescriptions of
zetetic norms and other (doxastic) epistemic norms, see (2020a) and (2020b). Much of the work in Friedman’s analysis is about showing how zetetic norms, i.e. norms that tell you how to practically carry out a successful inquiry, can/should be thought of as members of the overarching class of epistemic norms, as opposed to being norms of practical, means-end reasoning. Her reasoning on this is something like the following: doxastic epistemic norms – norms like: believe what your evidence shows – could either be seen as (i) norms of practical inquiry, or (ii) something else, something like pure commands of reason, to which we owe allegiance regardless of any practical payoffs. If (ii), then it’s unclear why we should regard epistemic norms as binding upon us (2020b: 32). But we do regard such norms as being binding upon us. So this pushes us towards (i), in which case zetetic norms and doxastic norms by definition belong to the same general class of norms.
7 It isn’t obvious because there seem to be different ways of attaining epistemic goods, and it isn’t obvious which of these ways, in a given set of circumstances, is the most effective means to that end.
8 Croce (2018) argues that we should distinguish between experts, i.e. skilled researchers who are good at finding out truths in their field, and epistemic authorities, i.e. people who have novice-oriented abilities related to transmitting knowledge and understanding of truths in a given field. The contrast between rigidity and flexibility that I am alluding to may broadly map onto this distinction. Experts may tend towards rigidity in their syllabus design, based on what content their research abilities suggest is most important. By contrast, virtuous epistemic authorities may be more flexible, since they have a novice-oriented sensitivity to the content that is most effective at being taken in by learners.
9 None of this is meant to deny that educators should be free to design syllabuses as they see fit, based on their subject-relevant expertise. My point is that educators will be more effective in their role if they’re displaying novice-oriented sensitivities, to borrow Croce’s language (see note 8). The argument for strict principles of academic freedom isn’t that educators can be trusted to always get things right on these fronts. The argument is that allowing other actors – administrators, business leaders, state officials – to interfere with academic judgements about course and syllabus content is likely to make things worse on these fronts. Principles of academic freedom stand guard against this (see Simpson 2020).
10 Is it possible to get information about the inquirer’s interests and temperament, with a view to assisting in their inquiry, but without consulting the inquirer? Presumably yes, at least in some cases. My contention isn’t that effective, epistemically beneficent intervention in someone else’s inquiry is impossible without consultation. My claim is that in general consultative epistemic assistance is more effective than non-consultative epistemic paternalism, in benefiting the inquirer in the attainment of epistemic goods. Thanks to Neil Levy for pressing me on this point.
11 For example, on Fricker’s (2006: 225) widely cited definition, the epistemically autonomous person “takes no one else’s word for anything, but accepts only what she has found out for herself, relying only on her own cognitive faculties and investigative and inferential powers.”
12 Granted, as Dellsén (2020) argues, there are good reasons for aspiring experts in some domain to seek a high degree of independence in their judgments within that domain. Among other things, this has the benefit of making the joint testimony of experts in that domain more reliable than if the experts were to evince a more conventional pattern of deference to their intellectual
peers. However, even experts who are loath to defer to anyone else in their judgements will still, on pain of falling into a state of total informational poverty, need to rely on others when it comes to gathering information.
13 My point here mirrors the insight behind so-called relational theories of autonomy. Some theories of autonomy downplay the ways in which everybody’s capacities for self-governance are formed and exercised in a thoroughly socially embedded set of circumstances. Relational theories try to theorise autonomy in a way that remains consistently attuned to that fact. The inescapability of our social-embeddedness doesn’t mean that self-governance is impossible. Autonomy-talk is a way of marking the important differences between people whose preferences are formed under the yoke of oppressive socialisation, and those whose preferences aren’t distorted by such factors; see e.g. Oshana (1998).
14 I mean to use the term foreign will in the way that Garnett uses it, in his account of what he calls social autonomy. For Garnett, if A would endorse (or at least not reject) B’s will, then B’s will isn’t foreign to A, or to A’s purposes (2015: 101). This concept then plugs into a view of autonomy, for him, on which “part of what it is for one’s life to go well is for one to enjoy a certain kind of independence from the control or manipulation of others,” and hence on which “the extent to which one is subject to foreign wills, one is deficient with respect to an important human value” (Garnett 2015: 99).
15 Godden (2020) argues that epistemic autonomy is about believing in accordance with the norms of belief, and that autonomous inquirers thus have reason to submit to paternalistic interventions in cases where this will help them conform to the norms of belief.
Part of what we learn from Friedman’s work on zetetic norms, however, is that norms of belief and norms of inquiry can push us in different directions, and while both sets of norms aim at the realisation of epistemic goods, it is often unclear which norms prescribe the most efficient route to that goal at a given moment. In such cases it is harder to see what epistemic-norm-conforming activity per se consists in, and hence it’s less clear whether and when an autonomous and epistemically conscientious inquirer should welcome paternalistic intervention.
16 Chapter 1 of Ahlstrom-Vij (2013) provides a compelling explanation of this motivation.
17 Many thanks to Neil Levy, Kirk Lougheed, and Jon Matheson for feedback and constructive criticism on an earlier version of this chapter.
References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. London: Palgrave Macmillan.
Basham, L. (2020). Political epistemic paternalism, democracy, and rule by crisis. In A. Bernal & G. Axtell (Eds.), Epistemic paternalism: Conceptions, justifications, and implications. London: Rowman and Littlefield.
Bullock, E. C. (2018). Knowing and not-knowing for your own good: The limits of epistemic paternalism. Journal of Applied Philosophy, 35(2), 433–447.
Croce, M. (2018). Epistemic paternalism and the service conception of epistemic authority. Metaphilosophy, 49(3), 305–327.
Dellsén, F. (2020). The epistemic value of expert autonomy. Philosophy and Phenomenological Research, 100(2), 344–361.
Freire, P. (2017). Pedagogy of the oppressed (M. B. Ramos, Trans.). London: Penguin Classics. [Originally published 1968]
Fricker, E. (2006). Testimony and epistemic autonomy. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony. Oxford: Oxford University Press.
Friedman, J. (2017). Why suspend judging? Noûs, 51(2), 302–326.
Friedman, J. (2020a). Zetetic epistemology. In B. Reed & A. K. Floweree (Eds.), Towards an expansive epistemology: Norms, actions, and the social sphere. New York: Routledge.
Friedman, J. (2020b). The epistemic and the zetetic. Forthcoming in Philosophical Review.
Garnett, M. (2015). Freedom and indoctrination. Proceedings of the Aristotelian Society, 115, 93–108.
Godden, D. (2020). Epistemic autonomy, epistemic paternalism, and blindspots of reason. In A. Bernal & G. Axtell (Eds.), Epistemic paternalism: Conceptions, justifications, and implications. London: Rowman and Littlefield.
Goldman, A. I. (1991). Epistemic paternalism: Communication control in law and society. The Journal of Philosophy, 88(3), 113–131.
Goldman, A. I. (1999). Knowledge in a social world. Oxford: Oxford University Press.
Goldman, A. I. (2000). Replies to reviews of Knowledge in a Social World. Social Epistemology, 14(4), 317–333.
John, S. (2017). Epistemic trust and the ethics of science communication: Against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75–87.
Lougheed, K., & Matheson, J. (2021). Introduction: Puzzles concerning epistemic autonomy. In K. Lougheed & J. Matheson (Eds.), Epistemic autonomy. London: Routledge.
McKenna, R. (2020). Persuasion and epistemic paternalism. In A. Bernal & G. Axtell (Eds.), Epistemic paternalism: Conceptions, justifications, and implications. London: Rowman and Littlefield.
Oshana, M. A. L. (1998). Personal autonomy and society. Journal of Social Philosophy, 29(1), 81–102.
Simpson, R. M. (2020). The relation between academic freedom and free speech. Ethics, 130(3), 287–319.
Sugden, R. (2008). Why incoherent preferences do not justify paternalism. Constitutional Political Economy, 19(3), 226–248.
6 Persuasion and Intellectual Autonomy

Robin McKenna
6.1 Introduction

In “Democracy, Public Policy, and Lay Assessments of Scientific Testimony,” Elizabeth Anderson (2011) identifies a tension in contemporary democratic societies between the requirements of responsible public policy making and democratic legitimacy. Responsible public policy making in a technologically advanced society should be based on the available scientific evidence. But, to be democratically legitimate, there must be broad (though not universal) acceptance of the policies which are put in place. This, in turn, requires broad acceptance of the science on which these policies are based. But there are many public policy issues where sizeable minorities reject the science on which responsible public policy making might be based. Consider climate change. In many countries, particularly the US, a sizeable minority rejects the science on climate change (Ballew et al. 2019; Tranter and Booth 2015). Further, this minority tends to hold similar political views (pro-free market and anti-regulation). Once politics and science become intermingled in this way, the prospects of broad consensus emerging seem dim, given the difficulty of persuading people to change their minds about things they regard as integral to their political identities. In this chapter, I consider this tension (henceforth “Anderson’s tension”).1 My first aim is to show that the tension is harder to resolve than Anderson supposes, because not all ways of securing public acceptance of the science on which public policy making is (or might be) based are themselves democratically legitimate.2 My second aim is to address the worry that what we can call “science marketing” methods are problematic from a democratic perspective because they infringe on our capacity to make up our own minds about the issues in question.
I argue that, while some sorts of marketing may be problematic, and over-use of any sort of marketing may be problematic, there is no reason to think that judicious use of certain kinds of marketing need infringe on our capacity to make up our own minds. While this may not quite resolve Anderson’s tension, it lays some of the necessary groundwork for resolving it.
Here is the plan. In Section 6.2, I review some empirical research on how people can be persuaded to change their minds, with a strong focus on work on climate change and climate change skepticism. The result of this review is that science marketing methods are often more effective than methods of rational persuasion. In Section 6.3, I discuss why some think that marketing methods are problematic from a democratic perspective. The basic idea is that they are problematic because they infringe on our capacity to make up our own minds – on our intellectual autonomy. In Sections 6.4 and 6.5, I look at two recent papers that argue that nudges (in the sense of Thaler and Sunstein 2008) are problematic because they are incompatible with the development of intellectual autonomy (Meehan 2020; Riley 2017). While nudges aren’t quite the same thing as marketing methods, they share many of the same features, and the “science marketing” methods I discuss in Sections 6.2 and 6.3 are plausibly examples of nudges. I argue that, while Riley and Meehan show that “over-enthusiastic” use of nudging or marketing methods may be incompatible with the development of intellectual autonomy, there is no reason to think that more judicious use of such methods is incompatible with the development of intellectual autonomy. I finish in Section 6.6 by identifying what I take to be the crucial unresolved issue, which is the value of intellectual autonomy itself.
6.2 Tackling Climate Change Skepticism

In this section, I survey the empirical literature on how people can be persuaded to change their minds about scientific issues, with a strong focus on how climate change skeptics can be persuaded to change their minds. Let’s start with some figures. In total, 97% of climate scientists agree that human activity is a major cause of climate change (Cook et al. 2016). But a recent study in the US found the following (Ballew et al. 2019):

• Around seven in ten Americans think climate change is happening and around one in eight Americans think climate change is not happening.
• Around six in ten Americans think that climate change is mostly human caused and three in ten think it is due mostly to natural changes in the environment.
• Just over half of Americans realize that most scientists think climate change is happening. But only about one in five realize how strong the level of consensus among scientists is.

The good news is that seven in ten Americans think climate change is happening. The slightly less good news is that only six in ten think it is mostly human caused. The far less good news is that just over half realize that scientists are almost unanimous. But how good or bad this news is
depends on who thinks these things. Views about climate change (whether it is happening, what is causing it, and whether scientists agree about it) correlate strongly with political views. Ballew et al. (2019) found that, while almost all “liberal Democrats” think that climate change is happening (as do most “moderate Democrats”), less than half of “conservative Republicans” think climate change is happening. Given the breakdown of opinion above, it is reasonable to suppose that even fewer conservative Republicans think that climate change (if it is happening at all) is mostly human caused, or that most scientists think climate change is happening. In the US, climate change therefore poses precisely the problem Anderson identifies. While there is fairly broad acceptance that climate change is happening, there is a sizeable minority, which also happens to be politically powerful, that denies this. The question is: how could we persuade them that they are wrong? There are lots of things you might try. You might think the problem is a lack of scientific understanding, so the solution is more and better science education, perhaps allied with more emphasis on critical thinking in education (Bak 2001; Sturgis and Allum 2004). But there are two broad reasons to be skeptical about the chances of this strategy succeeding. First, as we have seen, attitudes about climate change correlate with political views. There are two possible explanations for this. One is that levels of scientific understanding vary with political views, and conservative Republicans understand less about climate science than liberal Democrats. If this were right, we would expect that, among conservative Republicans, the better informed will be more likely to think that climate change is happening, and that (if it is happening) it is mostly human caused. But this prediction is not borne out by the evidence.
A 2019 Pew Research Center survey found that, while the percentage of liberal Democrats who think that climate change is mostly human caused increases with level of scientific understanding, conservative Republicans who are more scientifically informed are slightly less likely to think that climate change is mostly human caused (Funk and Kennedy 2019). An alternative explanation is that our background political views influence how we process information, including information about scientific developments. We are motivated to find ways of understanding the information we receive that enable us to vindicate rather than reject these background views (Hamilton et al. 2015; Hamilton 2011; Hardisty et al. 2010; Hornsey et al. 2016; Kahan et al. 2011; Lewandowsky and Oberauer 2016; Tranter and Booth 2015). This explanation, which is developed by those who work on “politically motivated reasoning,” can explain why more knowledgeable conservative Republicans are no more likely to accept the science on climate change. They, like all conservative Republicans, are motivated to preserve their basic conviction that there should be limited (or no) regulation on industry, and to do this they need
to reject the science on which policies promoting regulations could be based. Second, everyone underestimates the extent of scientific consensus on climate change, regardless of political affiliation, because there is so much misinformation about climate change in the public sphere (Cook 2017; 2016; Cook et al. 2018). While "consensus messaging" (informing the public that scientists agree on an issue) can be a good way of educating the public (Cook and Lewandowsky 2011; Bolsen et al. 2014; Lewandowsky et al. 2013; van der Linden et al. 2014), consensus messaging about climate change clearly hasn't "cut through" with sections of the population. A likely explanation is that the efficacy of consensus messaging is reduced when there is widespread misinformation, including about whether there is a consensus (Cook 2017; 2016; van der Linden et al. 2017). As van der Linden et al. (2017) put it:

Results indicate that the positive influence of the "consensus message" is largely negated when presented alongside [misinformation]. Thus, in evaluating the efficacy of consensus messaging, scholars should recognize the potent role of misinformation in undermining real-world attempts to convey the scientific consensus. (p. 5)

So the problem is not (just) poor understanding of science and lack of critical thinking ability. These may be part of the problem, but both are compounded by the influence of politically motivated reasoning and by the amount of misinformation in the public domain about an issue like climate change. Because more science education won't necessarily help with either of these things (particularly with tackling the influence of politically motivated reasoning), something else is needed. What can we do? We can turn to the literature on climate science communication for some possible strategies. I will discuss three strategies.
In each case, the strategy is a "marketing method" – it aims to sell a product (climate science) rather than to persuade by (merely) providing relevant information or argumentation. First, we can consider how the issue is framed. How willing people are to accept particular policies to mitigate the effects of climate change, and even how willing they are to say they accept the underlying science, depends on how things are described (Campbell and Kay 2014; Corner et al. 2015; Dahlstrom 2014; Dryzek and Lo 2015; Hardisty et al. 2010; Kahan 2014; MacInnis et al. 2015). Compare two different ways of describing the challenge posed by climate change. You might put it in terms of what we can do to reduce carbon emissions. Or you might put it in terms of what new technologies we might develop to meet the challenge. Perhaps unsurprisingly, people react differently to these two ways of describing the problem. Describing the problem as a technological
problem can make those who are ideologically opposed to any regulations on industry more willing to accept that action is needed to combat climate change (Kahan et al. 2015). Similarly, the term "carbon offsetting" is less off-putting to many than the term "carbon tax" (Hardisty et al. 2010). Second, we can consider who the messenger is. Climate scientists have often taken on this role, which makes sense given that public trust in scientists is high (American Academy of Arts & Sciences 2018; Ipsos MORI 2014). But the evidence suggests that we would be better off if a more diverse (in multiple senses of the word) group of people made a public case for action. Scientists are not the most trusted source on every issue, or necessarily the best people to deliver messages on issues that have become politically contentious (Cvetkovich and Earle 1995; Cvetkovich and Löfstedt 1999; Kahan 2010; Moser and Dilling 2011). Further, our political beliefs influence how we assess expertise in the first place. Put crudely, we would often rather rely on someone we perceive to share our political beliefs and values than on someone with different values from our own, no matter how much of an expert they are (Kahan et al. 2011). While this can be a real problem, it has some upsides. It means that ensuring that a diverse group of people delivers the message will likely be effective. Third, we can consider which communicative strategies we use in the first place. It is common to pursue a "debunking" strategy: take some claim made by climate change skeptics and show why the claim is false. While debunking can work, it is often less effective than you might expect because misconceptions and false beliefs can be hard to dislodge (Lewandowsky et al. 2012; Seifert 2002).
For this reason, many have considered an alternative strategy which is often called “prebunking.” Where debunking is a matter of refuting a belief that has already been accepted, prebunking is a matter of giving people the tools to avoid being taken in by false claims in the first place. While this might not sound so different from the idea that we need to improve critical thinking skills, it is different in that “prebunkers” envisage a more targeted strategy that is based on something called “inoculation theory” (Compton 2013). The thought is that you can inoculate someone against certain forms of misinformation and misleading ways of arguing by exposing them to these forms of misinformation and misleading ways of arguing in controlled environments (Bolsen and Druckman 2015; Cook et al. 2017; Ivanov et al. 2015; Pfau 1995; Pfau and Burgoon 1988; van der Linden et al. 2017). For instance, you might present someone with a common climate skeptic argument along with a refutation of it. Take the claim that human CO2 emissions are tiny in magnitude compared to natural emissions. You would accompany a presentation of this argument with the refutation: human CO2 emissions interfere with the natural carbon cycle, putting it out of balance.
I have discussed three strategies that we have reason to think might be effective in persuading people to change their minds about climate change. Let me finish by highlighting what they have in common. The idea is that we can use these strategies to construct a better epistemic environment – an environment in which it is easier for people to recognize that they hold false beliefs about an issue like climate change, and to form true beliefs instead. But the crucial thing is that someone who utilizes these strategies isn't trying to construct a better epistemic environment just by increasing the amount of knowledge that is "out there" waiting to be discovered. Rather, they are trying to target groups of people with a message, or in a way, that is designed to produce a certain outcome (the abandonment of a false belief and the adoption of a true one). As such, it seems fair to describe these strategies as "marketing methods" – someone utilizing them views (climate) science as a product that needs to be sold to a (sometimes) skeptical public.
6.3 Marketing and Intellectual Autonomy

In this section I will explain why some think that marketing methods are problematic. To do this I will draw on a recent paper by Arthur Beckman (2018), "Political Marketing and Intellectual Autonomy," which lays out the reasons why political marketing is viewed with suspicion. I will argue that what Beckman says about political marketing methods also seems to go for the "science marketing" methods discussed in the previous section. Beckman says that suspicion of political marketing is due to the idea that:

[political] marketing methods diminish the ability of citizens to deliberate autonomously about government policies and processes, questions of government effectiveness, political candidate choices, and social mores, both personally and with others. (p. 24)

There are some complications here. Setting aside the question of whether political marketing is effective, you might think it has some benefits (e.g. it provides people with relevant information). But the basic thought is that political marketing is problematic because it interferes with the capacity of citizens to deliberate autonomously. This is not surprising, given that the aim of political marketing is to shape public opinion in desired directions. If people deliberate autonomously, then you have little control over what the outcome of their deliberations will be. But what is autonomous deliberation, and why does Beckman think that political marketing diminishes the ability of people to engage in it? Beckman, in line with the general trend in discussions of intellectual
autonomy in epistemology, distinguishes between solitary and autonomous deliberation (cf. Adam Carter 2017; Roberts and Wood 2007; Zagzebski 2013). Your deliberations can be informed by social interactions and the information and diverse perspectives they provide and still be autonomous. So, for Beckman, autonomous deliberation is not solitary deliberation but rather deliberation that is free from certain forms of interference such as deception, misinformation, framing, omission of crucial information, agenda-setting, priming, selective provision of information, and distortion of facts. Autonomous deliberation therefore involves the capacity to form opinions and take actions on one's own, without these forms of interference. One can therefore deliberate autonomously, and be intellectually autonomous, while still engaging in extensive social interaction. (You might wonder about the equation of autonomous deliberation with intellectual autonomy – intellectual autonomy is a capacity, whereas autonomous deliberation is an activity. I return to this in Sections 6.4 and 6.5.) Beckman thinks political marketing interferes with autonomous deliberation and intellectual autonomy precisely because it involves a combination of deception, misinformation, framing, omission of crucial information, agenda-setting, priming, selective provision of information, and distortion of facts. The "political marketer" employs methods designed to set political agendas, to frame issues in advantageous ways, and to construct and reinforce desired political attitudes and create desired habits of political information processing. Summing up, he says:

Notwithstanding its limitations, political marketing can provide predictable and significant means of establishing, promoting, reinforcing, and altering attitudes, perceptions, decision-making processes, and behavior.
These capabilities can constrain the intellectual autonomy of citizens, and have demonstrably narrowed its scope. Rationalization of political information, its transmission, and its processing by way of media and marketing communications produces homogenized political cognition and ready-made political belief systems, attitudes, and identities – all of which may facilitate marketing strategies. (p. 40)

Beckman's paper is about political marketing, but his argument can also be applied to the "science marketing" methods I discussed in the previous section. Like political marketing, science marketing employs processes that are designed to produce desired outcomes. Like the political marketer, the "science marketer" wants to shape public opinion. They want to secure acceptance of certain scientific claims and to alter attitudes and perceptions of the relevant issues.
Similarly, a case can be made for thinking that science marketing interferes with autonomous deliberation and intellectual autonomy. We saw that Beckman thinks that political marketing interferes with autonomous deliberation and intellectual autonomy because of the sorts of methods the political marketer employs. They try to frame issues in advantageous ways and to construct and reinforce desired attitudes and ways of processing information. But the science marketer also employs methods that are designed to do these things. Of the three strategies discussed in the previous section, the first two (framing and selecting an appropriate messenger) are designed to construct and reinforce certain attitudes and ways of processing information and to frame issues in certain ways. The third strategy (prebunking) is also designed to construct certain attitudes and ways of processing information. In short, it looks like, if political marketing is problematic because it infringes on autonomous deliberation and intellectual autonomy, then science marketing is problematic for the same reason. My aim in the rest of the chapter is to go some way towards arguing that science marketing is not problematic (at least for this reason). Whether this conclusion also applies to political marketing is an open question. To do this we need to look more carefully at autonomous deliberation and intellectual autonomy. Beckman equates the claim that political marketing interferes with autonomous deliberation with the claim that it interferes with intellectual autonomy. In the next two sections, I try to drive a wedge between these two claims by considering two papers that critique forms of persuasion that have a lot in common with political and science marketing on the grounds that they infringe on our intellectual autonomy.
6.4 Riley on Nudging and Epistemic Injustice

In his "The Beneficent Nudge Programme and Epistemic Injustice," Evan Riley (2017) argues that the beneficent nudge program, as defended by Richard Thaler and Cass Sunstein in their book Nudge and several articles (Thaler and Sunstein 2008; 2003; Sunstein 2015a, 2015b), is problematic on both ethical and epistemological grounds. His basic claim is that the program has the potential to cause a distinctive kind of epistemic injustice. In this section I will briefly outline his argument, before drawing out what I think it shows – and does not show – about autonomous deliberation and intellectual autonomy. We can start with what nudging is, and how it is related to the science and political marketing strategies I discussed in the previous section. As Thaler and Sunstein define it, a nudge is "any aspect of the choice architecture that alters behavior in a predictable way without forbidding any options" (2008, 6). For example, a doctor who needs to tell a patient about the chances of success of a potentially lifesaving, but risky, operation has a choice to make. They can either say what percentage of
patients who have the operation are alive in x years' time, or they can say what percentage of patients who have it are dead in x years' time. While these are statistically equivalent, there is evidence that focusing on how many patients live makes it more likely that the patient will opt to have the operation than focusing on how many patients die (McNeil et al. 1982). A doctor who knows this has the power to alter patient behavior in a predictable way, but they do not force patients to choose one way rather than the other. The crucial aspect of nudging is that, while it involves interfering with someone's deliberations and decision-making with the aim of making them better off, it avoids any sort of coercion. Why does Riley think that nudges are problematic? As Riley puts it:

[Thaler and Sunstein] do not simply hold that nudging is in some circumstances morally permitted and practically called for. Rather, they favor and would have us foster the broad adoption of the beneficent nudge, to be deployed as a general purpose tool for good, across the institutional milieu of contemporary social life. In a recent defense of implementing this – which I call the beneficent nudge program (BNP) – Sunstein has gone so far as to christen and defend the "First Law of behaviorally informed regulation: In the face of behavioral market failures, [beneficent] nudges are usually the best response, at least when there is no harm to others."3 (p. 598)

Riley's target is not nudging per se but the "beneficent nudge program." The beneficent nudge program combines two elements. First, nudges should be "deployed as a general purpose tool for good" – for Thaler and Sunstein, this means that they should be deployed to help people make the decisions they would make if they were better at figuring out what was in their long-term interest. Second, nudges should be employed widely (cf. Sunstein's First Law).
We can now look at Riley's objection to the beneficent nudge program. His basic point is that, while nudges don't exactly bypass the critical faculties of the "nudgee" (the person who is nudged), they also don't engage them fully. When someone is nudged, they may be made aware of important new information (in the example above, the patient is made aware of some relevant statistics). They may also engage in rational inference (in the example above, they might compare the survival statistics for the operation with what they know about their chances of survival given the disease they have). But the crucial point is that their reflective critical capacities are not fully engaged. Thus, while being nudged occasionally may not be much of a problem (you will have plenty of opportunities to fully engage your critical capacities), it is a problem if you are systematically (i.e. regularly) nudged. This will prevent you from developing the
capacity to "reason critically, energetically and otherwise well" (p. 604). Riley thinks that this may even constitute a form of epistemic injustice:

denying or neglecting to provide people the support, opportunities, or means necessary to develop those capacities [e.g. the capacity to reason critically], or making it relatively more difficult to develop and exercise those capacities, where this lack could be supplied or ameliorated without duly weighty sacrifice or some other comparatively serious consideration, is unjust. In addition, the character of this general kind of wrong cannot be made fully explicit without reference to the epistemic nature of the victim. Thus, it counts as an epistemic injustice. Call it reflective incapacitational injustice. (p. 605)

For my purposes, we don't need to worry about the ins and outs of what Riley means by a "reflective incapacitational injustice." All we need is the claim that, while being subject to the occasional nudge may not do much harm, being subjected to a systematic program of nudging (whether beneficent or not) deprives one of the opportunity to develop the capacity to reason critically. This completes my overview of Riley's argument. I now want to relate it more directly to the science marketing strategies I discussed in the previous sections, and to autonomous deliberation and intellectual autonomy. I will start with two preliminaries. First, we can view the science marketing strategies discussed in the previous sections as nudges. If you frame the issue of climate change in a certain way, select a messenger because you think they will be listened to, or engage in prebunking, then you are altering the epistemic environment in such a way that it is more likely that a certain outcome is achieved, but without any element of coercion.
It is still possible to reject the new framing, to ignore the new messenger, or refuse to be “prebunked.” Second, while Riley focuses on engaging critical faculties and the capacity to think critically rather than autonomous deliberation and intellectual autonomy, these things are clearly closely connected. Roughly speaking, engaging your critical faculties is an essential part of autonomous deliberation, and the capacity to think critically is an essential part of being intellectually autonomous. I will leave it open just how much more there is to autonomous deliberation and intellectual autonomy than this.4 The crucial question is whether the science marketing strategies I discussed earlier run into the same problems Riley thinks the beneficent nudge program runs into. Riley’s discussion points to a reason for thinking that science marketing need not run into the same problems. Recall that Riley distinguishes between preventing someone from fully engaging
their reflective critical capacities in a particular case and preventing them from developing the capacity to reason critically. In nudging someone, you may prevent them from fully engaging their reflective critical capacities at the point at which you nudge them. But clearly this need not prevent them from developing the capacity to reason critically. Riley's point is that over-use of nudging as a strategy has this problematic result, not that every nudge does. Put in terms of autonomous deliberation and intellectual autonomy, Riley's point is that interfering with someone's deliberations in a particular case is not the same thing as interfering with their intellectual autonomy. This is because intellectual autonomy is a capacity and preventing someone from manifesting a capacity at a particular point in time is not the same thing as preventing them from having or developing the capacity. If this is right, then there is some scope to argue that science marketing need not stop anyone from becoming intellectually autonomous. But can anything else be said here? Two things. First, while the proponent of the science marketing methods I have discussed is putting forward a program, it is a far more limited program than Thaler and Sunstein's beneficent nudge program. Thaler and Sunstein envisage using nudges in all areas where humans seem to do a bad job of making choices and decisions that serve what many would regard as their interests. The science marketer proposes nudging those who are otherwise unwilling to accept the science on climate change and similar issues to change their minds. To the extent that Riley's objection to the beneficent nudge program is simply based on how extensive it is, it doesn't carry over to science marketing. Second, the aims of those who want to (better) market science differ from the aims of commercial (and perhaps political) marketers.
This remark by Dan Kahan (a proponent of science marketing) is suggestive:

It would not be a gross simplification to say that science needs better marketing. Unlike commercial advertising, however, the goal of these techniques is not to induce public acceptance of any particular conclusion, but rather to create an environment for the public's open-minded, unbiased consideration of the best available scientific information. (2010, p. 297)

Kahan is saying that the aim of science marketing is to facilitate reasoning critically about issues like climate change. If this is right, then we can make a stronger point. Perhaps surprisingly, it might turn out that the best way to enable people to become intellectually autonomous can sometimes be to infringe on their deliberations about particular issues. Clearly, this isn't going to apply across the board. It is hard to see how
commercial marketing makes anyone more intellectually autonomous, and perhaps much the same can be said about political marketing. But this might be because of the aims of commercial and political marketers, rather than because of anything inherent to marketing itself.
6.5 Meehan on Nudging and Epistemic Vices

I have argued that, while Riley may succeed in showing that the beneficent nudge program infringes on intellectual autonomy, his argument actually points to some reasons for thinking that science marketing need not infringe on intellectual autonomy. In "Epistemic Vices and Epistemic Nudging: A Solution?" Daniella Meehan (2020) makes a similar point to Riley, but in a different way. It will therefore be worth seeing whether her paper supports the argument of the previous section. As it will turn out, while Meehan puts her objection to nudging in different terms to Riley, at best she succeeds in establishing the same conclusion. Where Riley argues that the beneficent nudge program is problematic on the grounds that it commits a distinctive sort of epistemic injustice, Meehan argues that the program is, at best, minimally effective in combatting our various epistemic vices and, at worst, leads to the creation of further epistemic vices. To provide some focus, we can start with two of her core cases, one of which is fictitious and the other of which is drawn from the real world. In the fictitious example, Harry forms various false beliefs about politics and related matters because he makes extensive use of unreliable news sources and, further, is unwilling to look at what are clearly more reliable sources. Harry, in short, is closed-minded, and perhaps even closed-minded about his closed-mindedness.5 Meehan asks: what can we do about Harry? We could sit him down and calmly lay out the reasons why he is being epistemically vicious. Or we could try nudging. We could, for example, offer him a discounted subscription to a more reliable paper, leave unbiased news programs on TV, and so on. These are nudges because, while they are interventions that are intended to improve Harry's epistemic situation, his freedom of choice is not taken away.
For her real-world example, Meehan looks at Michigan’s use of the “inconvenience model” for tackling falling rates of child vaccination. The basic idea behind the inconvenience model is to put various barriers in place to securing an exemption from the requirement to vaccinate children before sending them to school and daycare. In Michigan, parents were required to attend educational sessions about vaccines at local public health centers, and to use an official state form to apply for exemptions. This had a demonstrable impact on exemption rates, which went down by 39% state-wide and by 60% in the Detroit area (Higgins 2016). Meehan thinks this is a good example of a nudge because, while the intent behind these policies was to produce a particular outcome (i.e.
a reduction in the exemption rate), parents were still free to not vaccinate their children. Meehan argues for two main claims. Her first claim is that, even if nudging is successful in tackling epistemic vices in the short term, it is ineffective in the long term. Here is her basic argument ("EN" refers to "epistemic nudging," which as far as I can tell is just nudging):

When EN claims to have successfully mitigated a vice, like the examples presented earlier, what has really happened is that EN has merely masked the epistemic vice at hand. EN can only mask epistemic vices as the deep nature of vices remains present. EN does not change the vice in any way, just like the bubble wrap did not change the fragility of the vase, but only masks it, and when EN practices are not employed the vice is still present, just like how the fragility of the vase still remains when the bubble-wrap is removed. (p. 253)

So, in Meehan's first example, even if your attempts to get Harry to consult more reliable news sources are successful in the short term, they are merely masking his closed-mindedness, which will inevitably resurface in the future. More generally, Meehan's thought is that it is easier to change behavior than to change underlying character traits and, insofar as nudging seems more focused on changing behavior than on underlying character traits, it is easy to see why it would merely mask underlying vices. In her second example, it is a little harder to evaluate her claim because it really depends on why the "inconvenience model" was so successful in Michigan. Did it change anyone's mind, or did it work just because it made avoiding vaccination so inconvenient? If it just made avoiding vaccination very inconvenient, it becomes unclear whether it even qualifies as a nudge.6 I think Meehan's first claim is plausible, but for my purposes I don't need to dispute it.
Returning to the problem I started with (Anderson’s tension), what we need is a way of securing democratic legitimacy for science-based public policy. While tackling the epistemic vices underlying climate change skepticism and other forms of science denialism would be one way of securing this legitimacy, it isn’t the only way of doing it. From the standpoint of democratic legitimacy, more short-term solutions that focus on changing what people are and aren’t willing to accept are perfectly viable and perhaps more realistic. That said, I want to make two points in response to Meehan’s argument. The first is that, as Quassim Cassam emphasizes in his recent book Vices of the Mind, it can be incredibly difficult to get started with tackling our epistemic vices. If – as Harry is – you are closed-minded, then your closed-mindedness can be a barrier to even recognizing that you are closed-minded (see note 5). Partly for this reason, it might be best to “start
small" and try to change Harry's behavior before tackling his underlying character traits. This leads to the second point, which is that it is a common idea in the virtue ethics and epistemology literature that acquiring a virtue requires habituation. Part of becoming open-minded is behaving in open-minded ways. While genuine open-mindedness requires more than behaving in an open-minded way, it might be argued that behaving in an open-minded way can help you become genuinely open-minded. We can now move on to Meehan's second claim, which is that nudging can foster new epistemic vices, such as the vice of epistemic laziness, and more generally can hinder the development of our epistemic capacities. Meehan is getting at a similar idea to Riley:

EN does not merely accidentally fail to engage the critical deliberative faculties of their targets, but purposefully seeks to bypass reflective deliberation entirely. Take the educational tool … where teachers nudged students towards effective inquiry by teaching incomplete theories to facilitate a better understanding of their complexities. As Riley would note, this form of EN seeks to bypass a genuine open reflective deliberation, meaning for one to nudge successfully in this case (and many others) one must at the time of the nudge, not invite, seek or start any critical reflection or deliberation. (p. 254)

While there are some differences between Riley and Meehan, they agree that, in failing to engage the nudgee's critical deliberative faculties, the nudger may well make the nudgee unwilling (or even unable) to inquire for themselves and limit their capacity for critical reflection and deliberation. So, much like Riley, Meehan is pointing to the fact that nudging can interfere with our intellectual autonomy. Meehan links her criticism of the nudging program with some criticisms of Thaler and Sunstein developed by Hausman and Welch (2010).
For Hausman and Welch, nudges fail to respect the nudgee’s autonomy, and they say that we “should be concerned about the risk that exploiting decision-making foibles will ultimately diminish people’s autonomous decision-making capacities” (p. 135). As we saw in the previous section, Riley doesn’t think that “one-off” nudges need interfere with anyone’s intellectual autonomy. His thought is that subjecting someone to a systematic program of nudging will make it harder for them to become intellectually autonomous. Meehan seems to be making the stronger claim that nudging itself interferes with intellectual autonomy. But, at least as far as I can see, there is nothing in her argument to support this stronger conclusion. So, while Meehan uses the language of epistemic vices rather than the language of epistemic injustice, her argument shows no more than Riley’s does. Relating this back to science marketing, we are left with the same conclusion as in the previous section: because the science marketer is proposing a far more
limited program than Thaler and Sunstein's, it is (far) less clear that what they are proposing need infringe on anyone's intellectual autonomy. The point, again, is that preventing someone from deliberating autonomously in a particular case need not prevent them from becoming intellectually autonomous. As I suggested at the end of the previous section, it may help them become more intellectually autonomous.
6.6 Conclusion

Let’s take stock. I started with a tension between the demands of responsible public policy making and democratic legitimacy. Responsible public policy making in a technologically advanced society should be based on the available scientific evidence. But, for policy making to be democratically legitimate, there must be broad acceptance of the policies that are put into place, which seems to require broad acceptance of the science on which the policies are based. The tension arises because there is often not broad (enough) acceptance of the science on which public policy making might be based. I then did two things. First, I deepened the tension by showing that many of the most effective methods for securing broad acceptance of the science on climate change seem to infringe on our intellectual autonomy. If this appearance is accurate, then they are not themselves democratically legitimate. Second, I made a start at resolving this deeper tension by arguing that, despite appearances, science marketing strategies need not infringe on our intellectual autonomy. There remain (at least) two open questions. The first is whether I have shown that the science marketing strategies I have discussed can be used to secure the broad acceptance of climate change in a way that is democratically legitimate. You might object that it isn’t enough to show that they need not infringe on our intellectual autonomy. They may be democratically illegitimate for other reasons, not least because, if they are going to work, we can’t seek any sort of consent before using them. I don’t have a particularly firm view on this issue, so I will set it aside. The second question concerns intellectual autonomy. Throughout I have taken it for granted that intellectual autonomy is important and valuable, and my task has been to show that science marketing need not infringe on it. But you might object that there is something problematic about intellectual autonomy itself.
In the literature it is a familiar point that certain conceptions of intellectual autonomy are problematic because they conceive of it in such a way that it is simply unattainable, or in such a way that it isn’t clear why we should value it. Take, for instance, a view of intellectual autonomy on which the intellectually autonomous agent is maximally self-reliant. Even setting aside whether a human being can be maximally self-reliant, it seems clear that anyone who strives to be maximally self-reliant is going to miss out on a lot of knowledge (cf.
Adam Carter 2017). Given this, it is unclear why we should even want to be maximally self-reliant. But those who criticize conceptions of intellectual autonomy on these grounds typically propose a way of conceiving of it on which it is both (often) attainable and valuable.7 It is worth considering whether, even if intellectual autonomy is attainable, it is something that we ought to value quite as much as many of us seem to (cf. the authors I have discussed in this chapter). Intellectual autonomy may be a good thing, but so is having true beliefs, and – especially given the empirical work on human psychology I have discussed in this chapter – it is hard to see why striving for intellectual autonomy should generally be the best (or even a good) way of getting true beliefs.8
Notes

1 Anderson is not the first person to note the tension (it is central to Kitcher 2001), but she addresses it in the same way I do here – from the perspective of an applied social epistemology.
2 Anderson’s own solution to the tension seems to ignore this. She proposes what she calls in Anderson (2006) an “institutional epistemology”: an epistemology that looks at how social institutions (like science) can do a better job of producing and disseminating knowledge. On the dissemination side, Anderson draws on the empirical literature on science communication that I discuss in Section 6.2. But she doesn’t address whether the methods recommended in this literature are themselves democratically legitimate. As I argue in Section 6.3, you might think they aren’t because they seem to infringe on our intellectual autonomy.
3 Riley is quoting from Sunstein (2015b).
4 On some views (e.g. Zagzebski 2013) they seem to come to almost the same thing.
5 Closed-mindedness is a good example of what Quassim Cassam (2019) calls a “stealthy vice” – a vice that “blocks” its own detection, in that those who have it are unwilling to recognize that they have it because of the very vice in question.
6 See Navin and Largent (2017) for discussion.
7 While their accounts of intellectual autonomy differ, Adam Carter (2017), Roberts and Wood (2007) and Zagzebski (2013) all fit this characterization.
8 Thanks to Jon Matheson and Kirk Lougheed for comments on an earlier version of this chapter.
References

Adam Carter, J. (2017). Intellectual autonomy, epistemic dependence and cognitive enhancement. Synthese, 197, 1–25.
American Academy of Arts & Sciences. (2018). Perceptions of science in America. www.publicfaceofscience.org/.
Anderson, E. (2006). The epistemology of democracy. Episteme, 3(1), 8–22.
Anderson, E. (2011). Democracy, public policy, and lay assessments of scientific testimony. Episteme, 8(2), 144–164.
Bak, H.-J. (2001). Education and public attitudes toward science: Implications for the ‘Deficit Model’ of education and support for science and technology. Social Science Quarterly, 82(4), 779–795.
Ballew, M. T., Leiserowitz, A., Roser-Renouf, C., Rosenthal, S. A., Kotcher, J. E., Marlon, J. R., Lyon, E., Goldberg, M. H., & Maibach, E. H. (2019). Climate change in the American mind: Data, tools, and trends. Environment: Science and Policy for Sustainable Development, 61(3), 4–18.
Beckman, A. (2018). Political marketing and intellectual autonomy. Journal of Political Philosophy, 26(1), 24–46.
Bolsen, T., Leeper, T. J., & Shapiro, M. A. (2014). Doing what others do: Norms, science, and collective action on global warming. American Politics Research, 42(1), 65–89.
Bolsen, T., & Druckman, J. N. (2015). Counteracting the politicization of science. Journal of Communication, 65(5), 745–769.
Campbell, T. H., & Kay, A. C. (2014). Solution aversion: On the relation between ideology and motivated disbelief. Journal of Personality and Social Psychology, 107(5), 809–824.
Cassam, Q. (2019). Vices of the mind: From the intellectual to the political. Oxford: Oxford University Press.
Compton, J. (2013). Inoculation theory. In J. P. Dillard & L. Shen (Eds.), The Sage handbook of persuasion: Developments in theory and practice (pp. 220–236). Thousand Oaks, CA: Sage Publications.
Cook, J. (2016). Countering climate science denial and communicating scientific consensus. Oxford Encyclopedia of Climate Change Communication. https://doi.org/10.1093/acrefore/9780190228620.013.314.
Cook, J. (2017). Understanding and countering climate science denial. Journal and Proceedings of the Royal Society of New South Wales, 150(465/466), 207–219.
Cook, J., & Lewandowsky, S. (2011). The debunking handbook. St. Lucia, Australia: University of Queensland. http://sks.to/debunk.
Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017).
Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One, 12(5), e0175799.
Cook, J., van der Linden, S., Maibach, E. H., & Lewandowsky, S. (2018). The consensus handbook. www.climatechangecommunication.org/all/consensus-handbook/.
Cook, J., Oreskes, N., Doran, P. T., Anderegg, W. R. L., Verheggen, B., Maibach, E. H., Carlton, J. S., Lewandowsky, S., Skuce, A. G., & Green, S. A. (2016). Consensus on consensus: A synthesis of consensus estimates on human-caused global warming. Environmental Research Letters, 11(4), 048002.
Corner, A., Lewandowsky, S., Phillips, M., & Roberts, O. (2015). The uncertainty handbook. Bristol: University of Bristol.
Cvetkovich, G., & Earle, T. (1995). Social trust: Toward a cosmopolitan society. Westport, CT: Praeger.
Cvetkovich, G., & Löfstedt, R. (Eds.). (1999). Social trust and the management of risk. Abingdon: Earthscan.
Dahlstrom, M. F. (2014). Using narratives and storytelling to communicate science with nonexpert audiences. Proceedings of the National Academy of Sciences, 111(Supplement 4), 13614–13620.
Dryzek, J. S., & Lo, A. Y. (2015). Reason and rhetoric in climate communication. Environmental Politics, 24(1), 1–16.
Funk, C., & Kennedy, B. (2019). How Americans see climate change and the environment in 7 charts. Pew Research Center. www.pewresearch.org/fact-tank/2020/04/21/how-americans-see-climate-change-and-the-environment-in-7-charts/.
Hamilton, L. C. (2011). Education, politics and opinions about climate change: Evidence for interaction effects. Climatic Change, 104(2), 231–242.
Hamilton, L. C., Hartter, J., Lemcke-Stampone, M., Moore, D. W., & Safford, T. G. (2015). Tracking public beliefs about anthropogenic climate change. PLoS One, 10(9), e0138208.
Hardisty, D. J., Johnson, E. J., & Weber, E. U. (2010). A dirty word or a dirty world? Attribute framing, political affiliation, and query theory. Psychological Science, 21(1), 86–92.
Hausman, D. M., & Welch, B. (2010). Debate: To nudge or not to nudge. Journal of Political Philosophy, 18(1), 123–136.
Higgins, L. (2016). More Michigan parents willing to vaccinate kids. Detroit Free Press. https://eu.freep.com/story/news/education/2016/01/28/immunization-waivers-plummet-40-michigan/79427752/.
Hornsey, M. J., Harris, E. A., Bain, P. G., & Fielding, K. S. (2016). Meta-analyses of the determinants and outcomes of belief in climate change. Nature Climate Change, 6(6), 622–626.
Ipsos MORI. (2014). Public attitudes to science 2014. www.britishscienceassociation.org/public-attitudes-to-science-survey.
Ivanov, B., Sims, J. D., Compton, J., Miller, C. H., Parker, K. A., Parker, J. L., Harrison, K. J., & Averbeck, J. M. (2015). The general content of postinoculation talk: Recalled issue-specific conversations following inoculation treatments. Western Journal of Communication, 79(2), 218–238.
Kahan, D. (2010). Fixing the communications failure. Nature, 463, 296–297.
Kahan, D. (2014). Making climate-science communication evidence-based – All the way down. In M. Boykoff & D.
Crow (Eds.), Culture, politics and climate change (pp. 203–220). New York: Routledge.
Kahan, D., Jenkins-Smith, H., & Braman, D. (2011). Cultural cognition of scientific consensus. Journal of Risk Research, 14(2), 147–174.
Kahan, D., Jenkins-Smith, H. C., Tarantola, T., Silva, C. L., & Braman, D. (2015). Geoengineering and climate change polarization: Testing a two-channel model of science communication. Annals of the American Academy of Political and Social Science, 658, 193–222.
Kitcher, P. (2001). Science, truth, and democracy. Oxford: Oxford University Press.
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131.
Lewandowsky, S., Gignac, G. E., & Vaughan, S. (2013). The pivotal role of perceived scientific consensus in acceptance of science. Nature Climate Change, 3(4), 399–404.
Lewandowsky, S., & Oberauer, K. (2016). Motivated rejection of science. Current Directions in Psychological Science, 25(4), 217–222.
van der Linden, S., Leiserowitz, A., Feinberg, G. D., & Maibach, E. (2014). How to communicate the scientific consensus on climate change: Plain facts, pie charts or metaphors? Climatic Change, 126(1–2), 255–262.
van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1(2), 1600008.
MacInnis, B., Krosnick, J. A., Abeles, A., Caldwell, M. R., Prahler, E., & Dunne, D. D. (2015). The American public’s preference for preparation for the possible effects of global warming: Impact of communication strategies. Climatic Change, 128(1–2), 17–33.
McNeil, B. J., Pauker, S. G., Sox, H. C., & Tversky, A. (1982). On the elicitation of preferences for alternative therapies. The New England Journal of Medicine, 306(21), 1259–1262.
Meehan, D. (2020). Epistemic vice and epistemic nudging: A solution. In G. Axtell & A. Bernal (Eds.), Epistemic paternalism: Conceptions, justifications and implications (pp. 247–259). London: Rowman & Littlefield.
Moser, S. C., & Dilling, L. (2011). Communicating climate change: Closing the science-action gap. In J. S. Dryzek, R. B. Norgaard, & D. Schlosberg (Eds.), The Oxford handbook of climate change and society (pp. 161–174). Oxford: Oxford University Press.
Navin, M. C., & Largent, M. A. (2017). Improving nonmedical vaccine exemption policies: Three case studies. Public Health Ethics, 10(3), 225–234.
Pfau, M. (1995). Designing messages for behavioral inoculation. In E. H. Maibach & R. L. Parrott (Eds.), Designing health messages: Approaches from communication theory and public health practice (pp. 99–113). Thousand Oaks, CA: Sage Publications.
Pfau, M., & Burgoon, M. (1988). Inoculation in political campaign communication. Human Communication Research, 15(1), 91–111.
Riley, E. (2017). The beneficent nudge program and epistemic injustice. Ethical Theory and Moral Practice, 20(3), 597–616.
Roberts, R. C., & Wood, W. J. (2007). Intellectual virtues: An essay in regulative epistemology. Oxford: Clarendon Press.
Seifert, C. M. (2002). The continued influence of misinformation in memory: What makes a correction effective? Psychology of Learning and Motivation, 41, 265–292.
Sturgis, P., & Allum, N. (2004). Science in society: Re-evaluating the deficit model of public attitudes. Public Understanding of Science, 13(1), 55–74.
Sunstein, C. R. (2015a). Nudging and choice architecture: Ethical considerations. SSRN. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2551264.
Sunstein, C. R. (2015b). Why nudge? New Haven: Yale University Press.
Thaler, R. H., & Sunstein, C. R. (2003). Libertarian paternalism. The American Economic Review, 93(2), 175–179.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven: Yale University Press.
Tranter, B., & Booth, K. (2015). Scepticism in a changing climate: A cross-national study. Global Environmental Change, 33, 154–164.
Zagzebski, L. (2013). Intellectual autonomy. Philosophical Issues, 23(1), 244–261.
7 What’s Epistemic about Epistemic Paternalism?

Elizabeth Jackson
7.1 Introduction

Paternalism is a familiar part of our lives – consider a professor who enforces a no-technology policy for her students, an adult pulled over for not wearing their seatbelt, or a spouse who hides cake from their partner who just started a new diet. Paternalism is the practice of limiting the free choices of agents, without their consent, for the sake of promoting their best interests (see Mill, 1869; Dworkin, 2010; Grill & Hanna, 2018). There are many strands of paternalism; this chapter focuses on one in particular: epistemic paternalism.1 Roughly, one might think about the distinction this way: regular paternalism aims at (or has the final goal of) improving another’s decisions or actions; epistemic paternalism aims at improving another’s beliefs. More precisely, epistemic paternalism involves interfering with agents, without their consent, for their own epistemic good – e.g. to promote their true beliefs, knowledge, etc. The aim of this chapter is twofold: (i) to critically examine the concept of epistemic paternalism and (ii) to explore normative questions one might ask about it. In Section 7.2, I critically examine several definitions of epistemic paternalism. I argue that many existing definitions are either too broad or too narrow, and I suggest some ways these definitions might be improved. In Section 7.3, I contrast epistemic and general paternalism, and argue that it’s difficult to see what makes epistemic paternalism an epistemic phenomenon at all. In Section 7.4, I turn to normative questions about epistemic paternalism. I examine different perspectives from which we might evaluate epistemic paternalism, and discuss the literature’s assumptions of epistemic consequentialism and veritism. I close in Section 7.5 by comparing and contrasting epistemic paternalism with other phenomena in social epistemology, such as disagreement or testimony.
I argue that epistemic paternalism is a uniquely social phenomenon, in a way that e.g. disagreement and testimony are not. The aim of this chapter is largely clarificatory, rather than an attempt to argue for a single controversial thesis. However, we will address head-on a number of questions at the forefront of the epistemic paternalism
literature that are normally overlooked or quickly brushed aside. As we go, I will flag these questions. Some I will answer or begin to answer, and others I will leave open. Either way, the literature on epistemic paternalism should pay more attention to these questions and eventually address them. Doing so will crucially aid us in answering questions about epistemic paternalism, including questions about whether and how it might be justified.
7.2 Defining Epistemic Paternalism

This section critically examines several definitions of epistemic paternalism in the literature. Goldman (1991) and Lougheed (2021) define epistemic paternalism quite narrowly, focusing on withholding evidence for the sake of promoting true beliefs. Lougheed (2021, p. 261), following Goldman (1991, p. 114), provides the following definition:

If agent X is going to make a doxastic decision concerning question Q, and agent Y has control over the evidence that is provided to X, then, there are instances when Y need not make available to X all of the evidence relevant to Q if doing so will make X more likely to believe the truth about Q.

This suggests the following:

Definition 1: Epistemic paternalism =df (i) withholding evidence from someone, (ii) without their consent, (iii) to make it more likely that they believe truths (or avoid errors).

Taken as necessary and jointly sufficient conditions for epistemic paternalism, (i)–(iii) yield quite a narrow definition. This isn’t necessarily a criticism of Goldman or Lougheed; they have the right to examine and/or attempt to justify a narrower phenomenon. However, insofar as our goal is to capture the class of cases that we would naturally classify as epistemic paternalism, both (i) and (iii) are too narrow. First, note that condition (iii) seems to assume veritism, the view that believing truth (and avoiding error) is the only final epistemic good. While this is a popular view of epistemic value, it is, nonetheless, controversial (see DePaul, 2001; Carballo, 2018). We can make the definition more ecumenical by revising it in the following way:

Definition 2: Epistemic paternalism =df (i) withholding evidence from someone, (ii) without their consent, (iii) for their own epistemic good.

This definition, unlike the first, is neutral on epistemic axiology – it is consistent with veritism, but also leaves open whether you might engage
in epistemic paternalism to promote goods other than true belief. For example, you might engage in epistemic paternalism to provide another with new, epistemically justified beliefs or knowledge, to increase the level of justification of their already-existing beliefs, or to turn their beliefs into knowledge. I don’t see a reason to rule these out as kinds of epistemic paternalism without argument. Veritism is a substantial assumption, and this second definition enables us to avoid assuming it without ruling it out. I will spend more time on condition (i). Definition 2 claims epistemic paternalism necessarily involves a particular kind of interference: withholding evidence. The problem with this condition is that there are a number of cases of epistemic paternalism that aren’t a matter of withholding evidence from someone. Consider providing evidence one wouldn’t have otherwise had or considered. One might fail to consent to have information that spoils a movie or show, and even express a desire not to have the information (at least at that time, via testimony), but another might give them the information anyway. While undesired, this nonetheless constitutes an epistemic improvement (e.g. more true beliefs/rational beliefs/knowledge) and thus counts as epistemic paternalism. Bullock (2016, 2018) discusses a more serious example: cases of a patient’s right not to know about her medical condition if she so chooses. In these cases and others, one may engage in epistemic paternalism by providing evidence. A second kind of epistemic paternalism involves interfering with the way another processes or weighs evidence. For example, when teaching, I might present my class with two philosophical theories, but then strongly emphasize simplicity while purposefully leaving out discussion of the value of explanatory power – making it much more likely they believe the simpler one.
I could similarly push skepticism, causing them to value avoiding falsehoods more than they value getting at truths. This could cause them to withhold belief in many propositions, but neglect the epistemic value of true belief. Generally, one can engage in epistemic paternalism by influencing another’s epistemology – which affects how they process evidence – without changing their first-order evidence.2 Another example of epistemic paternalism involves controlling the order in which someone receives pieces of evidence. Even if the order of evidence wouldn’t matter for ideally rational agents, empirical results show that the order in which a normal human reasoner receives different pieces of evidence often affects their conclusion (i.e. the ordering effect).3 For example, when patients are choosing between treatment options, the order in which information is presented may influence their decisions. Bansback et al. (2014) make a case for this and argue that patient decisions can be improved by presenting the most important information first. Thus, one might interfere with inquiry by simply influencing the order in which one receives pieces of evidence.
Other examples of epistemic paternalism include deception and coercive measures, both discussed by Bullock (2018). An example of deception would be teaching a false theory to ultimately facilitate understanding of a more complex true theory, such as teaching Newtonian physics before quantum mechanics (2018, p. 434). One might engage in coercive epistemic paternalism by threatening a consequence if someone doesn’t form a certain belief. Even if one cannot believe directly based on a threat, they could take action to try to cultivate the belief (2018, p. 435). The general lesson is this: epistemic paternalism need not merely be about withholding evidence, but can involve a number of other epistemic practices. If we understand ‘inquiry’ broadly, to include both evidence-gathering and belief-forming practices, then these practices might all be classified as instances of interfering with another’s inquiry. Given this, we can modify our definition as follows:

Definition 3: Epistemic paternalism =df (i) interfering with someone’s inquiry, (ii) without their consent, (iii) for their own epistemic good.

This definition parallels the one found in Ahlstrom-Vij (2013a, p. 51) and Bullock (2018, p. 434). Note that both Ahlstrom-Vij and Bullock have a non-consultation condition rather than a non-consent condition. Bullock (2018, p. 434) explains that an epistemically paternalistic practice “does not consult those interfered with on the issue of whether they should be interfered with in the relevant manner.” This is too weak. Suppose I consult with you about some interference I’m considering imposing on you, and you say you don’t want me to interfere. I choose to interfere anyway (e.g. maybe I’m threatening you to form a certain belief). This seems like epistemic paternalism, even though I consulted with you. Thus, a non-consent condition seems more plausible than a non-consultation condition.
Thus, definition 3 is about consent, rather than consultation. While I have even utilized definition 3 in my own previous work – Jackson (2020, p. 201) – I now think it may need additional modification. To see why, suppose you decided to write a new book on the philosophy of mind. It is published, and a professor decides to assign it in their philosophy of mind class. As a result of reading the book, students learn a lot of new things about philosophy of mind, and some of them radically change their views.4 Now, as a result of writing the book, you’ve interfered with the inquiry of these students (concerning certain questions in the philosophy of mind). None of these students consented to you writing this book, but (let’s suppose) they’ve epistemically benefitted from reading it. This case seems to fulfill all three conditions, but it’s not clear that you have
engaged in epistemic paternalism – writing a new book is just not clearly paternalistic in any way. This case brings up several issues. First, epistemic paternalism needs to be intentional. Note that we are constantly interfering with others’ inquiry by, e.g. withholding evidence from almost everyone around us all the time. There are always additional ways that we could epistemically improve others. But it doesn’t seem like we are constantly engaging in epistemic paternalism. Nonetheless, as noted above, many cases of epistemic paternalism seem to involve intentionally withholding important evidence from someone. Part of what explains the difference is that accidental or unintentional paternalism seems impossible. Thus, we might edit our definition to reflect that epistemic paternalism is intentional.5 On this revision, withholding evidence only counts as epistemic paternalism if it is done intentionally. This doesn’t fully deal with our case, though – you likely wrote the book to epistemically improve someone, even if not those students in particular. It doesn’t seem that paternalism, even if intentional, needs to be directed at a particular person. A government might build a wall around a dangerous part of a mountain hike to prevent people from exploring that area. The wall isn’t built to keep out certain individuals in particular, but still seems paternalistic. One might instead think that, in the philosophy of mind case, the lack-of-consent condition isn’t met. Maybe the students do, in fact, consent by, e.g. enrolling in the relevant course. Suppose instead a student finds the book randomly in the library and starts reading it. While the student didn’t consent to you writing the book, they make the decision to pick it up and read it, so this doesn’t count as epistemic paternalism. This suggests that consenting to the book’s being written isn’t the relevant object of consent. Generally, we should pay more attention to the consent condition.
What counts as consent? And what exactly do the agents in question need to consent to? More attention also needs to be paid to what counts as an interference.6 Note that interfering with another’s inquiry comes in degrees. On the more innocent end of the spectrum, there is sharing a fact with someone that they didn’t ask about – consider the talkative neighbor who always wants to tell you about their day, their job, or their kids. Then there’s handing someone a book or pamphlet, or recommending they read something. Then there are more serious interferences, like epistemic manipulation: sharing certain facts but purposefully leaving others out, or intentionally presenting certain arguments to your class so they come to have a particular view. On the most extreme end of the spectrum, there are, e.g., serious acts of lying and brainwashing, and epistemic threats and coercion. We can suppose that, in all these cases, the consent condition isn’t met and there
is an epistemic improvement, so they are all at least candidates for epistemic paternalism. But it’s not clear that we should count all these cases as epistemic paternalism. If your neighbor is a talker, you may learn more than you want to know from them, but is this really paternalistic? Is handing your friend an interesting book to read paternalistic? Maybe the interference has to be sufficiently significant to count as epistemic paternalism. This suggests:

Definition 4: Epistemic paternalism =df (i) intentionally and significantly interfering with someone’s inquiry, (ii) without their consent, (iii) for their own epistemic good.

I worry this definition is still not sufficiently precise. (What counts as a “significant interference”? What counts as consent?) This points to the following first open question:

Open Question 1: when defining epistemic paternalism, how should we understand consent and interference?

And the general discussion in this section points to a second, related question:

Open Question 2: what is the best way to define epistemic paternalism, so that it is neither too broad nor too narrow?

In my view, something close to definition 4 is promising and perhaps can deal with the tricky cases by being clearer about the consent and interference conditions. Nonetheless, this discussion shows that even basic questions about the definition of epistemic paternalism need more attention and refinement.
7.3 Epistemic and General Paternalism

This section continues our exploration of the nature of epistemic paternalism. More specifically, it contrasts epistemic paternalism with general paternalism and assesses what makes the former an epistemic phenomenon at all. While epistemic paternalism is generally taken to be a type of general paternalism, it is also assumed to be a unique subset that is distinguishable from cases of non-epistemic paternalism. This section argues that either the distinction is much more slippery than it appears prima facie, or cases of epistemic paternalism are extremely rare. Consider again our definition: the interference must happen for the inquirer’s good. As a preliminary note, it is natural to read this as claiming that the inquirer’s epistemic good is a reason for the interference. What kind of reason exactly? Metaethicists distinguish different kinds of
reasons – three of which are motivating reasons (facts for which someone phi-s), normative or justifying reasons (facts that count in favor of phi-ing), and explanatory reasons (facts that explain why someone phi-ed). Since our goal here is simply to define epistemic paternalism, I think the best candidate here is motivating reasons – the epistemic benefit is the fact for which they interfered, and normative reasons for epistemic paternalism come in when we examine the question of whether epistemic paternalism is justified (e.g. in the next section).7 One of the main things that distinguishes epistemic and general paternalism is that the interference must be done for the inquirer’s epistemic good. In other words, one of the primary factors that characterizes epistemic paternalism is that it is motivated in a distinctly epistemic way. This raises the question: need the interferer be motivated only by the inquirer’s epistemic good? What if they act partially for the inquirer’s epistemic good? Many cases in the actual epistemic paternalism literature are arguably instances of the latter. For example, consider a commonly used case: a judge withholding evidence from a jury to raise the probability they will come to the right verdict (maybe the judge has good evidence that the jury will weigh the evidence improperly). Presumably, the judge might do this in part for the epistemic good involved (the jury’s getting a true belief, justified belief, knowledge, etc.) but in real-life cases, a major part of the judge’s motivation is moral: to convict the guilty and to let the innocent go free. This latter thing is only contingently connected to the jury’s having true beliefs, and if (for some odd reason) the judge thought that the jury’s having false beliefs would lead to convicting the guilty and the innocent going free, the judge would likely not be motivated to interfere in the same way. This suggests that the epistemic is not the judge’s primary motive.
Goldman (1991), who pioneered the philosophical interest in epistemic paternalism, is sensitive to this point. After stating the definition of epistemic paternalism, he says, “the restriction to the epistemic viewpoint is again important. In legal settings, for example, there are many non-epistemic reasons for refusing to provide relevant evidence to jurors” (114). However, in many of the examples of epistemic paternalism he provides, it is not at all clear that the agents involved are motivated purely by epistemic factors. For instance, he considers epistemic paternalism in educational curricula, noting that, in health classes, we don’t give “equal time to drug pushers” (121), which has an obvious non-epistemic motivation (keeping kids from doing drugs). He also mentions the battle over whether creationism should be taught in schools, noting that excluding creationism from the curriculum might be a case of epistemic paternalism. However, he admits that, in this case, non-epistemic constitutional issues about including religion in public education muddy the waters (122). He also mentions epistemic paternalism
What’s Epistemic about Paternalism? 139
in commercial advertising, to combat false or deceptive advertising. While he claims that the goal here is “to keep [buyers] from believing untruths about commercial products” (122), this is clearly not the only goal, and most people care about this goal because they care about the actual products people will eventually buy. Similar considerations apply to most of Goldman’s other examples, including interfering with which news items are covered by the media, and how they are covered – people’s beliefs formed from watching and reading the news have significant non-epistemic effects. In fact, Ahlstrom-Vij (2013a, pp. 117, 134) invokes an alignment condition for this very reason: on his view, one of two jointly sufficient conditions for justified epistemic paternalism is that the epistemic and non-epistemic reasons be normatively aligned (e.g. both permit the interference). One reason he needs this condition is that epistemic paternalism is so frequently tied up with, and affects, the non-epistemic. This raises the question: are there any cases of “pure” epistemic paternalism, where someone is motivated to interfere with another’s inquiry solely on epistemic grounds? Such cases turn out to be quite rare.8 A potential set of cases involves abstract topics in science, philosophy, etc. that have little practical import. For example, Bullock (2018, p. 434) discusses an instructor acting paternalistically to teach her students quantum mechanics. Something similar may happen in some philosophy classes – e.g. a professor engages in epistemic paternalism to help her students better understand the realist/nominalist debate, and does so purely to facilitate their understanding of the issues, rather than for any practical or moral reasons.
These seem like plausible cases where one might be motivated to interfere with another’s inquiry purely for the sake of promoting true/justified beliefs or knowledge, and not for downstream non-epistemic effects. Note also that not all paternalism regarding abstract concepts is purely epistemic, since we might sometimes be motivated to help someone understand something abstract for a practical or moral reason (e.g. to help them get into grad school or to get a good teaching evaluation). Of course, this point applies in the opposite direction to the cases above as well; an interference might have non-epistemic benefits, but our motives for interfering could still be solely epistemic.9 Nonetheless, I suspect this normally isn’t the case; again, we care about a jury’s beliefs because we want to free the innocent and convict the guilty; we care about our children’s beliefs about drugs so they don’t do them, etc. The general lesson is that, if epistemic paternalism requires acting only for another’s epistemic good, it is probably a very rare phenomenon, even if we understand epistemic value quite broadly (as including true beliefs, justified beliefs, knowledge, understanding, etc.). This isn’t great news for those working on epistemic paternalism because it means that
(i) real-life cases of epistemic paternalism are virtually non-existent, and (ii) many of the literature’s supposed examples of epistemic paternalism, such as the cases of juries in courtrooms, are not actually cases of epistemic paternalism. Alternatively, one might maintain that an act can count as epistemic paternalism if it is done partially for another’s epistemic good, but practical and moral factors can also be part of the motivation. There are several issues with defining epistemic paternalism this way, however. First, this move strays from the current literature. Goldman (1991), Pritchard (2013), and Bullock (2018) are all sensitive to the idea that paternalism motivated partially by the epistemic and partially by other, non-epistemic factors doesn’t seem distinctly epistemic.10 Of course, one could push back on this, but doing so would be out of step with the current literature. Second, if we allow epistemic paternalism to be motivated by both epistemic and non-epistemic reasons, then it also becomes less clear what makes epistemic paternalism distinctly epistemic. Yes, it involves interfering with inquiry, but are those interferences of any special epistemic interest if they are done partially or even mostly for practical or moral reasons? For example, if I interfere with a juror’s evidence partially to help them come to know who did it, but also out of moral concern for the guilty to be punished and the innocent to be set free, it’s not clear that my interference ought to be classified as pure epistemic paternalism (it might instead be classified as pseudo-epistemic paternalism). You also might wonder whether the weight of each reason matters: what if the epistemic is only a small part of the reason I interfere, and the moral is my primary motive? Maybe an act counts as epistemic paternalism if the epistemic reason is weightier than the other ones. Or maybe it counts if the epistemic reason is sufficient, on its own, to motivate the interferer.
These possibilities are underexplored. Finally, in many real-life cases, the epistemic motivation is (only or primarily) valuable instrumentally. Consider, for instance, why we care about buyers’ beliefs about products they might purchase, children’s beliefs about the effects of drugs, or a jury’s beliefs about who committed a crime. It is because of the downstream moral and practical effects of these beliefs; we rarely care purely about the beliefs themselves. Thus, in many realistic cases, it’s not clear that the distinctly epistemic factors are central to the motivation for the interference at all. Again, recall our example in which a jury’s having false beliefs would lead to convicting the guilty and the innocent going free. While the judge may not lie to the jury (note that she also has moral and practical reasons not to lie), she may withhold evidence even if it makes the jury epistemically worse off, provided doing so leads to practically and morally good consequences. To sum up, when defining epistemic paternalism, we are faced with a dilemma. If epistemic paternalism requires acting only for another’s epistemic good, then we’re spending a lot of intellectual energy and ink
analyzing a practice that rarely occurs (and most of our examples don’t actually apply). However, if an action can count as epistemic paternalism when it is motivated by both epistemic and non-epistemic reasons, then it becomes less clear what distinguishes epistemic paternalism from regular paternalism, why it would deserve its own literature and analysis, and why its justification would differ from that of regular paternalism.11 This leaves us with at least two open questions:
Open Question 3: Does epistemic paternalism require acting only for another’s epistemic good?
Open Question 4: What, if anything, distinguishes epistemic paternalism from regular paternalism?
Now, we turn to normative evaluations of both paternalism and epistemic paternalism.
7.4 Evaluating Epistemic Paternalism
We’ve now discussed the nature of epistemic paternalism and ways that it contrasts with general paternalism. While many questions remain unanswered, and I’ve provided reason to doubt that the distinction is well-defined, I’ll proceed as if we have some way to differentiate the two. More specifically, in this section, I will (modestly) assume that, if an action is primarily motivated by non-epistemic factors, we ought not count it as epistemic paternalism (even if this means that epistemic paternalism is a rare phenomenon). Now, we turn to normative questions. As a preliminary note, it is instructive to examine justifications for regular paternalism and epistemic paternalism side by side, because lessons and concepts from the longstanding, mature paternalism literature can be applied to questions about epistemic paternalism, which is newer and less developed. For example, in cases of general paternalism, there is widely taken to be a presumption of non-interference. We should err on the side of not interfering with the free choices of others, unless we have an overriding reason to do so (Mill, 1869; Dworkin, 2010, sec. 3). The “burden of proof,” then, is on the person engaging in paternalism to justify their interference. One might wonder: does this presumption hold in the epistemic case? Interestingly, most of the epistemic paternalism literature thus far involves arguments that it is justified, and almost no one has argued that epistemic paternalism is always or almost always impermissible (Bullock, 2018, being the notable exception). Since so many people seem to be in favor of epistemic paternalism (apparently epistemic libertarians are quite rare!), this might suggest that the presumption of non-interference doesn’t hold in the epistemic case. At the same time, philosophers like to argue for controversial theses, so this observation
about the literature may not mean much. It is nonetheless worth exploring whether the presumption that applies to general paternalism also applies to epistemic paternalism.12 Now, when it comes to justifications of general and epistemic paternalism, there are at least four questions one might ask:
Q1. Is paternalism all-things-considered justified?
Q2. Is paternalism epistemically justified?
Q3. Is epistemic paternalism all-things-considered justified?
Q4. Is epistemic paternalism epistemically justified?
Here, I understand all-things-considered justification to include moral, practical, and epistemic reasons, and it may also include other types of value, like aesthetic reasons. (Some, e.g. Feldman, 2000, are skeptical of all-things-considered justification because they think different types of reasons aren’t commensurable.) Note that there are questions in addition to Q1–Q4 that isolate each of these types of value, e.g. is epistemic paternalism morally (or practically) justified? Note also that, assuming epistemic paternalism is a subset of general paternalism, a negative answer to Q1 entails a negative answer to Q3, and a negative answer to Q2 entails a negative answer to Q4. Similarly, a positive answer to Q4 entails a positive answer to Q2, and a positive answer to Q3 entails a positive answer to Q1. Finally, it is crucial to be clear about which question is of interest, rather than simply speaking of epistemic paternalism’s “justification.” It could very well be that epistemic paternalism is, e.g. epistemically justified but morally unjustified, or practically justified but not all-things-considered justified. For example, Bullock’s (2018) argument that epistemic paternalism is morally unjustified might not give us reason to doubt Goldman’s (1991) argument that it is epistemically justified.
Several of those who have written on the justification of epistemic paternalism thus far have focused on Q3.13 However, this strikes me as a debate that isn’t especially fruitful, because it seems difficult, if not impossible, to justify epistemic paternalism on all-things-considered grounds. This is because, as Pritchard (2013) and Bullock (2018) point out, whatever epistemic gain supposedly motivates epistemic paternalism – e.g. true belief, justified belief, knowledge – will almost always be outweighed by moral considerations in favor of personal autonomy and sovereignty. (Recall here that, if we understand epistemic paternalism as primarily motivated by epistemic considerations, we cannot count the moral or practical value conferred by the content of the beliefs; it is purely the epistemic value of the true/justified beliefs/knowledge that counts.) Bullock (2018, pp. 442–443) gives the following illustrative example:
Suppose, for example, that I play a series of physics lectures to you whilst you are sleeping, with the intention that you subconsciously
learn quantum mechanics. I have good reason to think this will be effective. You happen to have no interest in quantum mechanics and the facts that you learn have no bearing on your wellbeing. Is this interference justified on balance? … It seems intuitive … that the loss to personal sovereignty is in fact a weightier concern than the gain in knowledge: indeed, it looks as though you would be morally correct to admonish me for my secretive interferences even if you wanted to learn about quantum mechanics.
I agree with Bullock’s assessment of this case – it is both morally and all-things-considered impermissible for me to sneak the headphones on you while you are sleeping, simply because I want you to learn about quantum mechanics. It is hard to see how the epistemic gain of doing so could outweigh the moral losses. This leads to our next open question:
Open Question 5: Can epistemic paternalism ever be morally or all-things-considered justified?
In general, it is hard to see how epistemic paternalism could be morally or all-things-considered justified, especially if it requires acting solely for another’s epistemic good. Part of the answer depends on what counts as a significant interference, per definition 4. For instance, if I instead give you a book about quantum mechanics, it’s much less clear that this is morally and all-things-considered impermissible.14 However, Bullock might reply that this interference isn’t sufficiently significant, so it doesn’t count as epistemic paternalism. But there is one notable exception that Bullock’s argument overlooks – the case of nudges. (This is another example of how epistemic paternalism can fruitfully borrow from regular paternalism.) The concept of a nudge was pioneered by Sunstein and Thaler (2003). They argue that, in certain situations, we are forced to present choices to others in a certain way.
In these cases, no matter what we do, we will have some influence on their decisions. Sunstein and Thaler argue that, for this reason, we might as well present the choices in a way that makes it more likely that people will choose what is best for them. This is called a nudge. For example, in a cafeteria, students are more likely to choose foods that are at eye level. One might “nudge” these students by putting healthier foods at eye level, making it more likely that they will pick the healthy option. Something must be put at eye level, so, the reasoning goes, we might as well make it the healthier food. Sunstein and Thaler argue that nudges are a case of paternalism that even libertarians should be happy with – it is difficult to see why nudges would be unjustified. This raises the question: are there such things as epistemic nudges? For example, when presenting people with a body of information, we have to present some information first. As discussed in Section 7.2, humans are subject to ordering
effects, such that the order in which pieces of evidence are presented affects people’s ultimate judgments. We might present certain information first to facilitate our audience’s understanding, even though this might change what they ultimately conclude. After all, we have to present the information in some order, so we might as well do what is epistemically best for our audience. Nudges may be uncontroversial cases of morally and all-things-considered justified epistemic paternalism. Generally, epistemic nudges strike me as a fruitful area for further research.15 Further, they strike me as one of the rare cases where there is a moral or all-things-considered justification for epistemic paternalism. Because morally and all-things-considered justified epistemic paternalism is plausibly rare, a potentially more fruitful debate involves Q4, which is the original focus of Goldman’s (1991) paper: is epistemic paternalism epistemically justified? This question is about the epistemic justification of a particular practice, namely, interfering with inquiry. This assumes that epistemic norms can guide behavior. Further, ‘epistemic justification’ (or ‘epistemically justified’) is used here in a non-standard way – it doesn’t pick out the thing that turns true un-Gettiered belief into knowledge.16 Here, ‘justification’ indicates when a practice, on balance, promotes epistemic goods. This sheds light on why many in the epistemic paternalism literature have either implicitly or explicitly adopted a version of epistemic consequentialism.17 Epistemic consequentialism is the view that epistemic goods are more fundamental than epistemic obligations – what you epistemically ought to do is promote certain epistemic goods. Many candidate deontological epistemic norms – e.g. believe in accord with your evidence, believe truths, don’t believe contradictions, have probabilistic credences, etc.
– are not norms that guide action, but evaluative or teleological norms that apply to belief. It is unclear what deontological epistemic principles would govern action; at least, such principles are not frequently discussed by epistemologists. However, it nonetheless seems epistemically good to take actions that promote valuable epistemic states, such as true belief, justified belief, coherent belief, etc. Thus, one way you might epistemically justify a particular practice is by arguing that it will likely result in many epistemically good states and/or enable the avoidance of epistemically bad states, or, more simply, that it maximizes expected epistemic value (Greaves, 2013). A potential paper would explore what the deontological, epistemic, action-guiding norms might be (if they exist), and how they affect the normative status of epistemic paternalism. Many in this literature simply assume a version of epistemic consequentialism. However, note that we need not be epistemic consequentialists across the board to make sense of Q4. We need only assume epistemic consequentialism about epistemic norms for action; we can be deontologists about epistemic norms for belief. We can still maintain that deontological norms such as believe in
accord with your evidence or have probabilistic credences apply directly to attitudes themselves. However, when it comes to how we should act, epistemically, consequentialist norms kick in: we should act in ways that, e.g., maximize expected epistemic value. Nonetheless, this still leaves the following question:
Open Question 6: Can epistemic norms guide action?
If the answer is no, then we cannot evaluate epistemic paternalism from an epistemic point of view at all. And several philosophers, including Feldman (2000), Kelly (2002, fn. 30), Berker (2018), and Simion (2018), argue that there aren’t epistemic reasons for action. This leads to another potential dilemma, this time concerning the normative evaluation of epistemic paternalism. Either we evaluate epistemic paternalism from a moral or all-things-considered perspective, in which case it is hard to see how epistemic paternalism is justified (this is essentially Bullock’s (2018) argument, nudges being the key exception), or we evaluate it epistemically, in which case we must controversially assume that epistemic norms can guide action. The latter might not be so bad, insofar as it is reasonable to think some epistemic norms guide certain kinds of behaviors, such as how we get evidence (e.g. inquiry, evidence gathering) and what we do with our evidence (e.g. critical reasoning, reflection on our evidence).18 But with either natural way of evaluating epistemic paternalism, we are left with surprising or controversial results. Now, we turn to ways that epistemic paternalism compares and contrasts with other questions in social epistemology.
7.5 Epistemic Paternalism and Social Epistemology
Goldman (1991) mentions several times that epistemic paternalism falls under social epistemology, or what he calls social epistemics. He explains, “Social epistemics studies the veritistic properties of social practices, or institutional rules that directly or indirectly govern communication and doxastic decision” (120). Goldman and O’Connor (2019), in the Stanford Encyclopedia of Philosophy entry on ‘Social Epistemology’, define social epistemology as “an enterprise concerned with how people can best pursue the truth (whichever truth is in question) with the help of, or in the face of, others.” Later in that same article, they categorize some of the central topics in social epistemology as testimony, disagreement, how we should identify and respond to expert belief, and epistemic injustice. This suggests a noteworthy sense in which epistemic paternalism differs from other questions in social epistemology. Many of the topics Goldman and O’Connor mention – disagreement, testimony, epistemic injustice, and responding to expert opinion – can be framed as (at least primarily) concerning the question, “What should I believe?” The disagreement
literature asks what I should believe in the face of peers who disagree with me. The epistemology of testimony is about whether, and under what conditions, I should believe a testifier. Questions about experts also concern what I should believe in response to experts (or so-called experts). Even epistemic injustice can be seen as largely concerning the question of what I should believe – in response to the testimony of marginalized groups. Interestingly, epistemic paternalism notably differs from these other topics, and is, in some sense, more social. This is because epistemic paternalism concerns our epistemic obligations to others – not merely what we should believe when we encounter other epistemic agents. It is about how we ought to affect – or not affect – the beliefs of others. This complex issue doesn’t merely boil down to the question of what a single individual should believe, but involves tricky considerations about how agents should treat each other, epistemically. One reason this is notable is that epistemic obligations to other people are rarely discussed, in any literature; in fact, many assume that our obligations to others are merely moral ones.19 Further, it is noteworthy that many issues in social epistemology are less social than one might have thought, and can be grouped alongside issues in traditional epistemology, at least insofar as they all concern the question, “What should I believe?” Of course, the fact that epistemic paternalism goes beyond this question might explain why some of the issues mentioned above crop up – e.g. whether there are epistemic reasons for action, whether epistemic consequentialism is true, and how we should understand epistemic justification when it comes to our obligations to others.
Nonetheless, I think this shows that we can further divide social epistemology into distinct and interesting categories, and that some traditional issues in social epistemology might be notably “less social” than others. This brings us to a final open question:
Open Question 7: What is the best way to define social epistemology, and how should we categorize its topics?
While, unlike the other questions, this one is not necessarily crucial for those writing on epistemic paternalism to address, I nonetheless think it points to stimulating and significant general issues in social epistemology.
7.6 Conclusion
My primary goal has been to critically examine the concept of epistemic paternalism and to survey and evaluate various normative questions we might ask about it. While this paper has raised a lot of problems for both the concept of epistemic paternalism and its evaluation, I nonetheless hope
that I’ve enabled clearer thinking about epistemic paternalism. This includes what epistemic paternalism is, what controversial commitments it may carry (e.g. potentially that there are epistemic reasons for action, and maybe certain strands of epistemic consequentialism), and which commonly made assumptions in the literature are unnecessary (e.g. veritism). Ultimately, I hope this chapter facilitates the aim of accurately answering questions about what kinds of epistemic paternalism (if any) are justified, under what circumstances they are justified, and in what sense they are justified.
Acknowledgments
Thanks to Kirk Lougheed and Jon Matheson for helpful comments on an earlier draft. Thanks to Seth Lazar, Justin D’Ambrosio, Nic Southwood, Matthew Kopec, Klaas Kraay, Chris Dragos, and audiences at the 2019 Canadian Philosophical Association, Australian National University, and Michigan State University for valuable discussion and feedback. Research on this chapter was supported by Australian Research Council Grant D170101394.
Notes
1 Discussions of epistemic paternalism include Goldman (1991), Ahlstrom-Vij (2013a, 2013b), Pritchard (2013), Ridder (2013), Bullock (2018), and Croce (2018). See also Bernal and Axtell (2020) for an edited volume on epistemic paternalism.
2 See Jackson (forthcoming) and Jackson and Turnbull (forthcoming) for further discussion of the ways one’s broader epistemic situation can affect one’s beliefs without affecting one’s evidence.
3 See, for example, Walker et al. (1972), Dean (1980), Hogarth and Einhorn (1992), and Wiegmann et al. (2012).
4 Thanks to Frank Jackson for suggesting this case to me.
5 Thanks to Jon Matheson.
6 Thanks to Kirk Lougheed.
7 Thanks to Seth Lazar for helpful discussion; see Ahlstrom-Vij (2013a, p. 113).
8 Thanks to Seth Lazar for helpful discussion.
9 Thanks to Jon Matheson.
10 Ahlstrom-Vij (2013a, pp. 117, 134) argues that for epistemic paternalism to be justified, epistemic and non-epistemic reasons need to be aligned in a particular way. But his point isn’t that non-epistemic reasons are frequently part of the motivating reason for epistemic paternalism; his point is about normative reasons (117).
11 One possibility is that both general and epistemic paternalism essentially concern the same phenomenon, but are simply evaluated from different points of view; the former evaluates it morally and practically, and the latter evaluates it epistemically. (Thus, we would only be concerned with Q1 and Q2 from the next section.) While I’ll proceed as if epistemic paternalism is a unique phenomenon, this suggestion warrants further exploration. Thanks to Kirk Lougheed and Jon Matheson for helpful discussion.
12 Thanks to Nic Southwood.
13 Including Pritchard (2013) and Ahlstrom-Vij (2013a). Bullock (2018) focuses on moral justification for epistemic paternalism (see her fn. 7), but my comments in this paragraph also apply to her view.
14 Thanks to Jon Matheson.
15 Thanks to Matt Kopec. On epistemic nudges, see Meehan (2020).
16 Thanks to Pamela Robinson.
17 For recent defenses of epistemic consequentialism, see Singer (2018a, 2018b). See also Ahlstrom-Vij and Dunn (2018).
18 See Tidman (1996), Hookway (1999), and Friedman (2019).
19 One exception is Basu (2019), who discusses a potential epistemic obligation to others: not to wrong others in what we believe about them. (Note also that Basu advocates for moral encroachment, so this obligation is both epistemic and moral, since, according to moral encroachment, the moral can affect epistemic rationality.) Another potential exception in the philosophy of testimony literature is the obligation that speakers have to hearers. However, this may not be distinctly epistemic either (e.g. lying or intentionally misleading another is often taken to be morally impermissible, not merely epistemically impermissible). For more on testimony and social obligations, see Goldman (1999) and Lackey (2008). Thanks to Jon Matheson.
References
Ahlstrom-Vij, K. (2013a). Epistemic paternalism: A defence. London: Palgrave Macmillan.
Ahlstrom-Vij, K. (2013b). Why we cannot rely on ourselves for epistemic improvement. Philosophical Issues, 23, 276–296.
Ahlstrom-Vij, K., & Dunn, J. (Eds.). (2018). Epistemic consequentialism. Oxford: Oxford University Press.
Bansback, N. L., Li, L. C., Lynd, L., & Bryan, S. (2014). Exploiting order effects to improve the quality of decisions. Patient Education and Counseling, 96(2), 197–203.
Basu, R. (2019). What we epistemically owe to each other. Philosophical Studies, 176, 915–931.
Berker, S. (2018). A combinatorial argument against practical reasons for belief. Analytic Philosophy, 59(4), 427–470.
Bernal, A., & Axtell, G. (2020). Epistemic paternalism reconsidered: Conceptions, justifications, and implications. Lanham, MD: Rowman & Littlefield.
Bullock, E. C. (2016). Mandatory disclosure and medical paternalism. Ethical Theory and Moral Practice, 19(2), 409–424.
Bullock, E. C. (2018). Knowing and not-knowing for your own good: The limits of epistemic paternalism. Journal of Applied Philosophy, 35(2), 433–447.
Carballo, A. P. (2018). Good questions. In K. Ahlstrom-Vij & J. Dunn (Eds.), Epistemic consequentialism. Oxford: Oxford University Press.
Croce, M. (2018). Epistemic paternalism and the service conception of epistemic authority. Metaphilosophy, 49(3), 305–327.
Dean, M. L. (1980). Presentation order effects in product taste tests. The Journal of Psychology, 105(1), 107–110.
DePaul, M. (2001). Value monism in epistemology. In M. Steup (Ed.), Knowledge, truth, and duty: Essays on epistemic justification, responsibility, and virtue (pp. 170–183). Oxford: Oxford University Press.
Dworkin, G. (2010). Paternalism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/paternalism/.
Feldman, R. (2000). The ethics of belief. Philosophy and Phenomenological Research, 60(3), 667–695.
Friedman, J. (2019). Inquiry and belief. Noûs, 53(2), 296–315.
Goldman, A. I. (1991). Epistemic paternalism: Communication control in law and society. The Journal of Philosophy, 88(3), 113–131.
Goldman, A. I. (1999). Knowledge in a social world. Oxford: Oxford University Press.
Goldman, A. I., & O’Connor, C. (2019). Social epistemology. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/epistemology-social/.
Greaves, H. (2013). Epistemic decision theory. Mind, 122(488), 915–952.
Grill, K., & Hanna, J. (2018). The Routledge handbook of the philosophy of paternalism. New York: Routledge.
Hogarth, R. M., & Einhorn, H. J. (1992). Order effects in belief updating: The belief-adjustment model. Cognitive Psychology, 24(1), 1–55.
Hookway, C. (1999). Epistemic norms and theoretical deliberation. Ratio, 12(4), 380–397.
Jackson, E. (2020). Epistemic paternalism, epistemic permissivism, and standpoint epistemology. In A. Bernal & G. Axtell (Eds.), Epistemic paternalism reconsidered: Conceptions, justifications, and implications (pp. 201–215). Lanham, MD: Rowman & Littlefield.
Jackson, E. (Forthcoming). A defense of intrapersonal belief permissivism. Episteme.
Jackson, E., & Turnbull, M. G. (Forthcoming). Permissivism, underdetermination, and evidence. In C. Littlejohn & M. Lasonen-Aarnio (Eds.), The Routledge handbook of the philosophy of evidence. New York: Routledge.
Kelly, T. (2002). The rationality of belief and some other propositional attitudes.
Philosophical Studies, 110(2), 163–196. Lackey, J. (2008). Learning from words: Testimony as a source of knowledge. Oxford: Oxford University Press. Lougheed, K. (2021). Epistemic paternalism, open group inquiry and religious knowledge. Res Philosophica, 98(2), 261–281. Meehan, D. (2020). Epistemic vices and epistemic nudging: A solution? In A. Bernal & G. Axtell (Eds.), Epistemic paternalism reconsidered: Conceptions, justifications, and implications (pp. 247–261). Lanham, MD: Rowman & Littlefield. Mill, J. S. (1869). On liberty. London: Longman, Roberts & Green. Pritchard, D. (2013). Epistemic paternalism and epistemic value. Philosophical Inquiries, 1(2), 9–37. Ridder, J. D. (2013). Is there epistemic justification for secrecy in science? Episteme, 10(2), 101–116. Simion, M. (2018). No epistemic norm for action. American Philosophical Quarterly, 55(3), 231–238.
150 Elizabeth Jackson Singer, D. J. (2018a). How to be an epistemic consequentialist. Philosophical Quarterly, 68(272), 580–602. Singer, D. J. (2018b). Permissible epistemic trade-offs. Australasian Journal of Philosophy, 97(2), 281–293. Sunstein, C. R., & Thaler, R. H. (2003). Libertarian paternalism is not an oxymoron. University of Chicago Law Review, 70(4), 1159–1202. Tidman, P. (1996). Critical reflection: An alleged epistemic duty. Analysis, 56(4), 268–276. Walker, L., Thibaut, J., & Andreoli, V. (1972). Order of presentation at trial. Yale Law Journal, 82, 216–226. Wiegmann, A., Okan, Y., & Nagel, J. (2012). Order effects in moral judgment. Philosophical Psychology, 25(6), 813–836.
Part III
Epistemic Autonomy and Epistemic Virtue and Value
8 Intellectual Autonomy and Intellectual Interdependence Heather Battaly
Compare two sets of people, all of whose members are in the process of acquiring beliefs and conducting intellectual inquiries. Those in the first set tend to make up their own minds and rely on their own cognitive faculties and reasoning. They want to see things for themselves, and tend to marshal their own evidence and evaluate it for themselves. They have the trait of intellectual autonomy. Those in the second set tend to consult and collaborate with other people and/or digital sources. They tend to rely on the cognitive faculties and reasoning of other sources, and to defer to the views and evaluations of those sources. They have the trait of intellectual interdependence. Are these traits of autonomy and interdependence intellectual virtues? This chapter argues that these traits can be, but are not always, intellectual virtues. For starters, they won’t be intellectual virtues when agents have these traits to excess – when they are intellectually autonomous or intellectually interdependent to a fault. As scholars have pointed out, excessive autonomy can take the form of “extreme epistemic egoism”1 and lead to cognitive isolation (Carter, 2020, p. 239), while excessive interdependence can take the form of “blind deference” (Ahlstrom-Vij, 2019, p. 216) and lead to gullibility (Fricker, 1994, p. 145). For the traits of autonomy and interdependence to be intellectual virtues, our agents must at least regulate them, reining in such excesses, but without over-correcting. To do this, our agents arguably need the virtue of good judgment. But, their resulting dispositions – to think for themselves appropriately, and to think with others appropriately – can still fail to be intellectual virtues. Agents who think for themselves (or with others) appropriately, but only do so for selfish reasons – e.g., they might want their name(s) to be associated with an important discovery in perpetuity – don’t have intellectual virtues. 
To be intellectual virtues, their dispositions to think appropriately will also need to be grounded in motivations for epistemic goods, including motivations for truth, knowledge and understanding. For this general framework, I am indebted to Whitcomb et al.’s (2021) distinction between the trait and virtue of humility.
This analysis of the virtues of intellectual autonomy and intellectual interdependence is Aristotelian in spirit, insofar as it conceives of these virtues as character traits that avoid excess and deficiency. As such, it shares a family resemblance with the analyses of intellectual autonomy endorsed by J. Adam Carter (2020), Nathan King (2020), Jon Matheson (2021), and Roberts and Wood (2007). The chapter aims to add to this theoretical landscape in two ways.

First, it approaches the analysis of intellectual autonomy and interdependence through the lens of what Ian Kidd (2020, p. 81) calls “normative contextualism.” That is, it does not presuppose that intellectual autonomy and interdependence are virtues. Rather, it highlights the distinction between the traits of intellectual autonomy and interdependence, on the one hand, and the virtues that go by those same names, on the other. In so doing, it initially conceives of intellectual autonomy and intellectual interdependence as normatively neutral traits, and then investigates what else might be needed to “turn” those neutral traits into intellectual virtues (or vices). This general approach to separating the analysis of a trait from its normative status as a virtue (or a vice) can help us home in on what makes the trait a virtue (or a vice) when it is one.

Second, the chapter intentionally proposes two virtues – one of intellectual autonomy, another of intellectual interdependence – rather than combining these into a single virtue. This allows logical space for an agent to possess one of these virtues without possessing the other, even if we should generally expect the possession of these two virtues to be correlated.

By way of preview, Sections 8.1 and 8.2 sketch accounts of the traits of intellectual autonomy and intellectual interdependence, respectively. They likewise examine excesses and deficiencies of these traits.
They suggest that these excesses and deficiencies may be intellectual vices, but leave the exact analysis of what makes them intellectual vices to the sub-field of vice epistemology. Section 8.3 argues that the traits of intellectual autonomy and intellectual interdependence won’t be intellectual virtues unless the agents who possess them are motivated by epistemic goods, and also possess and exercise good judgment. The final section addresses objections, and closes with some open questions about the connections between intellectual autonomy, interdependence, and other intellectual virtues, particularly intellectual humility, pride, and trust of self and others.
8.1 Intellectual Autonomy: The Trait, Excesses, and Deficiencies

Like all traits, the traits of intellectual autonomy and intellectual interdependence are dispositions. The concept of a disposition is both a “threshold concept” and a “degree concept.” This means that in order to count as having the traits of intellectual autonomy and intellectual interdependence, agents must meet a basic threshold for having the particular dispositions in question. It also means that once those basic thresholds are
met, they can be exceeded to different degrees, depending on the relative strength of the particular disposition in the agent.

8.1.1 The Trait of Intellectual Autonomy

With the above in mind, let’s begin with a sketch of the trait of intellectual autonomy. Agents who have this trait are disposed to think independently – to think for themselves and make up their own minds. So, when the goal of inquiry is to arrive at beliefs about a particular matter (e.g., whether anthropogenic climate change is real, where the Sears Tower is located, whether I left my keys somewhere on campus), intellectually autonomous agents are those who want, and tend, to see things for themselves, and to grasp matters “via [their] own cognitive resources” (Pritchard, 2016, p. 38). This can include wanting to see pertinent reasons and evidence for themselves, and a tendency to rely on their own faculties in engaging with and evaluating such evidence. Since intellectual inquiries can have goals that differ from, or go beyond, belief-acquisition, these claims will generalize. Accordingly, when the goal of inquiry is instead, e.g., generating questions, or brainstorming possible options, we can likewise expect agents with the trait of intellectual autonomy to want, and tend, to think for themselves and to rely on their own cognitive faculties.2 To summarize this exploratory sketch, the trait of intellectual autonomy (TIA) arguably consists in: (TIA1) a behavioral disposition to think independently – to think for oneself and to rely on one’s own cognitive faculties, and (TIA2) a motivational disposition to want to behave in these ways. It consists in a combined behavioral-and-motivational disposition, which agents can manifest in inquiries that aim at belief-acquisition, and, more generally, in the ways they conduct their intellectual inquiries.
To possess the trait of intellectual autonomy, one must meet a basic threshold for having this combined behavioral-and-motivational disposition. In other words, one must be consistent enough in wanting to think for oneself and in actually thinking for oneself, when conducting inquiries. Exactly how consistent one must be is an open question. Still, it is clear that agents who only occasionally think independently won’t be consistent enough to meet the basic threshold and won’t count as having the trait. Note that this sketch aims to describe the trait of intellectual autonomy in normatively neutral terms; importantly, it does not presuppose that the trait is an intellectual virtue. Indeed, the discussion below suggests that excesses of the trait can amount to intellectual vices (as can deficiencies of the trait).

8.1.2 Excesses of the Trait of Intellectual Autonomy

Recall that once the threshold for intellectual autonomy is met, it can be exceeded to different degrees, depending on the strength of the agent’s disposition to think for herself. These excesses get agents into trouble.
To put this point differently, agents who possess the trait to excess will consistently think for themselves when it is inappropriate to do so: (very) defeasibly, they consistently think for themselves at (some of) the wrong times (e.g., when they aren’t reliable), or in (some of) the wrong ways (e.g., using unreliable methods), or with respect to (some of) the wrong objects (e.g., unanswerable questions or projects doomed to fail).3 In short, one reason the trait of intellectual autonomy won’t always be an intellectual virtue is that agents can possess the trait to excess. Doing so makes them intellectually autonomous to a fault.4

Warnings about excesses of intellectual autonomy are replete in the literature, even if not couched in exactly these terms.5 For instance, it has been argued that being a “complete autodidact” who relies solely on his own cognitive faculties (Roberts & Wood, 2007, p. 259) is a sign of paranoid skepticism (Fricker, 2006, p. 243), leading to cognitive isolation (Carter, 2020, p. 239). Matheson (under review) adds that some excesses of intellectual autonomy also manifest an epistemically unjust lack of trust in the competence of other agents. Especially pertinent for present purposes are Linda Zagzebski’s warnings about “extreme epistemic egoism.” On Zagzebski’s view, the extreme epistemic egoist is someone who:

    Maintains that the fact that someone else has a belief is never a reason for her to believe it, not even when conjoined with evidence that the other person is reliable. If she finds out that someone else believes p, she will demand proof of p that she can determine by the use of her own faculties, given her own previous beliefs, but she will never believe anything on testimony. (2020, p. 263)

Zagzebski argues that extreme epistemic egoism leads agents to rely on their own faculties, even when they have evidence (i) that they are not themselves reliable in a particular domain and (ii) that other agents are reliable in that domain (2009, p. 89; 2020, p. 265). Applying the terminology introduced above, extreme epistemic egoists are intellectually autonomous to a fault. They think for themselves at some of the wrong times, e.g., when they have evidence that others are reliable and they are not, and in some of the wrong ways, e.g., they use faculties that they have reason to think are unreliable. Zagzebski points toward a related worry about extreme epistemic egoists, which I take to be under-appreciated in the literature (2012, p. 110). Namely, their extreme egoism doesn’t shut off when they enter domains in which nobody is reliable – in which we are at the boundaries of discovery and none of us are likely to get true beliefs. An extreme egoist will continue to think for herself even when she has evidence that nobody is reliable in a particular domain. Rather than postpone her inquiry,
until more reliable methods can be developed, and abstain from forming beliefs about the matter, the extreme egoist presses on, relying on her own faculties. She, for example, believes – rather than merely hypothesizes or predicts – that there is/isn’t sophisticated intelligent life elsewhere in the universe. In other words, extreme egoists don’t just think for themselves in domains where they are not reliable but other agents are; they also think for themselves in domains where none of us are reliable. This is an important point for two reasons. First, it means that extreme egoists will think for themselves inappropriately even when it would also be inappropriate for them to rely on the faculties of other agents. In short, excesses of intellectual autonomy can, but need not, correlate with deficiencies of intellectual interdependence (see Section 8.2). Second, in addition to being a case in which extreme egoists think for themselves at some of the wrong times, and in some of the wrong ways, this is also a candidate case in which they think for themselves with respect to some of the wrong objects, e.g., questions that none of us can currently reliably answer. This point generalizes to ill-formed questions and projects – extreme egoists will continue to think for themselves when questions are unanswerable and projects are doomed to fail. The key point here is that having too much of the trait of intellectual autonomy is a bad thing. Agents with excesses of the trait of autonomy will consistently think for themselves when it is inappropriate to do so. Does this mean that excesses of the trait of intellectual autonomy are all and always intellectual vices?
To answer this question in full, we would first need to consult vice epistemology, one of the goals of which is to investigate the conditions for intellectual vice.6 More specifically, we would need to settle on the sufficient conditions for intellectual vice, which is no small task. That said, we can start on an answer as follows. Suppose that there is more than one route to intellectual vice. Recall Aristotle’s claim that:

    It is possible to fail in many ways … while to succeed is possible only in one way (for which reason one is easy and the other difficult – to miss the mark easy, to hit it difficult); for these reasons … excess and defect are characteristic of vice, and the mean of excellence … For men are good in but one way, but bad in many. (NE.II.6.1106b29–35)

To sharpen the point, suppose that any one of several independent conditions would each be sufficient for turning a trait into an intellectual vice. Perhaps, a trait’s producing a preponderance of bad epistemic effects is sufficient for making it a vice; and a trait’s being grounded in bad epistemic motives is also (independently) sufficient for making it a vice; and, as Jason Baehr (2020, p. 26) has suggested, a trait’s being excessive due
to bad judgment is also (independently) sufficient for making it a vice, and so forth.7 Applying this, if a trait’s being excessive due to bad judgment is itself a sufficient condition for turning it into an intellectual vice, then we have reason to think that excesses of the trait of intellectual autonomy are all and always intellectual vices.8 Here further argument about the conditions of vice is required; but for that, we need to look to vice epistemology. Second, even if vice epistemologists were to agree that excesses of a trait, qua excesses, are all and always intellectual vices, there is a plot twist that comes courtesy of some versions of normative contextualism. As I understand it, normative contextualism can have two main features: (NC1) its normative neutrality in identifying traits, e.g., it doesn’t presuppose that intellectual autonomy is a virtue; and (NC2) its contextualist allowance that the very same trait (had to the very same degree) can count as a vice in one environment, but fail to count as a vice – and may even count as a virtue – in another. Let’s call the view that endorses (NC1) but not (NC2) “modest,” and the view that endorses (NC1) and (NC2), “strong.”9 Thus far, we have been exploring modest normative contextualism and the feature of normative neutrality. Strong normative contextualism is more controversial. Endorsing it would mean that there are no answers to be had about whether a particular degree of a trait is a virtue or a vice simpliciter; there are only answers to be had about whether a particular degree of a trait is a virtue or a vice in a given epistemic environment. Nor would there be any answers about whether a particular degree of a trait is excessive simpliciter; there would only be answers about whether a particular degree of a trait is excessive in a given epistemic environment.
To put this differently, excesses of a trait, vices, and virtues would all be indexed to epistemic environments, if we endorsed strong normative contextualism.10 We can now return to our question – are excesses of the trait of intellectual autonomy all and always intellectual vices? – and to the plot twist (NC2) generates. Suppose that excesses of autonomy in a given epistemic environment E are all and always vices in E. The plot twist is that the very same degree of intellectual autonomy that counts as an excess, and vice, in E can fail to count as an excess and vice in another environment E’, and might even count as a virtue. To rough out an example, suppose we could give a normatively neutral description of the trait of being an epistemic loner that was suitably connected to the trait of intellectual autonomy. Presumably, in our “ordinary” epistemic environment, being such a loner would severely limit one’s knowledge and reasoning skills, and would count as a vice and excess of intellectual autonomy. But, arguably, there are epistemic environments in which being a loner won’t count as a vice or an excess. Suppose that having the knowledge and skills you already have, you enter George Orwell’s 1984, or another hostile or oppressive epistemic environment, in which you are surrounded by falsehoods and deluded sources. In such hostile
epistemic environments, it is far from clear that being an epistemic loner would be excessive or vicious – it might even be virtuous!11 So, should we endorse strong normative contextualism? While I have employed the view elsewhere, and intend the accounts in this chapter to be open to it, the view still requires a full assessment. For that, we may need to look to liberatory virtue and vice theory.

8.1.3 Deficiencies of the Trait of Intellectual Autonomy

In the meantime, we can say that having too much intellectual autonomy is a bad thing. So is having too little – deficiencies of the trait of intellectual autonomy also get agents into trouble. Whether such deficiencies are all and always intellectual vices, and whether they must be indexed to epistemic environments, I likewise leave to vice epistemology and assessments of strong normative contextualism. Agents who are deficient with respect to the trait of intellectual autonomy will consistently fail to think for themselves when it is appropriate to think for themselves: (very) defeasibly, they will fail to think for themselves at (some of) the right times (e.g., when they are reliable),12 or in (some of) the right ways (e.g., using reliable methods), or with respect to (some of) the right objects (e.g., answerable questions and well-formulated projects). To illustrate, an agent might consistently fail to think for herself and consistently defer to the claims of others, even when she is herself reliable and others are not. Perhaps, she is an expert interacting with novices – an expert who has been so often on the receiving end of testimonial injustice that she fails to recognize her own reliability and expertise.13 Or, perhaps, she is unwittingly outsourcing her beliefs to what is in fact a cult of conspiracy theorists (see Section 8.2).
Applying our terminology, this agent consistently fails to think for herself at some of the right times – e.g., when she is reliable, and others aren’t. These failures to think for herself likewise manifest in failures to think in the right ways – e.g., to use (her own) reliable faculties. And, if the agent is deferring to, e.g., “Pizza Gate” conspiracy theorists,14 these failures to think for herself also manifest in failures to think with respect to the right objects – e.g., well-formulated projects. It is important to note that despite the emphasis on (un)reliability in the examples throughout Section 8.1, the (in)appropriateness of thinking for oneself won’t always turn on one’s (un)reliability (or evidence for it). As Matheson (under review) argues, it is sometimes appropriate for agents who are unreliable novices to think for themselves, rather than defer to reliable experts, since they will sometimes need to think for themselves in order to develop their own critical thinking skills and intellectual virtues. Specifically, it will be appropriate for them to think for themselves when it is more important to facilitate the development of their own intellectual character than it is to get the truth, as it sometimes is in education.15
Case in point: we sometimes discourage undergraduates from relying too heavily on experts in formulating the arguments in their papers because we rightly want them to practice thinking for themselves – to exercise their own epistemic agency. Accordingly, the (unreliable) student who consistently fails to think independently, and consistently relies on the work of experts, even in contexts where it is more important for him to think independently than get the truth, will be deficient with respect to intellectual autonomy. Finally, although deficiencies of intellectual autonomy can correlate with excesses of intellectual interdependence, as they do in the cases above, they need not. Return, for the moment, to contexts in which it is more important for a student to think independently than get the truth, but now consider a student who neither relies on the experts in such contexts nor thinks for herself; instead, she just gives up. More broadly, consider a student (or any agent) who, whenever it comes time to think for herself, gives up rather than relying on her own faculties or on the faculties of others. These agents are failing to think for themselves at the right times, but without relying on the faculties of others at the wrong times. They are failing to think for themselves appropriately, but without inappropriately relying on others (in these cases or, we can stipulate, in any other cases). Although they are deficient with respect to intellectual autonomy, these deficiencies won’t be correlated with excesses of intellectual interdependence.
8.2 Intellectual Interdependence: The Trait, Excesses, and Deficiencies

8.2.1 The Trait of Intellectual Interdependence

This brings us to the trait of intellectual interdependence.16 Agents who have this trait are disposed to think interdependently – to think with other agents (and sources) and consult other agents (and sources).17 So, when the goal of inquiry is to arrive at beliefs about a particular matter (e.g., whether anthropogenic climate change is real, where the Sears Tower is located, whether I left my keys somewhere on campus), intellectually interdependent agents are those who want, and tend, to consult or collaborate with other sources in arriving at answers. This can include wanting to consult, collaborate with, or defer to other sources, alongside a tendency to rely on the faculties and reasoning of other sources, and/or a tendency to rely on other sources to collect and evaluate evidence, and/or a tendency to defer to other sources. Since intellectual inquiries can have goals that differ from, or go beyond, belief-acquisition, these claims generalize: when the goal of inquiry is generating questions, or brainstorming possible options, we can
likewise expect agents with the trait of intellectual interdependence to want, and tend, to think with other sources and rely on the cognitive faculties of other sources. To summarize this exploratory sketch, the trait of intellectual interdependence (TII) arguably consists in: (TII1) a behavioral disposition to think interdependently – to think with other agents and to rely on the cognitive faculties of other agents, and (TII2) a motivational disposition to want to behave in these ways. It consists in a combined behavioral-and-motivational disposition, which agents can manifest in inquiries that aim at belief-acquisition, and in the ways they conduct their intellectual inquiries more generally. To possess the trait, one must meet a basic threshold for having this combined behavioral-and-motivational disposition.

8.2.2 Excesses of the Trait of Intellectual Interdependence

The trait of intellectual interdependence won’t always be an intellectual virtue. One reason it won’t always be an intellectual virtue is that agents can also possess this trait to excess – they can be intellectually interdependent to a fault. Agents who possess the trait to excess will consistently think with others when it is inappropriate to do so: (very) defeasibly, they consistently think with others at (some of) the wrong times (e.g., when others aren’t reliable), or in (some of) the wrong ways (e.g., using practices that are unreliable, or that erode their own reasoning skills), or with respect to (some of) the wrong objects (e.g., projects doomed to fail). Warnings about excesses of intellectual interdependence are likewise replete in the literature, even if not couched in exactly these terms.18 For instance, Elizabeth Fricker (1994, 2006), and Roberts and Wood (2007, p. 271) both note that excessive trust of testimony can result in credulity and gullibility.
Kristoffer Ahlstrom-Vij likewise warns of the dangers of blind (uncritical) deference, whereby an agent is “equally likely to believe what his sources are telling him, whether they’re telling him something that’s true or something that’s false” and whether they are reliable or unreliable (2016, p. 8). In a similar vein, King notes the problems of outsourcing indiscriminately in order to avoid thinking for oneself (2020, p. 58). Borrowing King’s example, consider an agent who “doesn’t want to think for herself. Seeking an intellectual proxy, she joins a cult and lets herself be brainwashed into believing everything her leader says. She never thinks twice about the leader’s claims or asks for supporting evidence” (2020, p. 58). Applying our terminology, the agent who does this consistently has an excess of intellectual interdependence. For starters, she relies on the testimony of others at some of the wrong times: given that she is reliable, and the cult is not, she defers to a source that saddles her with (a preponderance of) false beliefs when she would have arrived
at (a preponderance of) true beliefs by thinking independently. She likewise ends up thinking in some of the wrong ways – even if she hadn’t been “brainwashed,” she would be relying on the cult’s (presumably) flawed methods of collecting and evaluating evidence, instead of using her own valid methods. Finally, on the assumption that the cult is engaged in a project that is doomed to fail – e.g., locating the secret meeting place of the Democrats’ child sex-trafficking ring19 – she likewise ends up thinking with respect to some of the wrong objects. Here, too, we can note that the (in)appropriateness of thinking interdependently won’t always turn on (un)reliability. Even when others are reliable and the agent is not, it will be inappropriate for the agent to outsource her beliefs to them, when doing so would erode the agent’s own reasoning skills or erode her epistemic agency, and thinking for herself would avoid such erosion. Echoing Matheson’s argument above, sometimes it is more important to avoid eroding one’s epistemic skills and agency than it is to get the truth. Accordingly, in contexts where it is more important to avoid eroding one’s mathematical skills than it is to get the correct answer, and where Googling the answer would erode one’s mathematical skills (and thinking for oneself wouldn’t), it would be inappropriate to outsource to Google. As Carter puts a similar point, too much outsourcing to other people or cognitive enhancements, and too little intellectual autonomy, can result in a form of learned helplessness, whereby “one is unable even if one tries, to direct one’s cognitive affairs, in the absence of the enhancement in question” (2020, p. 2945). To sum up the point: the unreliable agent who consistently outsources her beliefs to reliable others can still have an excess of intellectual interdependence.
She has such an excess insofar as her outsourcing erodes her cognitive skills or epistemic agency in contexts where avoiding such erosion (via thinking for herself) is paramount. Finally, although excesses of intellectual interdependence can correlate with deficiencies of intellectual autonomy, as they do in the cases above, they need not. Applying Zagzebski’s point to interdependence, consider an agent whose interdependence doesn’t shut off when he enters domains in which nobody is reliable. This agent doesn’t just defer to others in domains where he is unreliable and others are reliable, he also defers to others in domains where neither he nor they are reliable, and/or where he has evidence that neither he nor they are reliable. Rather than postpone his inquiry and abstain from forming beliefs in such cases, he defers to others. Such agents are inappropriately interdependent on others, even when it would also be inappropriate for them to think for themselves. Accordingly, excesses of intellectual interdependence won’t always be correlated with deficiencies of intellectual autonomy. Here, too, I leave it to vice epistemology to decide whether the excesses and deficiencies of intellectual interdependence are all and always intellectual vices, and whether they must be indexed to epistemic environments.20
8.2.3 Deficiencies of the Trait of Intellectual Interdependence

Just as having too much intellectual interdependence is a bad thing, so is having too little. Briefly, agents who are deficient with respect to the trait of intellectual interdependence will consistently fail to think with others when it is appropriate to think with others: (very) defeasibly, they will fail to think with others at (some of) the right times (e.g., when others are reliable), or in (some of) the right ways (e.g., using reliable methods), or with respect to (some of) the right objects (e.g., well-formulated projects). Deficiencies of intellectual interdependence can be correlated with excesses of intellectual autonomy. Consider an agent who consistently fails to rely on the testimony of others and consistently thinks for himself, even when others are reliable and he isn’t. In failing to rely on the recommendations of reliable experts and drawing his own conclusions about (e.g.) mask-wearing and COVID spread, this agent fails to think with others at some of the right times (e.g., when they are reliable and he isn’t) and in some of the right ways (e.g., ways that are reliable). If he also fails to rely on their judgments about the criteria for viable projects on mask-wearing – and in making his own such judgments, pursues ill-fated projects – he likewise fails to think interdependently with respect to the right objects. However, deficiencies of intellectual interdependence need not be correlated with excesses of intellectual autonomy. Consider an agent who, whenever it comes time to think with others, neither thinks with others nor thinks for himself; he just gives up. This agent fails to think interdependently at the right times, but without thinking independently at the wrong times. He fails to think with others appropriately, but without inappropriately thinking for himself (in these cases or, we can stipulate, in any other cases).
While such agents are deficient with respect to intellectual interdependence, these deficiencies won’t be correlated with excesses of intellectual autonomy.
8.3 The Virtues of Intellectual Autonomy and Intellectual Interdependence
Thus far, this chapter has sketched analyses of the traits of intellectual autonomy and intellectual interdependence, and argued that each of these traits can be had to excess. It has likewise sketched outlines of an agent whose autonomous thinking is excessive, and an agent whose interdependent thinking is excessive. The traits of intellectual autonomy and intellectual interdependence are not intellectual virtues when they are had to excess – indeed, we noted they might end up being intellectual vices. Nevertheless, these traits will sometimes be virtues. Adapting the framework in Whitcomb et al. (2021), this section argues that in order for them to be intellectual virtues, our agents must at least rein in excess,
164 Heather Battaly
without over-correcting. To accomplish this, our agents must possess and exercise good judgment, which is similar to Aristotelian practical wisdom. Good judgment enables agents to think for themselves appropriately, and think with others appropriately. But, even these dispositions of appropriate independent, and interdependent, thinking will still fail to be intellectual virtues when the agent's motivation for thinking in these ways is something other than epistemic goods. While this section proposes neither conceptually sufficient conditions for intellectual virtue, nor complete accounts of what the virtues of intellectual autonomy or interdependence consist in, it does argue that the traits of intellectual autonomy and interdependence won't be intellectual virtues unless the agents who possess them are motivated by epistemic goods, and possess and exercise good judgment. In this, it joins its sibling analyses from King (2020) and Matheson (2021).21
What is the virtue of intellectual autonomy – what dispositions does it consist in? It consists, at least partly, in the disposition to think for oneself appropriately. Agents who have this virtue avoid excessive autonomy – roughly, they avoid thinking for themselves at the wrong times, in the wrong ways, and with respect to the wrong objects – but without over-correcting. In other words, their intellectual autonomy is neither excessive nor deficient. So for starters, the virtue consists, at least partly, in:
(VIA1) a behavioral disposition to consistently think for oneself at the right times, in the right ways, and with respect to the right objects (avoiding deficiency), and to consistently think for oneself only at the right times, in the right ways, and with respect to the right objects (avoiding excess); and
(VIA2) a motivational disposition to want to behave in these ways.
This is a (combined) disposition to think for oneself appropriately. 
Now to pull this off, agents arguably need to possess the virtue of good judgment – they need something akin to Aristotelian phronesis. To put this point differently, agents aren’t likely to possess the virtue of intellectual autonomy unless they also possess the virtue of good judgment.22 Why not? As we saw in Section 8.2, context matters: (in)appropriateness won’t always be keyed to one’s (un)reliability (Matheson under review). This means that agents can know the contexts in which they are unreliable, and avoid thinking for themselves in those contexts, but still fail to possess the disposition above. To explain: agents who avoid thinking for themselves whenever they know they are unreliable will still be deficiently autonomous, since it is sometimes appropriate for them to think for themselves even when they are unreliable! Accordingly, to be disposed to think for themselves appropriately, agents will also need to know whether they are in contexts where the importance of truth is paramount, and their unreliability makes it inappropriate for them to think independently. Or, whether they are instead in contexts where it is more important for them to develop their reasoning skills by thinking independently.
Now, it won't always be easy to recognize contexts in which a competing epistemic value trumps the value of truth. It can be difficult to determine whether one is in a context in which it is appropriate to think for oneself. Fortunately, the virtue of good judgment is designed to solve this problem.23 Arguably, it is the job of good judgment to know which actions are appropriate in a given context (NE.1140a25-28, 1140b4-5, 1140b9-10). The present proposal is that agents will need to possess the virtue of good judgment in order to possess the disposition to think for themselves appropriately. Agents will need good judgment in order to discriminate contexts in which it is appropriate for them to think for themselves from contexts in which it isn't.
Something similar holds for the virtue of intellectual interdependence. This virtue consists, at least partly, in the disposition to think with others appropriately. Agents who have this virtue avoid excessive interdependence – roughly, they avoid thinking with others at the wrong times, in the wrong ways, and with respect to the wrong objects – but without over-correcting. In other words, their intellectual interdependence is neither excessive nor deficient. So, for starters, this virtue consists at least partly in:
(VII1) a behavioral disposition to consistently think with others at the right times, in the right ways, and with respect to the right objects (avoiding deficiency), and to consistently think with others only at the right times, in the right ways, and with respect to the right objects (avoiding excess); and
(VII2) a motivational disposition to want to behave in these ways.
This is a (combined) disposition to think with others appropriately. Here, as above, agents will need to possess the virtue of good judgment in order to possess this disposition. Good judgment enables them to recognize (e.g.) 
contexts in which it is more important for them to avoid eroding their reasoning skills than it is to get the truth – and thus appropriate for them to avoid outsourcing to others even though they are themselves unreliable. More broadly, the virtue of good judgment enables them to discriminate contexts in which it is appropriate for them to think with others from contexts in which it isn’t. To sum up the above, agents need the virtue of good judgment in order to think for themselves appropriately, and to think with others appropriately. But, even these dispositions of appropriate independent, and interdependent, thinking can fail to be intellectual virtues. As King rightly points out, if one’s ulterior motivation for thinking in these ways is selfish – if one thinks in these ways “just to make a splash” or “to make a name” for oneself – then one doesn’t have intellectual virtues (2020, p. 63). To be intellectual virtues, one’s dispositions to think appropriately need to be grounded in ulterior motivations for epistemic goods, such as motivations for truth, knowledge and understanding.24 So, in addition to possessing the virtue of good judgment, and the resulting dispositions to think for oneself (and with others) appropriately, agents must be motivated to think in these ways for the right reasons.
To be clear, the motivations in (VIA2) and (VII2) are good ones.25 They are good proximate motivations. If agents don't have these, they don't have the above dispositions to think appropriately. But, the present point is that agents can have good proximate motivations and have the above dispositions to think appropriately, while lacking ulterior motivations for epistemic goods. When this happens, they don't have intellectual virtues. To illustrate, compare two teams of scientists tasked with developing COVID-19 vaccines. Both teams have good proximate motivations (VII2) – both are disposed to want to think with other (teams of) scientists about this topic at the right times and in the right ways. And, both succeed in thinking with other (teams of) scientists about this topic at the right times and in the right ways – both satisfy (VII1). But, Team 1's ulterior motivation for doing all this is to make billions by selling their findings to the highest bidder (they are motivated by greed), whereas Team 2's ulterior motivation is to find out the truth about how to effectively vaccinate against COVID-19. Only Team 2's intellectual interdependence is still a candidate for intellectual virtue. Since Team 1 lacks an ulterior motivation for epistemic goods, their intellectual interdependence isn't intellectually virtuous; it is greedy. Note that if Team 1 had instead been ultimately motivated by the welfare of others, their intellectual interdependence would have been a candidate for moral virtue; and if Team 2 had instead been motivated both by epistemic goods and by the welfare of others, their intellectual interdependence would have been a candidate for intellectual and moral virtue. Putting all of the above together, we can propose the following accounts of what it is to have the virtues of intellectual autonomy and intellectual interdependence. 
(VIA) The intellectual virtue of intellectual autonomy (at least partly) consists in:
(VIA1) a behavioral disposition to consistently think for oneself at the right times, in the right ways, and with respect to the right objects (avoiding deficiency), and to consistently think for oneself only at the right times, in the right ways, and with respect to the right objects (avoiding excess);
(VIA2) a proximate motivational disposition to want to behave in the above ways; and
(VIA3) an ulterior motivational disposition to want to think for oneself appropriately (to have VIA1 and VIA2) because one cares about epistemic goods.
Likewise, (VII) the intellectual virtue of intellectual interdependence (at least partly) consists in:
(VII1) a behavioral disposition to consistently think with others at the right times, in the right ways, and with respect to the right objects (avoiding deficiency), and to consistently think with others only
at the right times, in the right ways, and with respect to the right objects (avoiding excess);
(VII2) a proximate motivational disposition to want to behave in the above ways; and
(VII3) an ulterior motivational disposition to want to think with others appropriately (to have VII1 and VII2) because one cares about epistemic goods.
How does this sketch compare with its closest kin? On King's view, the virtue of intellectual autonomy "requires thinking for ourselves while relying on others appropriately – neither too much nor too little" (2020, p. 59). King analyzes appropriate intellectual reliance on oneself and others in terms of right objects, right occasions, right means, and right motives. Similarly, Matheson proposes that the virtue of epistemic autonomy involves: cognitive dispositions to make good judgments about how and when "to rely on your own thinking, as well as, how, and when to rely on the thinking of others"; behavioral dispositions to conduct one's inquiries in line with such judgments; and motivational dispositions to do all of this because of one's love of epistemic goods (2021, p. 183). Clearly, these accounts are in the same family as (VIA) above. Matheson's account even embraces modest normative contextualism. Section 8.4 addresses one key point on which they differ: (VIA) divides the territory into two virtues rather than one. I take this to be a family squabble. Below, I argue that there are good reasons for separating the virtue of intellectual autonomy from the virtue of intellectual interdependence.
8.4 Objections and Open Questions
Readers are likely to have formulated at least three worries about the arguments above. To flag the first: one might worry that strong normative contextualism is neither plausible nor necessary. While a full assessment of the view is required, I have tried to motivate its plausibility by arguing that there are hostile epistemic environments in which being a loner is not a vice, and may be a virtue. Strong normative contextualism aims to capture the virtues and vices of agents in oppressive and hostile environments, both real and imagined. One key question, which would be included in a full assessment of the view, is whether it goes too far.
The second worry is that modest normative contextualism isn't viable because the properties of intellectual autonomy and interdependence are normatively thick. Their normative properties can't be stripped away, and so any attempts to give normatively neutral analyses of intellectual autonomy and interdependence will fail – they will smuggle in normative assumptions that make autonomy and interdependence virtues. In reply, the chapter has proposed a normatively neutral analysis of (TIA) and (TII), and has argued that these traits aren't always virtues. If the objector's response is to deny that the disposition to think for oneself, which
is captured by (TIA), is a case of intellectual autonomy, then we need to know what it is a case of, and why it doesn't count as intellectual autonomy. Relatedly, normatively thick accounts of intellectual autonomy and interdependence face an objection that modest normative contextualism avoids. Such accounts presuppose that intellectual autonomy is a virtue – they typically define intellectual autonomy in terms of appropriately thinking for oneself. The problem is that excessive thinking for oneself (which is by definition inappropriate) is then precluded from counting as autonomous thinking of any kind – it can't even count as inappropriate autonomous thinking because there is no such thing! In the words of Ahlstrom-Vij, it "fails to instantiate any kind of epistemic autonomy at all" (2013, p. 103). But, if excessive thinking for oneself isn't a kind of autonomous thinking, then what is it?
The third worry is an objection to dividing the virtue of intellectual autonomy from the virtue of intellectual interdependence. Before we address it, let's address the motivation for dividing the two. One reason to do so is that collapsing intellectual autonomy and interdependence into a single virtue leads to a counter-intuitive result. Suppose we collapse them, such that the virtue of intellectual autonomy requires a disposition to think for oneself and think with others appropriately.26 Further, following Ahlstrom-Vij (2013, p. 103), suppose it is usually appropriate for us to think with others and avoid thinking for ourselves (because we would have little knowledge if we didn't think with others and relied solely on thinking for ourselves). We would then count as being virtuously autonomous even though we spent most of our epistemic lives avoiding independent thinking and relying on the faculties of others. This is an odd result. Arguably, we aren't being virtuously autonomous; we are being virtuously interdependent. 
As Ahlstrom-Vij puts the point, “epistemic autonomy does not seem a particularly apt term … Indeed, one might even … suggest that it is a downright misleading term. A more appropriate term … given our epistemic dependence on others, would be epistemic deference” (2013, p. 104). Relatedly, adapting an example from Whitcomb et al. (2017, p. 259), imagine trying to advise a person who is deficient with respect to relying on the faculties and testimony of others. He rarely defers to, or even consults, others. Now, if the virtue of intellectual autonomy is a disposition to think for oneself and think with others appropriately, then we should advise this person to be more autonomous. But, again, that is an odd result. Arguably, we should advise him to be more intellectually interdependent, not more autonomous. So, there is reason to divide the two virtues. The objection to doing so, which will be familiar to readers of the humility literature, is that dividing them lands one with a different counter-intuitive result. Namely, one is saddled with claiming that agents can be virtuously autonomous and excessively (and viciously) interdependent at the same time. This is
the problem of arrogance (Whitcomb et al., 2017, p. 528) applied to autonomy and interdependence. In reply, first, it is possible to have the virtue of intellectual autonomy, while also being excessively interdependent. This is because excesses of interdependence need not correlate with deficiencies of autonomy. Recall our example from Section 8.2, in which nobody is reliable and in which it would be inappropriate for the agent to either rely on her own faculties or on the faculties of others – what she should do is abstain. Imagine an agent who inappropriately relies on the faculties of others in such cases, but doesn't inappropriately rely on her own faculties in such cases. When questions are extremely difficult, she isn't tempted to rely on her own faculties to answer them – she has learned not to do so, but she does still rely on the faculties of others. Now, add that this agent has learned how to rely on her own faculties all and only when it is appropriate to do so – she is neither deficient nor excessive when it comes to independent thinking. She is thus virtuously autonomous. But, since she hasn't yet learned to avoid relying on others in cases where nobody is reliable, she is simultaneously excessively interdependent. The key point: agents who inappropriately rely on the faculties of others in such cases need not be virtuously autonomous, but they can be. Second, though it is possible for an agent to be virtuously autonomous and excessively interdependent in this way, it isn't likely. This is because agents who make progress in appropriately thinking for themselves are also likely to make progress in appropriately thinking with others. As we saw in Sections 8.1 and 8.2, deficiencies of autonomy will often be correlated with excesses of interdependence, and excesses of autonomy will often be correlated with deficiencies of interdependence, even if they are not always correlated in these ways. 
Thus, agents who learn to overcome deficiencies of autonomy will often simultaneously be learning to rein in excesses of interdependence, and agents who learn to rein in excesses of autonomy will often simultaneously be learning to overcome deficiencies of interdependence. Accordingly, we can expect possession of the virtue of autonomy to be correlated with possession of the virtue of interdependence, even if it is possible to possess either of these virtues without the other. In closing, many open questions remain, including: (1) What is the trait of collaboration? What are the key differences between the virtue of collaboration and other virtues of intellectual interdependence, such as the virtues of deference and outsourcing? (2) What are the virtues of self-trust and trust of others? Should we expect these virtues to be correlated with the virtues of intellectual autonomy and interdependence, respectively?27 Finally, (3) should we expect the virtues of intellectual pride and autonomy, and of intellectual humility and interdependence to be correlated?28
Notes
1 Zagzebski (2009, p. 88), see also Zagzebski (2012, p. 52), Zagzebski (2020, p. 263).
2 We can likewise expect them to think for themselves in deciding which inquiries (intellectual goals) to pursue (Matheson under review).
3 This is defeasible in at least three ways. First, we will need to consider whether the category of "wrong objects" is ultimately reducible to "wrong times" and "wrong ways." Second, the parenthetical examples are but candidate placeholders, which we would need to qualify, and for which we would need to mount arguments. Matheson (2021, Section 8.5) examines several such candidates. Third, we can expect appropriateness to be context-dependent, which complicates the ability to offer concise non-defeasible examples.
4 Whether all degrees of excess, even relatively low ones, are enough to render a trait an intellectual vice is a matter for vice epistemology to decide.
5 See, for instance, Code (1991, Ch. 4) on the autonomous Cartesian knower; and Nguyen (2018) on direct epistemic autonomy.
6 See Battaly (2014), Cassam (2019), Crerar (2018), and Tanesini (2018).
7 Specifically, Baehr (2020, p. 26) argues that defective judgment can be sufficient for vice, but only when the agent is responsible for the defect. Above, I drop the responsibility requirement. See also the plot twist below: if what counts as an excess can change from one epistemic environment to another, then excesses will need to be indexed to the agent's epistemic environment.
8 If we endorse (NC2) below, we can add: where these excesses are indexed to the agent's environment. Note that lower degrees of excess might be intellectual vices, even if they are relatively minor offenders.
9 Cf. Kidd (2020, pp. 81–83).
10 Deficiencies of a trait would likewise be indexed to epistemic environments.
11 See Battaly (2018).
12 They consistently fail to think for themselves at times when it is right for them to think for themselves.
13 Fricker (2007). Marginalized persons may be at greater risk for deficiencies of autonomy than excesses of autonomy. On the importance of intellectual autonomy for resisting dominant narratives, see Grasswick (2019, p. 197).
14 Kang and Frenkel (2020).
15 See also Elgin (2013).
16 Code (1991, p. 111) helpfully uses the terminology of "cognitive independence" and "cognitive interdependence."
17 Grasswick (2019) also emphasizes thinking with others.
18 See also Pritchard (2016), which explores whether deferring to others prevents an agent's beliefs from counting as achievements of her own, and thus from counting as knowledge.
19 Kang and Frenkel (2020).
20 The analog of the trait of being an epistemic loner is the trait of being an epistemic follower.
21 Roberts and Wood (2007, p. 282) and Carter (2020) likewise argue that excessive autonomy is not virtuous, and that the intellectual virtue of autonomy involves good judgment. However, neither seems to think the virtue requires a motivation for epistemic goods.
22 Agents won't have the virtue of intellectual autonomy unless they also have another virtue, namely, good judgment. But, this doesn't mean that the virtue of intellectual autonomy consists in, or is identical to, the virtue of good judgment.
23 One worry about good judgment is that it is designed to magically solve this and other problems!
24 See Whitcomb et al. (2021); Baehr (2011, Ch. 6).
25 These motivations are good insofar as: (i) it is good to think for oneself (or with others) at the right times and in the right ways, and (ii) it is good to "be for" ends that are themselves good (Adams, 2006, Ch. 2).
26 See King (2020, p. 59); Roberts and Wood (2007, p. 259); and constitutive relational accounts of intellectual autonomy in Grasswick (2019) and Elzinga (2019).
27 See Dormandy (2020); Elzinga (2019); Tanesini (2020); and Zagzebski (2012).
28 I am grateful to Jonathan Matheson and Kirk Lougheed for comments, and for their patience.
References
Adams, R. M. (2006). A theory of virtue. Oxford: Clarendon Press.
Ahlstrom-Vij, K. (2013). Epistemic paternalism. London: Palgrave Macmillan.
Ahlstrom-Vij, K. (2016). Is there a problem with cognitive outsourcing? Philosophical Issues, 26, 7–24.
Ahlstrom-Vij, K. (2019). The epistemic virtue of deference. In H. Battaly (Ed.), The Routledge handbook of virtue epistemology (pp. 209–220). New York: Routledge.
Aristotle. (1984). Nicomachean ethics. In J. Barnes (Ed.), The complete works of Aristotle (pp. 1729–1867). Princeton, NJ: Princeton University Press.
Baehr, J. (2011). The inquiring mind. Oxford: Oxford University Press.
Baehr, J. (2020). The structure of intellectual vices. In I. J. Kidd, H. Battaly, & Q. Cassam (Eds.), Vice epistemology (pp. 21–36). London: Routledge.
Battaly, H. (2014). Varieties of epistemic vice. In J. Matheson & R. Vitz (Eds.), The ethics of belief (pp. 51–76). Oxford: Oxford University Press.
Battaly, H. (2018). Can closed-mindedness be an intellectual virtue? Royal Institute of Philosophy Supplements, 84, 23–45.
Carter, J. A. (2020). Intellectual autonomy, epistemic dependence and cognitive enhancement. Synthese, 197, 2937–2961.
Cassam, Q. (2019). Vices of the mind. Oxford: Oxford University Press.
Code, L. (1991). What can she know? Ithaca, NY: Cornell University Press.
Crerar, C. (2018). Motivational approaches to intellectual vice. Australasian Journal of Philosophy, 96(4), 753–766.
Dormandy, K. (2020). Introduction: An overview of trust and some key epistemological applications. In K. Dormandy (Ed.), Trust in epistemology (pp. 1–40). New York: Routledge.
Elgin, C. Z. (2013). Epistemic agency. Theory and Research in Education, 11(2), 135–152.
Elzinga, B. (2019). A relational account of intellectual autonomy. Canadian Journal of Philosophy, 49(1), 22–47.
Fricker, E. (1994). Against gullibility. In B. Matilal & A. Chakrabarti (Eds.), Knowing from words (pp. 125–161). Dordrecht: Kluwer.
Fricker, E. (2006). Testimony and epistemic autonomy. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony (pp. 225–250). Oxford: Oxford University Press.
Fricker, M. (2007). Epistemic injustice. Oxford: Oxford University Press.
Grasswick, H. (2019). Epistemic autonomy in a social world of knowing. In H. Battaly (Ed.), The Routledge handbook of virtue epistemology (pp. 196–208). New York: Routledge.
Kang, C., & Frenkel, S. (2020). "PizzaGate" conspiracy theory thrives anew in the TikTok era. New York Times. www.nytimes.com/2020/06/27/technology/pizzagate-justin-bieber-qanon-tiktok.html. Accessed December 8, 2020.
Kidd, I. J. (2020). Epistemic corruption and social oppression. In I. J. Kidd, H. Battaly, & Q. Cassam (Eds.), Vice epistemology (pp. 69–85). London: Routledge.
King, N. (2020). The excellent mind. Oxford: Oxford University Press.
Matheson, J. (2021). The virtue of epistemic autonomy. In J. Matheson & K. Lougheed (Eds.), Essays on epistemic autonomy. New York: Routledge.
Matheson, J. (Manuscript). Why think for yourself? https://philpapers.org/rec/MATWTF. Accessed November 2, 2020.
Nguyen, C. T. (2018). Expertise and the fragmentation of intellectual autonomy. Philosophical Inquiries, 6(2), 107–124.
Pritchard, D. (2016). Seeing it for oneself: Perceptual knowledge, understanding, and intellectual autonomy. Episteme, 13(1), 29–42.
Roberts, R. C., & Wood, W. J. (2007). Intellectual virtues. New York: Oxford University Press.
Tanesini, A. (2018). Epistemic vice and motivation. Metaphilosophy, 49(3), 350–367.
Tanesini, A. (2020). Virtuous and vicious intellectual self-trust. In K. Dormandy (Ed.), Trust in epistemology (pp. 218–238). New York: Routledge.
Whitcomb, D., Battaly, H., Baehr, J., & Howard-Snyder, D. (2017). Intellectual humility: Owning our limitations. Philosophy and Phenomenological Research, 94(3), 509–539.
Whitcomb, D., Battaly, H., Baehr, J., & Howard-Snyder, D. (2021). The puzzle of humility and disparity. In M. Alfano, M. P. Lynch, & A. Tanesini (Eds.), The Routledge handbook of the philosophy of humility (pp. 72–83). London: Routledge.
Zagzebski, L. T. (2009). On epistemology. Belmont, CA: Wadsworth.
Zagzebski, L. T. (2012). Epistemic authority. New York: Oxford University Press.
Zagzebski, L. T. (2020). Epistemic values. Oxford: Oxford University Press.
9 The Virtue of Epistemic Autonomy
Jonathan Matheson
9.1 Introduction
People should think for themselves. Arguably, one of the central goals of education is to equip students to think for themselves.1 However, like most cognitive projects, thinking for yourself can be done well and it can be done poorly. Consider two characters that demonstrate how thinking for yourself can go wrong.
Consider first the Maverick. The Maverick is an independent thinker, but the Maverick is intellectually independent to a fault. The Maverick refuses to rely on the vast intellectual resources that are afforded to him by others. Instead, the Maverick insists on figuring everything out for himself, and refuses to take anyone's word for anything. This is not an intellectually healthy life. In insisting on his intellectual independence, the Maverick comes to know very little and holds many mistaken beliefs. These mistakes would be more easily identified were the Maverick to get a little help from his friends, but alas this is not the Maverick's way of conducting his intellectual business.
Consider next the Codependent. The Codependent also lives an intellectually unhealthy life, but for different reasons. The Codependent has an unhealthy intellectual reliance on others. Perhaps fixated on their own intellectual shortcomings, the Codependent outsources nearly all of their intellectual projects. The Codependent nearly always defers to someone else in inquiry. They even defer about to whom they should defer. When asked what they think about a particular issue, they immediately turn to others for answers, not just assistance.
Between these two extremes lies a mean. The intellectually healthy individual manages their intellectual life well. They rely on others, when appropriate, but they also strive to understand things on their own and they do not shy away from doing the intellectual work themselves. 
Someone who manages their intellectual life well in these ways has the virtue of epistemic autonomy – they are a good epistemic executive who exhibits healthy intellectual interdependence. In what follows, I will both propose and motivate an account of the virtue of epistemic autonomy. In Section 9.2, I clarify the concept of
an intellectual virtue and character intellectual virtues in particular. In Section 9.3, I clear away some misconceptions about epistemic autonomy to better focus on our target. In Section 9.4, I examine and evaluate several extant accounts of the virtue of epistemic autonomy, noting problems with each. In Section 9.5, I provide my positive account of the virtue of epistemic autonomy and explain how it meets the desiderata for such an account while avoiding the problems with extant accounts. Finally, in Section 9.6, I fill the account out by digging into the factors that guide epistemically autonomous agents in having an appropriate reliance on their own thinking.
9.2 What Is an Intellectual Virtue?
A great deal of work has been done in virtue epistemology to identify and analyze numerous intellectual virtues.2 While exhibiting epistemic autonomy seems to be foundational to one's life as an epistemic agent, relatively little work has been done to understand epistemic autonomy as an intellectual virtue.3 Before proposing my own analysis of this virtue, it will be helpful to give some preliminary remarks about intellectual virtues in general.
Virtue epistemologists distinguish between two kinds of intellectual virtues: faculty virtues and character virtues. Faculty virtues are cognitive processes that reliably bring about good epistemic ends.4 Having good vision or a good memory are paradigm examples. The epistemic value of faculty virtues is instrumental; they are valuable in terms of what they bring about (true beliefs, knowledge, etc.). Faculty virtues need not be personal, acquired, or accompanied by any particular motivation. Character virtues, in contrast, are acquired cognitive character traits that are accompanied by particular motivations.5 Here, paradigm examples are intellectual humility and open-mindedness. The epistemic value of character virtues is at least partially intrinsic; they are valuable ways for an epistemic agent to be in themselves, though they may also bring about other things of value. A character virtue is a character trait of a good inquirer, traits that make an epistemic agent better epistemically. They are character traits that help agents acquire, maintain, and distribute epistemic goods like true beliefs, knowledge, and understanding. In addition, motives matter for character virtues. 
Good intellectual character traits are motivated by a love of the truth – by appropriately caring for what is epistemically valuable.6 So, following Jason Baehr, we can understand a character virtue as "a character trait that contributes to its possessor's personal intellectual worth on account of its involving a positive psychological orientation toward epistemic goods" (2011, p. 102). In what follows, I will be treating epistemic autonomy as a character virtue.
The Virtue of Epistemic Autonomy 175
9.3 Misconceptions
Before advancing a positive account of the virtue of epistemic autonomy, it is important to clear away some common misconceptions about epistemic autonomy and what it entails, and to make an important clarification. There are two important misconceptions to clear away: that epistemic autonomy entails intellectual independence, and that epistemic autonomy is committed to some version of doxastic voluntarism. Let's take each of these misconceptions in turn. Some philosophers have identified epistemic autonomy with the independent intellectual life of the Maverick. For instance, John Hardwig (1985) claims, "If I were to pursue epistemic autonomy across the board, I would succeed only in holding uninformed, unreliable, crude, untested, and therefore irrational beliefs."7 On this view, an individual is epistemically autonomous to the extent to which they are intellectually independent. While such an intellectual life is defective, we should resist seeing it as the epistemically autonomous life. Such a picture of epistemic autonomy is a relic of a Cartesian conception of epistemology with overly individualistic epistemic ideals.8 Such a picture of epistemology, and of epistemic agents, ignores the insights of social epistemology. To see that epistemic autonomy is not intellectual independence, let us first think about autonomy more generally. Autonomy has received much more attention within the realms of moral and political philosophy. According to Joseph Raz, the autonomous person determines the course of their own life (1988, p. 407). Here, too, autonomy is not to be equated with independence. The hermit is not the paradigm autonomous agent. While the hermit lives an independent life, there are ways of determining the course of one's own life while relying on others. In fact, a healthy reliance on others only seems to increase one's autonomy.
Autonomous citizens are afforded the benefits that come from an interdependent society while remaining free to live their lives as they see fit.9 In the same way, intellectual independence is not the paradigm of epistemic autonomy. Epistemically autonomous agents manage their own intellectual lives, but not at the cost of forgoing the intellectual resources of others. Following Heidi Grasswick (2018), we can identify two central ways in which epistemically autonomous agents depend upon others to manage their intellectual lives. First, autonomous agents developmentally depend upon others. Without others, individuals would not develop the cognitive resources required for autonomous inquiry and deliberation. From the start, we rely on others to nurture our autonomous capacities. We acquire both the intellectual tools and the skills to use those tools in inquiry from others.10 The epistemic agent is not self-made. Second, our epistemic autonomy constitutively depends upon others. Exercising autonomous thinking often requires intellectually engaging with others, thinking about alternative perspectives (whether real or imagined),
and seeing ourselves as answerable to others for our reasoning.11 So, not only is epistemic autonomy not committed to intellectual independence, but agents actually become more epistemically autonomous when they rely on others in the right ways. One might resist this account of interdependent autonomy as a mean between the Maverick and the Codependent by noting that there is something amiss with instructing the Maverick to "be more autonomous." If epistemic autonomy really is a mean between the extremes of intellectual independence and intellectual codependence, however, then such a reprimand would be accurate. In being too intellectually independent, the Maverick must be more autonomous; being more autonomous would be a move toward the mean. The fact that something seems amiss with such instruction may be thought to indicate that epistemic autonomy really is more about intellectual independence than some mean between the extremes of independence and servility. While there is admittedly something strange about telling the Maverick to be more autonomous, it is not without an explanation. It is worth noting that such an oddity is not unique to the virtue of epistemic autonomy. There is a similar oddity in encouraging someone who is rash to "be more courageous" or someone who is servile to "be more humble."12 While a similar oddity exists in such reactions, this is no reason to think that courage and humility are not themselves means between two extremes (rashness/cowardice and servility/pride respectively). What then explains what is amiss with such prescriptions? Plausibly, the oddity here is explained by the fact that in general we tend to err on the side of the other extreme in each of these cases. Regarding courage, people tend to err on the side of deficiency, not excess. Similarly, people tend to err on the side of pride, not servility. It also seems plausible that individuals tend to fail to think for themselves as they should.
After all, as children we all begin our intellectual lives like the Codependent, taking on everything that we hear. It is only as we mature intellectually that we begin to take a more active role in our intellectual lives. Given these asymmetries, it makes sense that instructions to be "more humble," "more courageous," and "more autonomous" tend to imply a move toward the side of excess on the relevant spectrums. What this shows is that it is important to distinguish between traits and virtues. The Maverick lacks the virtue of epistemic autonomy because he has the trait of epistemic autonomy to an excess – he excessively relies on his own intellectual efforts. The Maverick must have less of the trait of epistemic autonomy in order to have the virtue of epistemic autonomy. So, telling the Maverick to be 'more autonomous' sounds off since it is natural to read the kind of autonomy at issue there to be trait-autonomy, and the Maverick needs less of that in order to obtain the virtue of autonomy. Having addressed the first misconception about epistemic autonomy, let us turn to the second. It is also a mistake to believe that epistemic
autonomy requires exercising significant control over one's beliefs. While epistemic autonomy requires having executive control over one's intellectual life, and while beliefs are a central feature of one's intellectual life, it is a mistake to maintain that epistemic autonomy requires having significant control over one's beliefs. Here, too, we can gain insight by thinking about autonomy more generally. The concept of autonomy is closely connected to the concept of responsibility, and to be responsible, agents must exhibit significant control over their lives. However, while autonomous individuals determine the course of their own life, this does not entail that they control every aspect of their lives. Sometimes autonomous choices don't turn out as the agent envisioned. This shows that we must be careful in determining what it is that an autonomous agent is autonomous about. Autonomous agents are autonomous in the choices that they make and the actions that they take. They are not autonomous in how those choices and actions work out in the world. For instance, Sam autonomously applies for a job, but Sam does not autonomously get the job. The fact that Sam is not in control of whether his application is successful does not indicate that the application was not autonomously made. Returning to the intellectual realm, we must be careful here too in assessing what it is that epistemic agents are autonomous about. A natural candidate here is their beliefs; autonomous agents are in control of their beliefs. After all, the central business of epistemic agents is to acquire true beliefs while avoiding false ones. However, it is contentious, at best, that we exercise significant control over our beliefs, and it is implausible that we directly control our beliefs by will. To the extent that we do control our beliefs, we control them only indirectly by controlling the actions that lead us to those beliefs.
For this reason, several philosophers have instead viewed epistemic agents as autonomous over what they accept, rather than what they believe.13 Following Cohen (1992), a subject believes p just in case they are disposed to feel that p is true (and that not-p is false). In contrast, a subject accepts p (in a context) just in case they treat p as a given (in that context) as a matter of policy. To treat p as a given is to be willing and able to deploy p as a premise. For our purposes, the salient difference between what one believes and what one accepts is that the latter is under the control of the agent (at least in a much more straightforward way). An individual can directly will themselves to adopt certain policies, whereas the same is not true for taking on feelings of truth. So, acceptance seems like a better candidate for the application of epistemic autonomy than belief. However, epistemic autonomy must extend beyond what a subject accepts. A subject who is unencumbered in what they accept can still be significantly lacking in epistemic autonomy. In managing their intellectual lives, epistemically autonomous individuals must have significant control over their inquiry. Autonomous agents
control both the objects of their inquiry as well as their method of inquiry. That is, autonomous agents control how they conduct their inquiry. In particular, they control their own level of involvement in the process of inquiry. They control when to think for themselves and when to defer to someone else. Beliefs are best seen as the outputs of inquiry. Like a successful job application, individuals exercise a much greater degree of control over the actions leading up to the output (inquiry, filling out the application) than they do over the output itself (belief, getting hired). While we may want to extend an individual's accountability to the outputs as well, it should be clear at least that the primary targets of our evaluation are the actions that are more directly under the agent's control, and that at best agents only have derivative control over the outputs. Having cleared away these two misconceptions about epistemic autonomy, it is important to make a clarifying distinction. Epistemic autonomy can properly be viewed both as a right, or freedom, and as a character virtue. As a right, or freedom, epistemic autonomy consists of freedom from interference in inquiry. It is this sense of epistemic autonomy that is relevant to questions regarding epistemic paternalism, and whether it can be permissible to interfere with the inquiry of another without their consent.14 As a character virtue, epistemic autonomy is a nurtured character trait of epistemic agents, an epistemically good way to be, that also comes with proper motivation. It is important to see that "epistemic autonomy" is picking out two different things with these distinct uses, even if both pick out some kind of epistemic ideal. C.A.J.
Coady (2002) sees epistemic autonomy (what he calls "intellectual autonomy") as an epistemic ideal that blends elements of both senses of "epistemic autonomy." On Coady's account, epistemic autonomy has three core components: independence, self-creation, and integrity. According to Coady, independence is a kind of negative freedom – freedom from interference in one's inquiry. This is a non-domination requirement for epistemic autonomy. Autonomous thinkers are not required to cognitively conform to the powers that be. Along these lines, Coady sees independence as a kind of freedom to develop mastery, or expertise, in the areas that one sees fit. Self-creation, the second component of epistemic autonomy, is a kind of positive freedom – the freedom to create a distinctive intellectual life of one's own. Autonomous thinkers order their intellectual lives in ways that they see fit (266). This component of autonomy amounts to prioritizing one's intellectual projects in a way that aligns with one's values and interests. Finally, integrity is the idea of standing up for truth, even in circumstances where this will result in negative outcomes (363). Integrity amounts to not folding to external intellectual pressures. So, Coady's account of epistemic autonomy blends aspects of epistemic autonomy as a right, or freedom, with aspects of autonomy as a character virtue. Coady's conditions of independence and self-creation
are both conditions for a certain type of intellectual freedom, a kind of intellectual right that epistemic agents have (even if it can sometimes be outweighed). While freedom from interference in one's inquiry is good, such a freedom has little to do with any character trait of the individual inquirer. Whether one is free to inquire as they see fit will depend upon their external environment and their intellectual community. This carries over to Coady's comments on self-creation as well. Coady is focused on a freedom to create one's own intellectual life. However, here too, such freedom has more to do with external circumstances than it does with any internal traits of the subject. In contrast, the condition Coady calls "integrity" is something much more akin to an intellectual character virtue. This condition does concern the character of the subject, and is something that is cultivated. That said, intellectual integrity, while important and valuable, does seem to be separable from epistemic autonomy understood as a character trait. Autonomous thinkers may be lacking in intellectual courage or perseverance. While these other cognitive traits are related to epistemic autonomy, it is best not to think of them as constitutive components of epistemic autonomy. Our focus in what follows is on epistemic autonomy understood as a character virtue. So, the rights or freedoms associated with epistemic autonomy are not at issue here. Here, we are concerned with intellectual dispositions that typify an epistemically autonomous agent. It is important to emphasize this distinction so as to focus our attention on the stated target and not the rights or freedoms that go by the same name.
9.4 Alternative Accounts
With these clarifications in hand, let us turn to evaluating some extant accounts of epistemic autonomy as an intellectual virtue. While there is reason to resist each of these accounts, our exploration can reveal important insights regarding this intellectual virtue and will help motivate my positive account. Roberts and Wood (2007) have given perhaps the central account of epistemic autonomy as an intellectual virtue, and are thus a fitting place to start. On their account, epistemic autonomy consists in resisting "alien hetero-regulators" and having a positive relationship with "proper" hetero-regulators (277, 285). Alien, or improper, hetero-regulators are intellectual principles or directives that the subject has no commitment to, that are extraneous to the subject's purposes, and that have not been internalized or "made one's own" (284). In contrast, an individual's "proper" hetero-regulators are intellectual principles or directives that have been internalized and "made one's own." Proper hetero-regulators are understood by the subject (278), and have been "actively and intelligently" appropriated into the subject's noetic structure (285). Roberts and Wood identify one's intellectual tradition, teachers, peers, colleagues, critics,
models, sanctioners, and authorities that one is happy to acknowledge as proper hetero-regulators (285). So, which hetero-regulators are alien (improper) and which are not is relative to an individual's outlook and motivation (284). While Roberts and Wood's account of epistemic autonomy does capture how epistemically autonomous individuals can depend upon others, it is insufficient both as an account of epistemic autonomy and as an account of an intellectual virtue. The account fails as an account of epistemic autonomy because it fails to address how individuals have come to internalize and accept their (proper) hetero-regulators. The mere fact that one has internalized a principle or directive is insufficient for their following of it to be autonomously done. For instance, the internalization of the relevant principles and directives may have come by way of indoctrination. While the victim of indoctrination is committed to certain principles, has values and purposes in line with those principles, and has made those principles "their own," such a person is not epistemically autonomous. Marina Oshana (2008) gives a helpful example here. She describes Harriet, who is a subservient spouse and homemaker, but prefers to be subservient. She finds her life gratifying and has no wish to change it. However, her desire to be subservient comes from a socially reinforced belief that she is inferior to her husband. Even if Harriet has internalized these principles and gender norms, "making them her own," she is lacking in autonomy due to the conditions under which she came to internalize these principles. We can add to the story that in addition to listening to the hetero-regulators that she endorses, Harriet resists alien hetero-regulators. Such alien hetero-regulators may even be voices from the women's liberation movement. After all, the reason these voices are alien to Harriet is due to socially reinforced beliefs about her inferiority as a woman.
Despite meeting Roberts and Wood’s conditions for epistemic autonomy, however, Harriet is not epistemically autonomous. She is a victim of indoctrination.15 Roberts and Wood’s account also fails as an account of an intellectual virtue since the intellectual trait they describe is not itself an intellectual excellence. There are several problems here. First, on Roberts and Wood’s account of epistemic autonomy, all that matters, is whether the subject has (or has not) internalized some intellectual standard or norms. This ignores whether the subject should have internalized those standards or norms.16 Good epistemic agents are careful to listen to the right voices, not simply the voices that they happen to endorse or identified with. Having endorsed the voice of the Guru (without good reason) does not entail that it is epistemically good to follow it. This leads to the second problem with seeing the trait picked out by Roberts and Wood as an intellectual virtue. An epistemic agent who conducted their intellectual business by resisting “alien hetero-regulators” and by
being guided by "proper hetero-regulators" would not lead an intellectually healthy life. By dismissing "alien" voices and listening only to the voices that the subject has endorsed, such agents would create echo chambers, a kind of epistemic tribalism, and become even more susceptible to belief polarization. Such an intellectual life ignores important challenges that come from other perspectives; challenges that should be considered. Such an intellectual life is epistemically unhealthy; it is not intellectually virtuous. Kyla Ebels-Duggan (2014) gives an alternative account of epistemic autonomy as a virtue. Ebels-Duggan is focused on determining whether and why educators should be facilitating epistemic autonomy in their students. She takes the problems exhibited in students that a focus on autonomy is meant to address to be (i) overconfidence and (ii) a lack of positive conviction. Ebels-Duggan finds a remedy for these vices in the intellectual virtues of charity and intellectual humility. As such, Ebels-Duggan advocates reinterpreting autonomy as simply charity and intellectual humility. So, on this view, aiming to foster autonomy in students simply amounts to aiming to foster charity and intellectual humility in them. While this account does correctly classify epistemic autonomy as an intellectual virtue, it fails to capture anything that is unique about epistemic autonomy – anything that distinguishes it from other intellectual virtues. On this account, epistemic autonomy is simply a combination of charity and intellectual humility. While charity and intellectual humility are epistemically valuable, such an account fails to afford any particular epistemic value to epistemic autonomy. Ideally, an account of epistemic autonomy as an intellectual virtue ascribes it some unique epistemic value that is not fully subsumed by other intellectual virtues.
Finally, Linda Zagzebski (2013) gives an account of epistemic autonomy, what she terms "intellectual autonomy," on which it is foundational to the intellectual virtues.17 On her account, epistemic autonomy is "the right or ideal of self-direction in the acquisition and maintenance of beliefs" (259). This cognitive executive function is motivated by the desire for truth, and other intellectual goods, something that Zagzebski maintains we desire naturally. On Zagzebski's account, autonomous thinkers conscientiously attempt to resolve cognitive dissonance and produce cognitive harmony in its place. Epistemic autonomy can be impeded both externally and internally. It is externally impeded when there is outside interference in one's intellectual pursuits. It is internally impeded when one's cognitive states are not sufficiently controlled by conscientious self-reflection. Managing one's cognitive life in this way, Zagzebski claims, is foundational to rationality itself (259). Conscientious self-reflection, as Zagzebski argues elsewhere (Zagzebski 2015), can call for one to outsource their beliefs. If one determines that another epistemic agent is more likely to uncover the truth, conscientiousness calls for deference rather than independent inquiry.
182 Jonathan Matheson Zagzebski’s account of epistemic autonomy captures the ideal of autonomy consisting of good executive management of one’s intellectual endeavors. Her account also builds in the proper motivation that guides such control – a love of truth and other intellectual goods. Such a motivation seems requisite for any intellectual (character) virtue. A problem for the account, however, is the central role that conscientiousness plays. In making conscientiousness the centerpiece of epistemic autonomy, Zagzebski has it that doing one’s best in intellectual executive management is sufficient for doing well. Conscientiousness, for Zagzebski, is using your cognitive faculties as best you can to get to truth. Unfortunately, sometimes people’s best efforts simply aren’t enough. The virtue of intellectual autonomy, the intellectual excellence, consists in managing one’s intellectual endeavors well, not simply doing one’s best.18 Since one can err in evaluating their reasons and making determinations based on their reasons, even one’s best efforts can fall short. The same holds for other virtues. Being honest requires more than trying your best to be honest. Being courageous requires more than trying your best to be courageous. So, while Zagzebski’s account has the right target, cognitive management motivated by a love of truth and other epistemic goods, it does not unpack this trait in the right way. Zagzebski’s account is too subjective. This critical survey of extant accounts of the virtue of epistemic autonomy provide the foundation for a positive account. In the next section, we will lay out a novel account of epistemic autonomy and note its virtues.
9.5 The Account and Its Virtues
The above considerations reveal some desiderata for an account of the virtue of epistemic autonomy (EA):
D1. The account accommodates intellectual interdependence and is compatible with a social epistemology.
D2. The object of epistemic autonomy is something over which individuals exhibit significant control.
D3. The account meets the conditions of an intellectual character virtue.
D4. Epistemic autonomy is distinct from other intellectual virtues and has distinctive value.
D5. Exercising epistemic autonomy requires making (objectively) good choices in one's intellectual endeavors.
These desiderata lead to the following account of the character virtue of epistemic autonomy: The character virtue of EA characteristically involves the following dispositions:
1. [cognitive] to make good judgments about how, and when, to rely on your own thinking, as well as how, and when, to rely on the thinking of others,
2. [behavioral] to conduct inquiry in line with the judgments in (1), and
3. [motivational] to do so because one loves the truth and appropriately cares about epistemic goods.19
So construed, EA has cognitive, behavioral, and motivational components. Being epistemically autonomous requires making good judgments about how to balance a reliance on your own thinking with a reliance on the thinking of others, behaving in line with those judgments, and so doing because of one's love of the truth and other epistemic goods. This characterization of epistemic autonomy meets each of our desiderata while avoiding the flaws in alternative accounts.
D1. The account accommodates intellectual interdependence and is compatible with a social epistemology.
According to EA, epistemically autonomous individuals manage the way that they conduct their intellectual projects, and they do so well. Part of what is involved in managing one's intellectual projects well is determining when to think for oneself and when to more heavily rely on the intellectual efforts of others. So, exercising EA will involve a give and take with one's epistemic community. Individuals who exhibit EA will exercise a healthy intellectual interdependence. Epistemically autonomous individuals are not intellectual free-riders. Rather, epistemically autonomous agents are contributing members in the intellectual division of labor. In addition, exercising EA involves utilizing, not ignoring, the vast intellectual resources afforded by others. EA thus fits nicely within a fully social epistemology that acknowledges our intellectual interdependence.
D2. The object of epistemic autonomy is something over which individuals exhibit significant control.
According to EA, epistemic autonomy is exercised with respect to how one conducts inquiry. It involves judgments about inquiry, as well as the behaviors undertaken in inquiry. Judgments, and the associated behaviors in inquiry, are things over which we exercise significant control. So, the objects of our autonomous control, according to EA, are things over which we exercise significant control. EA does not require that we have significant control over our beliefs and is not committed to any form of doxastic voluntarism.
D3. The account meets the conditions of an intellectual character virtue.
This desideratum is important to satisfy in order to distinguish epistemic autonomy as an intellectual character virtue from epistemic autonomy as an intellectual right, or freedom. EA clearly picks out characteristics of epistemic agents, and these characteristics meet the conditions for a character virtue. Recall Jason Baehr's account of a character virtue: "a character trait that contributes to its possessor's personal intellectual worth on account of its involving a positive psychological orientation toward epistemic goods" (2011, p. 102). The characteristics picked out in EA are character traits of an individual, and they are traits that contribute to their possessor's personal intellectual worth. Someone who makes good judgments about how to rely on themselves, and others, in inquiry is intellectually better off for it. Further, such a character needs to be developed and nurtured, so it is also attributable to its possessor. Finally, proper motivation is explicitly built into the account itself. EA builds in an epistemically appropriate motivation that guides the subject's judgments and behaviors in inquiry. Agents who think for themselves out of intellectual pride, or defer to others out of intellectual cowardice, do not exercise EA.
D4. Epistemic autonomy is distinct from other intellectual virtues and has distinctive value.
While EA picks out a trait that is no doubt related to other intellectual virtues, it does pick out a unique epistemic excellence. Plausibly, EA is closely related to many intellectual virtues: intellectual humility, open-mindedness, intellectual perseverance, intellectual charity, and intellectual courage, among others. Some other intellectual virtues also concern executive decisions regarding inquiry. These managerial decisions regard which intellectual projects to pursue, as well as when, and how long, to pursue them.
Of relevance here are treatments of curiosity,20 inquisitiveness, intellectual perseverance,21 and the love of knowledge.22 However, the particular epistemic excellence picked out by EA is not fully captured by any of these other executive intellectual virtues. These other intellectual virtues do not concern an individual's proper reliance on themselves, as well as others, in inquiry. Rather, these other executive intellectual virtues concern which questions to pursue in inquiry, as well as when to take them on, and how long to pursue them. The intellectual virtue of EA comes "downstream" of the execution of these other executive virtues. Once it has been determined that I should now take on a particular inquiry, I then must exercise EA in determining whether, and how, I should rely on my own thinking (as well as the thinking of others) in conducting the inquiry in question. So, while many intellectual virtues concern how one manages their intellectual life, EA picks out a particular kind of intellectual management not covered by these other characteristics.
The unique function of EA comes with a unique epistemic value. In managing one's reliance on oneself and others in their intellectual projects, individuals who exhibit EA get the most out of their intellectual efforts while contributing to the intellectual division of labor. We will explore this in more detail below, when we examine the different factors that go into determining the proper balance in any given inquiry.
D5. Exercising epistemic autonomy requires making (objectively) good choices in one's intellectual endeavors.
To exhibit EA, one must make good choices in conducting their inquiry. Particular to EA are good choices regarding one's reliance on one's own thinking and one's reliance on the thinking of others. Exercising EA requires making good determinations regarding when to think for oneself, when to defer, and to whom to defer when deferring. Doing one's best in these matters is not sufficient for doing well. Similarly, exercising EA is not simply listening to the voices that one happens to endorse, while ignoring the voices that one finds "alien." Exercising EA requires making (objectively) good choices about when to defer as well as to whom to defer. Sometimes that will require a shift in which voices one listens to and which voices one ignores. While the judgments and behaviors characteristic of EA must be objectively good, this does not entail that they ignore the subject's situation. While these judgments and choices must be objectively good, what makes them objectively good can still be sensitive to the subject's particular situation. For instance, evidentialism is a claim about when a doxastic attitude is objectively justified. Evidentialism gives an objective standard for epistemic justification. However, that standard is for individuals to believe in accordance with their evidence.
Since individuals differ in terms of their total evidence, evidentialism can call for different responses from individuals with different bodies of evidence. So, regarding evidentialism, the objective standard is informed by the individual states of the subject. Similarly, while there are objectively better and worse judgments about how to rely on one’s own thinking, these objective standards can be sensitive to an individual’s particular epistemic position. What makes it an objectively good choice to defer to a particular individual may depend upon the subject’s body of evidence or some other subjective feature(s).
9.6 Exercising EA – The Factors

While we have seen how EA captures the desiderata of an account of epistemic autonomy while avoiding the pitfalls of alternative accounts, more is to be desired in terms of the details. EA remains rather "hand-wavy" with regard to the relevant judgments about inquiry. The judgments regarding how to rely on oneself and others in inquiry must be
186 Jonathan Matheson

(objectively) good judgments, but what makes such judgments good? What considerations guide an epistemic agent who exercises EA? Which factors are relevant in determining whether an epistemic agent should defer or deliberate? Several factors are worth highlighting here, though this list should be seen as representative of the types of factors, not exhaustive. Let's begin by considering the kinds of factors that would incline an epistemically autonomous agent to defer to someone else.

9.6.1 Knowledge

Knowledge is epistemically valuable, and a great deal of our knowledge has been acquired through deference. Given our limited time and resources, we would be able to know very little if we could only rely on our own inquiry.23 The minds of others are a great resource, since collectively we are able to cognitively multi-task in ways that no single individual could. The intellectual division of labor opens up more knowledge to each of us. Further, some knowledge we would not be able to attain even without these limitations. My prospects for figuring out astrophysics are dim, and it's not just because I don't have enough time to work on it. Some knowledge requires more skill and acumen than I could ever develop. So, valuing knowledge can give us a reason to defer. For many questions that we pursue, others have already undergone the inquiry, are intellectually better suited to conduct the inquiry, or have better resources for conducting it. In such situations, we have reason to defer. Deferring is the better way to attain the desired knowledge.

9.6.2 The Epistemically Best Available

Deferring does not always promise us knowledge. Some questions are rather novel, and so our evidential base is not good enough to give us knowledge, or the truth is yet to be uncovered. Other questions are sufficiently contentious, and the controversy precludes our coming to know the answer.
However, even when knowledge isn't on the table, in deferring to the experts we are relying on those who are in the best epistemic position on the matter. While even the best can be mistaken, or have insufficient evidence, they remain our best bet for navigating the world. The epistemic position of experts is much better than that of novices. Experts have more, and better, evidence, and experts are better equipped to evaluate that evidence. Believing from a better epistemic position is an epistemic improvement. So, even when knowledge isn't available, deferring to the experts can still be epistemically valuable – it can amount to relying on a better epistemic perspective.
9.6.3 Epistemic Harm and Injustice

In some cases, a failure to take someone at their word constitutes an epistemic harm or injustice. On plausible accounts of testimony, in telling a hearer something, speakers invite the hearer to trust them.24 Such invitations offer the hearer assurance that the speaker is in a good epistemic position on the matter and that they vouch for the truth of what they say. So, to refuse such an invitation to trust, the hearer must have good reason to do so: they must have justified doubts about the sincerity or credibility of the speaker.25 In situations where the hearer is justified in believing that the speaker is both sincere and credible, yet refuses to take their word for it, the hearer epistemically harms the speaker by refusing, without good reason, their invitation to trust. In such situations, an insistence on thinking for oneself about the matter, by obtaining and evaluating the evidence for oneself, is epistemically harmful to the speaker. A special instance of such harm occurs in cases of epistemic injustice. Epistemic injustice occurs when an individual is harmed in their capacity as a knower owing to some identity prejudice.26 Since a hearer's prejudice can be the reason why they want to evaluate the matter for themselves rather than trust a speaker, autonomous deliberation can result in epistemic injustice.

We have seen some factors that would incline an epistemically autonomous individual to defer, but what considerations would incline such an individual to think for themselves? Here are some of the foremost reasons to think for oneself.

9.6.4 Expertise

While you are not an expert regarding many of the things you think about, sometimes you are. When a matter comes up that is within your area of expertise, there is good reason for you to think about it for yourself. Doing so is doing your part in the epistemic division of labor. Novices rely on your expert opinion.
Even if you are not the only expert capable of providing an answer, having experts independently evaluate a matter within their area of expertise is a valuable epistemic resource for the intellectual community.27 Independent expert assessment of the relevant evidence is a social epistemological good.28 Communities of truth-seeking agents benefit from experts autonomously deliberating about matters of their expertise. Consensus amongst autonomously inquiring experts is a more reliable guide to truth than consensus in an intellectual community where the experts defer on matters of their own expertise. Independently arrived at agreement amongst the experts is powerful evidence that their shared conclusion is true. This is exhibited when multiple independent doctors all diagnose the symptoms as resulting from the same underlying
condition. Such agreement, when independently arrived at, is more powerful evidence than if the doctors had made the diagnosis collaboratively. So, regarding matters within one's area of expertise, there is reason to autonomously deliberate. Doing so brings about a valuable social (epistemological) good by providing a better epistemic resource for the community.29

9.6.5 Understanding

Understanding is an epistemically valuable state, a state whose epistemic value outstrips the epistemic value of knowledge.30 Further, understanding requires thinking for oneself. Zagzebski puts the point this way:

understanding cannot be given to another person at all except in the indirect sense that a good teacher can sometimes recreate the conditions that produce understanding in hopes that the student will acquire it also. (Zagzebski 2009, p. 146)

Understanding requires more than taking someone else's word for it, even when there are excellent reasons to believe them. Understanding requires possessing the relevant first-order evidence and seeing for oneself how it supports the proposition in question. So, when understanding is on the table, there is a good reason to think for oneself. In thinking for oneself and coming to understand, one improves one's epistemic position. In addition to understanding the answer to one's question, thinking for oneself can also lead to understanding the debate surrounding the issue. Even when one fails to understand the answer, one can come to appreciate the landscape of the debate, having become familiar with the types of considerations on different sides of the issue.
This too is of epistemic value, and this value is not attained when one simply takes the answer from someone else.31

9.6.6 Managing New Evidence

In thinking for oneself, and wrestling with the relevant first-order evidence, one can become better positioned to revise one's belief (or level of confidence) in light of new evidence.32 When one is unaware of the first-order reasons that have been marshalled in support of a proposition (because one merely believes it on someone else's say-so), one is unable to update one's belief upon receiving new information. Since one is unaware of whether this new information has already been accounted for in the testimony one has received, one does not know how to accommodate it.
Having thought about the issue for oneself, and having obtained the relevant evidence, individuals are in a better position to maintain their beliefs in light of new evidence (whether confirming or disconfirming). So, having autonomously deliberated about the issue, individuals are in a more resilient epistemic position and can better adapt to new evidence. This, too, is an epistemic improvement.

9.6.7 Developing Intellectual Virtue

Developing and nurturing intellectual virtue requires exercising the character traits in question. Many intellectual virtues cannot be exercised unless one is thinking for oneself. Since it is epistemically valuable to have intellectual virtues, doing the intellectual work to develop and nurture these character traits will also be important and valuable. The epistemically autonomous agent will develop intellectual virtues in themselves and will not let them atrophy due to a lack of exercise. Several intellectual virtues plausibly require autonomous deliberation for their cultivation. Let's look at just a couple.

Consider the virtue of intellectual perseverance. According to Heather Battaly (2017), the trait of intellectual perseverance is "a disposition to overcome obstacles, so as to continue performing intellectual actions, in pursuit of one's intellectual goals" (670). On Battaly's account, overcoming obstacles is characteristic of intellectual perseverance, and this can take place whether or not one successfully completes one's intellectual project (674). Without obstacles in inquiry, agents would not be able to exercise and cultivate intellectual perseverance.33 Obstacles prevent an epistemic agent from completing her intellectual goals, and inquirers can overcome obstacles even in unsuccessful inquiry.
While Battaly maintains that intellectual perseverance is not always an intellectual virtue, she argues that when this trait is grounded in the agent's commitment to, and love of, epistemic goods, it is a virtue (680). As a virtue, intellectual perseverance is a mean between the extremes of excess (recalcitrance) and of deficiency (capitulation) (670). Agents with this virtue don't give up on inquiry too soon, but they don't stick with it too long either, and their efforts are guided by a love of truth (rather than a need to win an argument). On Battaly's account, the intellectual virtue of intellectual perseverance consists of the following dispositions:

(1) to make good judgments about one's intellectual goals; (2) to reliably perceive obstacles to one's intellectual goals; (3) to respond to obstacles with the appropriate degree of confidence and calmness; (4) to overcome obstacles, or otherwise act as the context demands; and (5) to do so because one cares appropriately about epistemic goods.34 (Battaly 2017, p. 688)
Unsuccessful autonomous deliberation is rife with obstacles. It presents plenty of opportunities to cultivate intellectual perseverance. Since intellectual perseverance can be cultivated even in cases where one's intellectual projects are not completed, the fact that autonomous deliberation has failed does not prevent it from developing intellectual perseverance in the inquirer.35 So, thinking for yourself can help bring about intellectual perseverance.

Consider next intellectual humility. According to Whitcomb et al. (2017), intellectual humility consists in being appropriately attentive to, and owning, one's intellectual limitations. According to this account, owning one's intellectual limitations characteristically involves dispositions to:

(1) believe that one has them; and to believe that their negative outcomes are due to them; (2) to admit or acknowledge them; (3) to care about them and take them seriously; and (4) to feel regret or dismay, but not hostility, about them. (Whitcomb et al. 2017, p. 519)

When such appreciating and owning of one's limitations is motivated by the subject's desire for epistemic goods (e.g. truth, knowledge, understanding), intellectual humility is an intellectual virtue (520). This account of intellectual humility shows how thinking for yourself can foster it. In particular, failed autonomous deliberations can make one's intellectual shortcomings evident. In thinking for yourself about some question and failing to find the answer, it becomes clear that you cannot figure this question out on your own. Such failures do not automatically make an individual intellectually humble, but they make the foundation of this intellectual virtue evident. If someone could uncover and understand the answer any time they thought about an issue, it would be very hard for them to be intellectually humble. Failed inquiry can cultivate intellectual humility.
So, thinking for yourself can help cultivate intellectual virtues. Intellectual humility and intellectual perseverance can each be cultivated through autonomous deliberation. Further, unsuccessful inquiry seems essential to developing these intellectual virtues. What all of this shows is that there is epistemic value in the journey of autonomous deliberation, not simply in the destination. So, the epistemically autonomous agent will not think for themselves only when there is a good chance of success. Rather, we have reason to think for ourselves even when the prospect of successful inquiry is dim.
9.7 Conclusion

Building from previous accounts of the virtue of epistemic autonomy, we have seen a new account of this intellectual virtue. Epistemically autonomous thinkers exhibit healthy intellectual interdependence. They know when,
and how, to rely on their own thinking, as well as when, and how, to rely on the thinking of others. We have also seen the types of reasons that should factor into an individual's decisions in inquiry. How these factors weigh against each other will depend upon the details of any particular case.36
Notes
1 See Brighouse (2005), Ebels-Duggan (2014), Nussbaum (2017), and Siegel (1988), among others.
2 See Baehr (2011), Roberts and Wood (2007), and Zagzebski (1996) for some central examples.
3 King (2020), Roberts and Wood (2007), and Zagzebski (2007, 2012) are notable exceptions.
4 See Sosa (1991, 2007) and Greco (2010).
5 See Code (1991), Montmarquet (1993), Zagzebski (1996), Roberts and Wood (2007), and Baehr (2011).
6 Character virtues admit of a further division pertaining to the possessor's responsibility, or lack thereof, for possessing the character trait in question. Responsibilist character virtues are traits that the agent is praiseworthy for possessing. They are traits that the agent has exercised some significant degree of control in cultivating and is thereby accountable for. In contrast, personalist character virtues don't require this same control or responsibility for the character trait in question. See Battaly and Slote (2015) for a helpful discussion. In what follows, I will be neutral on this further distinction among character virtues.
7 See also Fricker (2006), McMyler (2011), and Zagzebski (2007).
8 See Code (1991) and Goldberg (2013).
9 See also Oshana (2008).
10 See also Nedelsky (1989).
11 See Code (1991) and Grasswick (2018).
12 See Church and Barrett (2016), p. 71.
13 See Dellsén (2021) and Elgin (2021).
14 For an extended defense of the permissibility of some forms of epistemic paternalism, see Ahlstrom-Vij (2013).
15 Thanks to Chris Ranalli for pointing me to this example.
16 We can be neutral here about the conditions under which an agent should accept some norm or directive (whether it be following their evidence, proper function, etc.).
17 While Zagzebski does not explicitly categorize epistemic autonomy as a virtue, in seeing it as an epistemic ideal, her account of epistemic autonomy would count as an intellectual virtue as we are understanding intellectual virtues.
18 For a more detailed account of this criticism, see Jensen et al. (2018).
19 This follows Matheson (Manuscript).
20 See King (2020), ch. 3.
21 See King (2014) and Battaly (2017).
22 See Roberts and Wood (2007), ch. 6.
23 Even here, we are setting aside our reliance on others to even be in a position to inquire in the first place.
24 See Goldberg (2020), Hinchman (2005), and Moran (2006).
25 See Hazlett (2017).
26 See Fricker (2007).
27 See Dellsén (2020) for an extended argument for this conclusion.
28 See Dellsén (2020).
29 Expertise admits of degrees, though our discussion has proceeded as if it is an all-or-nothing matter. The greater one's level of expertise, the stronger the reason one has to autonomously deliberate on the matter. This is because one's level of expertise corresponds to one's credibility on the matter at hand.
30 See Elgin (2021).
31 In addition, understanding the debate can help one to better identify who the relevant experts are, to ensure that one defers to the appropriate voices.
32 See Nickel (2001) and Nguyen (2018).
33 See also King (2014).
34 Compare with King (2014, pp. 3517–3518), who understands intellectual perseverance as "a disposition to continue with serious effort in one's intellectual projects in the pursuit of intellectual goods, for an appropriate amount of time, despite having to overcome obstacles to the completion of these projects."
35 In fact, King (2014, p. 3516) argues that intellectual perseverance can be cultivated and exercised even when no progress is made in inquiry.
36 I am indebted to Heather Battaly, Kirk Lougheed, and Sarah Wright for helpful comments on an earlier draft. This publication was made possible through the support of a grant from the John Templeton Foundation (ID# 61802). The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the John Templeton Foundation.
References
Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. London: Palgrave.
Baehr, J. (2011). The inquiring mind. New York: Oxford University Press.
Battaly, H. (2017). Intellectual perseverance. Journal of Moral Philosophy, 14, 669–697.
Battaly, H., & Slote, M. (2015). Virtue epistemology and virtue ethics. In L. Besser-Jones & M. Slote (Eds.), The Routledge companion to virtue ethics (pp. 253–269). New York: Routledge.
Brighouse, H. (2005). On education. London: Routledge.
Church, I., & Barrett, J. (2016). Intellectual humility. In E. L. Worthington Jr., D. E. Davis, & J. N. Hook (Eds.), Routledge handbook of humility (pp. 62–75). New York: Routledge.
Coady, C. A. J. (2002). Testimony and intellectual autonomy. Studies in History and Philosophy of Science Part A, 33(2), 355–372.
Code, L. (1991). What can she know?: Feminist theory and the construction of knowledge. Ithaca, NY: Cornell University Press.
Cohen, L. J. (1992). An essay on belief and acceptance. Oxford: Clarendon Press.
Dellsén, F. (2020). The epistemic value of expert autonomy. Philosophy and Phenomenological Research, 100(2), 344–361.
Dellsén, F. (2021). We owe it to others to think for ourselves. In J. Matheson & K. Lougheed (Eds.), Epistemic autonomy. New York: Routledge.
Ebels-Duggan, K. (2014). Autonomy as intellectual virtue. In H. Brighouse & M. MacPherson (Eds.), The aims of higher education. Chicago: University of Chicago Press.
Elgin, C. (2021). The realm of epistemic ends. In J. Matheson & K. Lougheed (Eds.), Epistemic autonomy. New York: Routledge.
Fricker, E. (2006). Testimony and epistemic autonomy. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony (pp. 225–251). Oxford: Oxford University Press.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.
Goldberg, S. (2013). Epistemic dependence in testimonial belief, in the classroom and beyond. Journal of Philosophy of Education, 47(2), 168–186.
Goldberg, S. (2020). Conversational pressure. Oxford: Oxford University Press.
Grasswick, H. (2018). Epistemic autonomy in a social world of knowing. In H. Battaly (Ed.), The Routledge handbook of virtue epistemology (pp. 196–208). New York: Routledge.
Greco, J. (2010). Achieving knowledge. Cambridge: Cambridge University Press.
Hardwig, J. (1985). Epistemic dependence. The Journal of Philosophy, 82, 335–349.
Hazlett, A. (2017). On the special insult of refusing testimony. Philosophical Explorations, 20, 31–51.
Hinchman, E. S. (2005). Telling as inviting to trust. Philosophy and Phenomenological Research, 70(3), 562–587.
Jensen, A., Joly Chock, V., Mallard, K., & Matheson, J. (2018). Conscientiousness and other problems: A reply to Zagzebski. Social Epistemology Review and Reply Collective, 7(1), 10–13.
King, N. (2014). Perseverance as an intellectual virtue. Synthese, 191, 3501–3523.
King, N. (2020). The excellent mind: Intellectual virtue for the everyday life. Oxford: Oxford University Press.
McMyler, B. (2011). Testimony, trust, and authority. New York: Oxford University Press.
Matheson, J. (Manuscript). Why think for yourself? Unpublished manuscript. https://philpapers.org/rec/MATWTF.
Montmarquet, J. (1993). Epistemic virtue and doxastic responsibility. Lanham: Rowman & Littlefield.
Moran, R. (2006). Getting told and being believed. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony (pp. 272–306). Oxford: Oxford University Press.
Nedelsky, J. (1989). Reconceiving autonomy: Sources, thoughts and possibilities.
Yale Journal of Law and Feminism, 1, 7–36.
Nickel, P. (2001). Moral testimony and its authority. Ethical Theory and Moral Practice, 4(3), 253–266.
Nguyen, C. T. (2018). Expertise and the fragmentation of intellectual autonomy. Philosophical Inquiries, 6(2), 107–124.
Nussbaum, M. (2017). Not for profit: Why democracy needs the humanities (Updated ed.). Princeton: Princeton University Press.
Oshana, M. (2008). Personal autonomy and society. The Journal of Social Philosophy, 29(1), 81–102.
Raz, J. (1988). The morality of freedom. Oxford: Clarendon Press.
Roberts, R. C., & Wood, W. J. (2007). Intellectual virtues: An essay in regulative epistemology. Oxford: Oxford University Press.
Siegel, H. (1988). Educating reason: Rationality, critical thinking, and education. New York: Routledge.
Sosa, E. (1991). Knowledge in perspective. New York: Cambridge University Press.
Sosa, E. (2007). A virtue epistemology: Apt belief and reflective knowledge. New York: Oxford University Press.
Whitcomb, D., Battaly, H., Baehr, J., & Howard-Snyder, D. (2017). Intellectual humility: Owning our limitations. Philosophy and Phenomenological Research, 94, 509–539.
Zagzebski, L. (1996). Virtues of the mind: An inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge: Cambridge University Press.
Zagzebski, L. (2007). Ethical and epistemic egoism and the ideal of autonomy. Episteme, 4(3), 252–263.
Zagzebski, L. (2009). On epistemology. Belmont, CA: Wadsworth.
Zagzebski, L. (2012). Epistemic authority: A theory of trust, authority, and autonomy in belief. New York: Oxford University Press.
Zagzebski, L. (2013). Intellectual autonomy. Philosophical Issues, 23, 244–261.
Zagzebski, L. (2015). Epistemic authority. New York: Oxford University Press.
10 Understanding and the Value of Intellectual Autonomy
Jesús Vega-Encabo
10.1 Intellectual Autonomy and Its Value

We care about being intellectually autonomous. The Kantian admonition to "think for yourself" or "avoid undue influences on your intellectual life" is widely accepted. Something particularly valuable is lost in leading a life full of dependency on others, in not daring to be guided by our own judgment and criteria and instead deferring to authorities (more or less socially recognized), in letting ourselves conform to certain social "pressures" to accept information and beliefs whose soundness is far from clear to us. There seems to be something distinctively valuable in becoming free of undesirable interferences and being able to govern oneself. Nevertheless, it is less clear what we genuinely value in intellectual autonomy. Is intellectual autonomy intellectually virtuous? A first way to answer these questions is to determine what the value of autonomy itself is, since intellectual autonomy probably shares that very same value. A first step along this road requires clarifying what we mean by autonomy. Joel Feinberg identifies autonomy with a (graded) capacity for governing oneself, with the very condition of self-government and, therefore, with the virtue (or virtues) of that condition, with an ideal of character, and with the sovereign authority of one who governs oneself (Feinberg, 1986). In addition, the concept of autonomy is multidimensional and many-sided; this condition, or the associated ideal, can be interpreted as self-sufficiency (autarcheia), independence, self-government, self-determination, etc. And one can even stress different capacities and attitudes in the condition of being autonomous, such as the capacity for self-reflection or rational determination of one's own states, authenticity, integrity, etc. I will start from a rather neutral conception of autonomy that is directly applicable in the intellectual realm.
John Christman (2020) identifies two components in the notion of autonomy or self-rule: “the independence of one’s deliberation and choice from manipulation by others, and the capacity to rule oneself.” It is a minimal, but informative, characterization, since it seems to align – in the political arena – with the promotion
of negative and positive liberties, and it has an apparently direct application to intellectual autonomy. Intellectual autonomy refers to the capacity, condition, virtue, or ideal of someone who is in charge of her own intellectual life, for which (1) she must exhibit a certain independence of thought, and (2) a capacity for self-government.

1. The first component can be explained as follows: the person who holds the condition of being intellectually autonomous does not exhibit complete independence in intellectual matters, but rather an independence that prevents undue interference (influences that in the intellectual realm are analogous to coercion, domination, or imposition). Besides, this independence manifests itself as a willingness not to defer to others except in appropriate circumstances; the cultivation of a virtue of intellectual autonomy teaches us how to determine what the appropriate circumstances are in each case by assessing the epistemic and intellectual situation in which one finds oneself.

2. On the other hand, the capacity for intellectual self-government is more difficult to spell out. One option would be to identify it with the self-determination of a rational will, but – without appropriate qualification – this Kantian model is hardly applicable in the intellectual domain. Intellectual self-government seems to imply a willingness to exhibit rational authority over one's own decisions, actions, and attitudes in general, and to govern one's own intellectual behavior according to the best reasons one recognizes.

Thus intellectual autonomy is a multi-faceted condition, ideal, or virtue. We admire in the intellectually autonomous person her independence of thought, which manifests itself in freedom from coercion and undue influence in the formation of belief, and her capacity for rational self-government, which is exhibited through a careful examination of the rational foundations of belief.
In a sense, both conditions seek to ensure that one is in possession of one's beliefs and, on many occasions, that one can be held accountable for them. Such a conception of intellectual autonomy seems to encourage the idea that its exercise promotes the acquisition of epistemic goods, at least in the sense of contributing instrumentally to the attainment of truth and knowledge. Some of the debates around the epistemic value of intellectual autonomy have assumed that the conditions of independence and self-government entail exhibiting self-reliance and manifesting a lack of trust in others as sources of rational belief or knowledge. But intellectual autonomy neither is nor implies self-reliance, understood in the sense of depending exclusively or mainly on one's own epistemic resources in order to guide one's intellectual behavior. Rather, it is compatible with accepting the word of those who are in better epistemic positions and,
above all, requires that we be in a position to delegate to others (and even to devices and artifacts) many of the activities of monitoring and controlling belief, particularly those we are not able to carry out ourselves. True enough, insofar as our epistemic aims are obtaining the truth and avoiding error, relying excessively on our own epistemic resources could have unbearable costs in terms of loss of truth (and knowledge). But we assume risks both by uncontrollably increasing our dependence on others and by inadequately restricting it when relying fundamentally on ourselves and our own capacities and resources. What a good epistemic performance requires is finding a balance in the negotiation of these risks of losing the truth or falling into error, which in many cases allows for a discharge of tasks and responsibilities in social environments where there are policies for the control of epistemic risks and for the distribution of the burdens of responsibility. Whether intellectual autonomy is indispensable and essential in finding this balance is yet to be established; in any case, our conception of autonomy cannot imply renouncing the epistemic goods that derive from participating in a space of strong epistemic dependencies in which our cognitive activities unfold.1

Let's assume that intellectual autonomy is one dimension of the condition or ideal of an autonomous person. Thus, given that we value becoming autonomous individuals, with a disposition to avoid excessive dependence on others and with an ability to make our own decisions and shape our own lives, we value the same dispositions and capacities in the intellectual realm. This is the core of what I will call quick arguments for the value of intellectual autonomy. This sort of argument would be decisive if we had a good response to the question about the (nature and) value of autonomy as such.
And, obviously, if we had a clear conception of what makes intellectual autonomy a facet or a necessary dimension of autonomy. There are two ways of accounting for the value of autonomy: first, as determined by the value of the objects we choose and the goods that it makes possible; second, as something valuable for its own sake (Young, 1982). Remember how J. S. Mill justifies the value of freedom of thought and autonomy. For him, both are constituents of happiness and well-being. Moreover, autonomy helps to shape a pattern of choices that makes our life a unified whole we can be in charge of. As such, it is a constitutive feature of the good life and a component of our well-being. It is valuable to pursue autonomy as such because it is constitutive of our well-being. On this idea, we can build a first quick argument for the value of intellectual autonomy:

1. Well-being is valuable for its own sake.
2. Autonomy (including intellectual autonomy) is a constitutive feature of well-being.
198 Jesús Vega-Encabo

3. If (1) and (2), then autonomy (including intellectual autonomy) is valuable for its own sake.
4. Autonomy (including intellectual autonomy) is valuable for its own sake.
5. Intellectual autonomy is then valuable for its own sake.

In addition to the fact that it is far from obvious that there is necessarily an intellectual dimension to well-being (at least, many people would be happy while forgoing sustained engagement in intellectual activities), there is the more obvious fact that intellectual well-being could be attained in many different ways. If so, why should the intellectual dimension of well-being necessarily involve intellectual autonomy? There seems to be the possibility of meeting many of our intellectual needs in conditions in which we renounce autonomy, at least partially. This is particularly so if we conceive of autonomy as self-reliance and reliance on our own resources and procedures. There are worlds in which the satisfaction of our intellectual needs and desires could be better achieved in conditions of strong dependence. For instance, suppose there is a paternalistic state that takes care of making easily available all the epistemic resources that allow individuals to respond to their intellectual needs. The state (or a complex of institutional settings devoted to the task) is so careful and efficient in providing these epistemic goods that any individual is in a position to be satisfied regarding what they consider essential for their intellectual well-being. Only if we can show that autonomy itself is an intellectual good worth pursuing for its own sake can we take it, for that very reason, as a necessary constituent of well-being. In other words, we need to justify, first, why intellectual flourishing is one of the necessary aspects of human flourishing; and, second, why intellectual flourishing as such requires intellectual autonomy.
One possibility is to hold, for example, that some epistemic goods cannot be constituted without meeting the conditions of being an autonomous intellectual being, and that these epistemic goods are essential to human (intellectual) flourishing.

A second quick argument for the value of intellectual autonomy reads as follows:

(a) The value of autonomy (in general) derives from a demand of respect for the dignity of persons and, therefore, of self-respect.
(b) If the value of autonomy (in general) derives from a demand of respect for the dignity of persons and, therefore, self-respect, then the value of intellectual autonomy does so as well.
(c) Therefore, the value of intellectual autonomy derives from a demand of respect for the dignity of persons or self-respect.

The core of the argument lies in the intuition that abdicating our intellectual autonomy is a way of disrespecting ourselves in intellectual matters.
It is the value that derives from our dignity as persons (and which should lead us to direct our own lives) that explains the value of intellectual autonomy. This is an interesting point, hard to deny. But the question that immediately strikes us is how self-respect connects with what is valuable in the intellectual realm. What is epistemically valuable about self-respect?2

Both quick arguments, I dare to claim, have the same shortcoming: they leave it far from clear why exhibiting autonomy has any connection with epistemic value. They are not able to illuminate the links between the presumed value of autonomy and the values recognized in the intellectual domain. If we do not show respect for autonomy, in what sense are we losing anything of epistemic value? For instance, if we manifest a disposition to defer to authorities (not in the suitable circumstances, whichever they are), what could we lose in epistemic terms? And if we develop a strong disposition to outsource epistemic tasks to gadgets and devices, what epistemic loss is to be expected (Ahlstrom-Vij, 2016; Carter, 2018, 2020)? Does autonomy contribute to the promotion and constitution of something with epistemic value? In the next section, I propose a new starting point from which to address this question.
10.2 Autonomy and Agency

Autonomy implies self-government, the determination by oneself of one’s own conduct. This idea can take on several meanings, and it is not clear which of them best illuminates what intellectual autonomy is and why it is valuable. I make a new start in this section by viewing autonomy as a condition of agency. As Hurka has argued in an illuminating paper (Hurka, 1987), the ideal of autonomy is linked to our ideal of agency. That our actions exhibit agent autonomy consists in their being attributable to us. Of particular interest is Hurka’s description of the ideal of agency. It is an ideal of causal effectiveness, of our playing a causal role in the world, guided by the ends we set ourselves and according to the choices we make. Autonomy is a matter of acting in the world, of making things happen. In the intellectual domain, it is about being able to determine one’s own beliefs. It is about strengthening our cognitive contact with the world by searching for the truth.

One of the first advantages of considering autonomy from this ideal of agency is that it allows us to explain its value. We value autonomy because it contributes to increasing what a person achieves. Where there is capacity for choice, each time a choice is made one becomes responsible for multiple facts; therefore, the capacity for self-determination increases the capacity for agency, for success in intentional action. Hurka argues that this value is connected to a more general value of relation-to-the-world, and therefore also affects cognitive performances, at least those in which one determines whether one’s belief conforms with how the world
is.3 This connectedness to the world, insofar as it is open to our determination, is certainly valuable to us. Otherwise, it would make no sense to deem us responsible for what we are capable of determining. The same is true in the intellectual realm, where many aspects seem open to our determination and agency.

A minimal reading of this idea applied to the intellectual realm supports the view that being intellectually autonomous consists in being in a position to form beliefs that are attributable to us. An epistemic agent is an autonomous epistemic agent. But on a more informative second reading, it is the exercise of our intellectual autonomy that facilitates intellectual achievements (such as knowledge and/or understanding) that would otherwise be off limits to us. The idea goes beyond the recognition that beliefs should be appropriately owned in order to be attributable to the doxastic agent; success that amounts to an epistemic achievement should also be attributable to the epistemic agent.4 Intellectual autonomy is (epistemically) valuable to the extent that its exercise helps us bring about such intellectual achievements. The argument goes as follows:

1. Knowledge and/or understanding are intellectual achievements.
2. Intellectual achievements are epistemically valuable.
3. Intellectual autonomy contributes to or is exercised in these intellectual achievements.
4. To that extent, intellectual autonomy is (epistemically) valuable.

An achievement is a success because of the exercise of one’s abilities and competences: a success because of ability. An achievement is a success attributable to the competence of an agent. Success due to the competence of the agent excludes undue dependence on luck. In relation to the agent’s aims, any attempt that achieves success by manifesting competence is (fully) attributable to the agent, who deserves credit for it (Sosa, 2007, 2015).
Knowledge (and perhaps other epistemic statuses such as understanding) is an achievement. Knowledge consists in reaching the truth because of one’s own abilities and competences and is, therefore, attributable to the subject who exercises those cognitive abilities. Achievements are distinctively valuable. We value them for their own sake.5 We seem to prefer those situations where we achieve success by putting our abilities and competences into play to those where we simply get the result, even a secure one. The idea has had a straightforward translation in the epistemic realm: epistemic normativity concerns how we carry out evaluations within a domain of human performance where there is success and failure. The value of achievements is not exhausted by the attainment of success (truth); it is mainly grounded in how the agent is involved, through the exercise of their competences, in obtaining the truth. Moreover, whatever the value of what is obtained, succeeding
in conditions of appropriate involvement of abilities inaugurates a normative dimension in which the very quality of the agency (in this case, epistemic) is what really matters.

This debate about what makes an achievement valuable bears on the reading of premise (1), because there are substantial divergences among epistemologists about what the specific achievements of the epistemic realm are. Is knowledge an epistemic achievement? In virtue of what? In virtue of a set of features that make it also valuable for its own sake? Or does it have final value in virtue of being a distinctive epistemic achievement? These disputes run through much of contemporary epistemology, particularly among those who defend some version (weak or strong) of a theory of virtues (Pritchard, 2010). For the moment, I am only interested in pointing out that on any of the versions we adopt there is at least one epistemic status – whatever it may be – that is identified as an achievement and whose value is explained by the very fact of being an achievement. Thus, however we settle these debates, premise (2) would be correct in a way that also makes premise (3) true, for it could be that only understanding (as Duncan Pritchard argues) is genuinely an intellectual achievement (and not knowledge), or it could be that both epistemic statuses can be explained according to the structure of achievements. The version I favor says that it is enough for an achievement that it is due to the competence of the agent and not to luck, particularly if lucky cognitive success reflects a certain deficiency in the exercise of the competences that the agent puts into play (Sosa, forthcoming). Epistemic normativity is explained as a sort of telic normativity: we value the attainment of true beliefs due to competence.
Our aims are framed by what we value in the exercise of agency: we strive for a life in which success is fully attributable to us as agents, as epistemic agents in this case. It is through these commitments that we inhabit a domain of epistemic evaluations; this domain requires a process of constitution of us as epistemic agents, that is, of constitution of subjects who manifest their competences in the obtaining of truth and the avoidance of error, and who are capable of taking themselves to be accountable for their performances. What we value is securing that the obtaining of truth (or the avoidance of error) is due to us, that is, being sufficiently involved in the task that success is attributable to us. Epistemic normativity derives from this constitution of the domain and the subjects (Broncano-Rodríguez and Vega-Encabo, 2011). The constitution of epistemic agents in this sense requires them to be engaged in the epistemic domain as concerned by what is normative within it. A condition of being autonomous is to exhibit a concern with those normative features that are proper to each of the domains we participate in. Autonomous intellectual beings are those who manifest this concern and govern their cognitive tasks by what they take to be ends and values worth pursuing.
10.3 Understanding and Intellectual Autonomy

Arguments relying on the value of achievements to ground the value of intellectual autonomy need to account for whether and how intellectual autonomy contributes to the attainment and/or constitution of the achievement in question, be it knowledge or understanding. The idea that achievements are dependent on the exercise of the subjects’ agency does not, in itself, account for the contribution of intellectual autonomy to their promotion and constitution. It is true that there is a minimal sense in which agency, as we have characterized it, supposes a condition of autonomy of the subjects: that they are capable of engaging in cognitive tasks based on their appreciation of what values and norms are at stake in the epistemic domain. But this minimal condition does not account for how the virtue of intellectual autonomy, in its dimensions of independence and rational self-government, contributes specifically to promoting or constituting distinctively epistemic achievements. Without some clarity on this point, the condition of being an autonomous agent in its basic sense is applicable, in general, wherever achievements of any kind are involved, and is not grounded in any epistemic dimension of the assessments.

Understanding has been proposed as the kind of achievement that would ground the possible (epistemic) value of intellectual autonomy, insofar as intellectual autonomy contributes to its promotion and/or constitution. In recent epistemology, it is common to view understanding as a valuable intellectual achievement, perhaps the highest intellectual good that can be attained. Moreover, intellectual autonomy is also seen as the virtue that promotes the attainment of this epistemic good. This seems to suffice to establish the epistemic value of intellectual autonomy.
Those virtues that contribute to the highest intellectual good are valuable, and not in a merely instrumental sense; they contribute to the attainment of one of the highest goods in intellectual life in a virtuous way, that is, by manifesting the competences proper to the condition of being intellectually autonomous (Riggs, 2003).

It is far from my purpose to get entangled in recent debates about the nature and value of understanding. In order to make full sense of the arguments that I will examine in this section, a few indications on the notion of understanding at stake will suffice. As is well known, there are several types of understanding that one can distinguish, for instance a holistic understanding of a field or an object, or an explanatory understanding of why something is so. In what follows I will focus exclusively on the latter type, explanatory understanding. There are also many disagreements about whether understanding is a kind of knowledge and whether it implies belief, truth, or justification. For my purposes, I will accept that there are cases in which understanding may well not be knowledge, but that there are others that constitute a particular kind of
knowledge: a knowledge of why, or knowledge of causes, a kind of explanatory knowledge. Finally, I want to emphasize that understanding exhibits several features that make it a distinctive achievement. It is a way of grasping, with different degrees of correctness and depth, certain connections in reality, in such a way that they take on a new sense for the subject, who is able to see the coherence between different representations or to manifest certain (generally explanatory) abilities.6 Understanding is a cognitive achievement, perhaps the highest achievement in epistemic terms. Moreover, it is worth pursuing for its own sake insofar as it constitutes such an achievement. But the question I am interested in exploring is how the condition of autonomy, or the specific cultivation and exercise of the intellectual virtue of autonomy, enters into the acquisition and constitution of such an achievement. In what sense is understanding the result of the manifestation of the virtue or virtues of intellectual autonomy?

10.3.1 Pritchard on “Seeing for Oneself” and Intellectual Autonomy

Duncan Pritchard has explicitly linked the value of intellectual autonomy to the value of understanding or, better, to a general category of “seeing for oneself,” which includes both a form of “active perceptual seeing for oneself” and a form of “active intellectual seeing for oneself,” or understanding. In his terms, seeing things for oneself, as manifested in understanding, “serves what is claimed to be a fundamental good: intellectual autonomy” (Pritchard, 2016b, p. 29). He goes on to explain the value of seeing things for oneself in the following terms: “[t]his value is ultimately rooted in the role that seeing it for oneself plays in the promotion of intellectual autonomy and, thereby, a virtuous life of flourishing” (Pritchard, 2016b, p. 40).
Pritchard accepts as a starting point the intuition that there is an “epistemic improvement” in seeing things for oneself versus relying on others, on their testimonies or on their abilities in general. He wonders, I think rightly, whether there is not some kind of epistemic fetishism on our part in this. Seeing things for oneself, in all its versions, is a manifestation of intellectual virtue, and he offers three lines of argument to account for its value. Surprisingly, none of them points to what makes “seeing things for oneself” or “understanding” epistemically more valuable, i.e., to the sort of epistemic improvement involved.

The first argument is based on a premise that is difficult to dispute: that an appropriate guide to the intellectually good is how one assesses what counts as well-conducted inquiry. Pritchard argues that inquiry is often assessed in terms of whether understanding is achieved, and not just truth or knowledge. This alone does not show that understanding is more valuable than knowledge, or in what sense. Nor does it give any clue as to how it connects with the cultivation of
intellectual autonomy; it simply reflects the fact that it is oneself who does this perceptual and intellectual seeing. In Pritchard’s view, knowledge can be enough to close inquiry. But sometimes knowledge from understanding seems to be required. If so, he has to argue that in these cases it would not be legitimate to close inquiry without achieving understanding: there would be something epistemically valuable that we would deprive ourselves of, something that makes it illegitimate to close inquiry before obtaining it. But where does this demand for understanding come from, and in what sense would we be less virtuous for not satisfying it? Pritchard is well aware that it cannot be a normative demand on epistemic subjects to close inquiry only when a state of understanding is attained. Again, what is epistemically defective in closing inquiry before reaching understanding? It cannot be an absolute and unconditional demand. Therefore, among the virtues of the intellectual being there must be the capacities that contribute to determining which epistemic demands must govern the inquiry at every moment (Pritchard, 2016b, p. 35). I would suggest that this is one of the specific contributions of intellectual autonomy to epistemic life. Notice that this does not mean the promotion of understanding in particular. Cultivating intellectual autonomy would serve to evaluate the epistemic demands in each case; it would promote the acquisition of dispositions to determine whether the epistemic situation is adequate, whether the means at one’s disposal are sufficient given the normative requirements involved at each point, and so on. For example, the intellectually autonomous subject would assess whether she should close the investigation with the resources at her disposal or defer to the competences of others, with a view to satisfying such or such epistemic desiderata.
She will assess whether it is worthwhile, for such and such a case of inquiry, to advance towards understanding, and how it would be epistemically most appropriate to acquire it. At each point the question of whether it is better in epistemic terms to proceed towards understanding will be raised and assessed. Besides, this is an issue that could require more than assessments in exclusively epistemic terms.

The second argument aims to establish the final value of understanding on the basis that understanding is what Pritchard calls a strong cognitive achievement: an attainment of the truth that involves significant levels of skill and the overcoming of intellectual obstacles. He stresses the active and conscious integration of relevant items of information, in contrast with more dependable ways of grasping the truth. However, one could agree that understanding is an achievement that demands a particular effort and a high level of skill without accepting that success in the enterprise is in itself due to the particular manifestation of intellectual autonomy. We have already pointed out that dependence as such does not cancel the agency of subjects or the attainment of epistemically
significant achievements. Moreover, a strong cognitive achievement may require multiple and diverse abilities acting together, some of them constituted in conditions of dependence.

The last argument seeks to make even more explicit the connection between autonomy and understanding, and Pritchard advances it with particular conviction. Understanding manifests intellectual virtue. The value of understanding is grounded in manifesting virtue, and virtues have ultimate value, at least insofar as they play a role in human flourishing. The virtue associated with understanding is intellectual autonomy, which is therefore part of the virtuous life of human flourishing. But, couched in these terms, this is just another version of our quick argument for the value of intellectual autonomy. It does not make explicit the distinctive form of epistemic evaluation that is at stake.

Elsewhere, Pritchard has defended the view that truth is the fundamental epistemic good, in particular “grasping the truth” (Pritchard, 2014, 2016a). How does this idea square with understanding being highly valuable? First, nothing excludes that understanding, as an achievement, also has non-epistemic (ethical) value. Second, the difference in epistemic value can be expressed in terms of greater degrees of grasp of the truth. Understanding involves “a deeper and more comprehensive grasp of the truth” (Pritchard, 2016a). It is not an issue of deeper and more comprehensive truths, whatever that could mean. It is the grasp itself that qualifies as deeper. If so, we seem to aim not merely at truth, but also at grasping the truth in a distinctive way. It looks as if the grasping itself does the job and becomes the aim of our attempts. Nonetheless, what is the role of the virtue of intellectual autonomy in the pursuit of this goal? Pritchard suggests that this distinctive grasp of the truth is a sort of active seeing for oneself.
First, “seeing it by yourself” is not a condition derived from the exercise of intellectual autonomy, even of self-reliance in its more stringent version; it is just a peculiarity of the grasp involved. Second, since the point of the argument is to rescue the epistemic dimension in valuing understanding, the crucial question is whether the way the grasp is attained has anything to do with its correctness. In other words, Pritchard owes us an explanation of why it is that by aiming at this sort of deeper and more comprehensive grasp it is correctness that is attained. Again, why is the correctness involved secured only in conditions in which we manifest intellectual autonomy?

10.3.2 “Coming to Understand”

Michael P. Lynch (2014, 2016, 2017) has also explored this connection between understanding and intellectual autonomy, though in a more indirect way. Lynch takes understanding as a particularly valuable kind of knowledge by which we grasp the relationships of dependence between states of affairs. In addition, he offers a functional characterization of
understanding that is particularly distinguished by its etiology and consequences. Lynch puts special emphasis on the etiology of the state of understanding: it is constitutive of understanding that it is “caused” by an active cognitive act of grasping dependency relationships. “Grasping” is a constitutive act of a prior act of “coming to understand,” without which there is no genuine understanding. It is an active mental act, which requires effort and conscious attention. Moreover, Lynch conceives it as a creative act of imaginative integration of different items that fit together. This point is especially important, since these are the features that explain the special value of understanding and motivate our desire to become intellectually autonomous beings. “Coming to understand” is something I must do for myself; therefore, it cannot be transmitted directly through testimony, nor can it be outsourced (Lynch, 2016). The sources of the value of understanding are the effort (proper to cognitive achievements) and the creative dimension of the act of coming to understand.

A characterization that insists on an etiology of understanding of this kind excludes almost by definition that we can acquire understanding by relying on external sources. I consider, however, that there are in fact cases in which one could acquire understanding through the testimony of others. It is at least controversial that this cannot happen, as D. Whiting (2012) and Malfatti (2019), among others, have argued. Lynch’s etiological characterization excludes it. Testimony can serve as a basis for the subject to come to understand, but ultimately the testimonial basis is not the act of understanding itself, which requires the creative activity of the individual’s mind.
In fact, this means that no genuine epistemic dependence is constituted in the reliance on external sources; that is, the possibility of achieving anything of value is excluded once we depend on the epistemic contribution of others to the formation of the belief. Of course, this coming to understand implies the development of certain skills, experience, and interaction with the world (including others), but it is first and foremost an act that I must perform on my own and that cannot be deferred. If understanding is an achievement, it is so by virtue of the act of “coming to understand,” of a specific active effort of the epistemic subject. Therefore, the achievement would seem to be primarily linked to grasping the very relationships of dependence and not so much to the truth that one grasps, since the relevant epistemic basis could lie in the person who, for example, offers the testimony on which understanding such a truth or connection between true propositions crucially rests. This act of grasping is an achievement in itself, close to what Grimm would call subjective understanding (Grimm, 2012). Getting things right could be due to the correctness of the information provided by others, but the grasping itself, as constitutive of my subjective understanding, is due to my distinctive cognitive abilities, to “putting things together” in such a way that strikes me as right. But if there is achievement here – as it could be an
achievement to obtain a justified belief – it is not as such the achievement of knowledge itself (or knowledge from understanding).

“Coming to understand” seems to be a particular case of “coming to believe,” one with a peculiar phenomenology. On the one hand, one could doubt that understanding has to be preceded by a specific psychological act; we could manifest understanding without its being accompanied by a specific phenomenology, maybe because it is more implicit than explicit. On the other hand, why should this mark a strong difference from other “acts” of coming to believe, for instance, those that are preceded by taking a deliberative stance and in which we are normatively engaged as agents in the fixation of belief for reasons? This is something that I must do for myself in the very same sense in which I “come to understand” by integrating items of reliable information. The “act” itself cannot be deferred; how could it be? It is clear to me that this is not what is at stake in the debates about the value of understanding and intellectual autonomy.

For Lynch, the creative dimension of the act also makes a difference. Again, this is not enough to endow understanding with a particular epistemic value. First, it is hard to see how “creativity” as such – even if valuable for its own sake – can be a condition for obtaining knowledge from understanding. Either we must deflate what is taken as “creative” in order to allow for modest acts of understanding, or we must accept that there are ways of coming to non-creative understanding. The value of understanding derives both from its being a cognitive achievement and from the active and creative involvement of the subjects. Regarding the first feature, we have seen that it is not necessary that the correctness of the belief itself be due to (or explained through) active personal involvement in the very act of grasping.
Regarding the second feature, creativity helps explain at least one aspect of the value of understanding, an aspect that is not separable from intellectual autonomy insofar as it involves personal acts of expression. This is why, even if we could outsource understanding, it would not be desirable. Value is again attached to intellectual autonomy in terms of what makes us human, not in terms of what is epistemically better, or so Lynch seems to accept in The Internet of Us (Lynch, 2016).

10.3.3 Understanding, Intellectual Autonomy, and Epistemic Competences

In the first chapter of Epistemic Explanations, E. Sosa explores the nature of a particular kind of epistemic achievement: the understanding of why something is thus-and-so, and first-hand knowledge of why it is. Sosa asserts that there are certain issues that demand that epistemic agents not settle them through deference to others but significantly through the exercise of their own competences.7 This is true at least in all those
matters where “rational appreciation” is at stake; they are matters that could not be adequately settled by consulting others. One’s insight into and understanding of the issues must be the guide in forming the belief, and no possible deference can properly close the deliberation or settle the matter. In other words, it would be epistemically inappropriate to defer in these matters, even if deference to others might provide us with true, reliable beliefs, knowledge, and even some form of understanding. The latter will be “truncated,” Sosa suggests. I am going to assume that the acquisition of this first-hand knowledge implies the manifestation of a virtue that we have traditionally identified as intellectual autonomy. Sosa, in several passages of his work, insists on the special value we place on intellectual autonomy, without which we would be unable to put in its proper place what other sources, including testimony, might deliver to us. This idea is aligned with a very traditional conception of intellectual autonomy that encourages us to cultivate a certain independence of thought and to exercise a capacity for rational control over our own epistemic behavior.

Let us briefly review the options we have for including intellectual autonomy within the framework of virtues in Sosa’s epistemology. In his most recent works, Sosa addresses epistemic normativity as a special case of telic normativity. Assessments are relative to aims, and attempts are thus assessed in terms of whether they attain their respective aims. As with achievements in general, the highest level of normativity is characterized as success that is fully attributable to the agent (Sosa, 2015, 2017, forthcoming). First, achieving truth by manifesting competence constitutes the epistemic/normative status of aptness. Second, achieving aptness by manifesting competence constitutes the epistemic/normative status of full aptness.
These are achievements relative to the respective cognitive aims of truth and aptness. Nonetheless, when exhibiting intellectual autonomy is required, it seems as if the aim should be subtly different; in this case, it is a matter of achieving “first-hand knowledge in search of understanding” (Sosa, forthcoming). Is this a different kind of aim in our intellectual endeavors? It does not seem so; it is aptness again, but obtained under certain constraints derived from the manifestation of the intellectual virtue of autonomy, that is, “without the aid of deference,” “not by means of mere deference,” mainly by bringing into play one’s own competences (Sosa, forthcoming). In other words, the aim is to judge autonomously by mobilizing one’s own competences. One might ask what this achievement normatively adds within the epistemic domain in comparison with obtaining fully apt beliefs in conditions in which we do defer to others. One possibility would be to hold that testimony, as a paradigmatic case of obtaining beliefs by relying on others, provides knowledge but not full knowledge; and, in the same way, that it could provide understanding, but only truncated, not full, understanding. However, this line of argument does
Value of Intellectual Autonomy 209
not seem to be available to Sosa, since in testimonial contexts we can aim at aptness and acquire fully apt beliefs through deference, insofar as our assumptions about the competences of others and ourselves are correct and we get the truth due to the manifestation of competences and due to the correctness of such assumptions. But Sosa seems to share an image of testimony according to which it can hardly, as such, contribute to constituting the characteristic epistemic status of understanding, this first-hand knowledge of why something is thus-and-so. Testimony plays the role of a conduit of reasons, but does not constitute reasons as such, reasons that we have made “our own.” I cannot but agree that manifesting a disposition to avoid those conditions in which one might be deprived of reasons, and a willingness to constitute the normative considerations for or against belief as reasons one possesses (an aspect of rational self-government), is an essential aspect in determining whether an agent exhibits the conduct of an intellectually autonomous being. What I find hard to see is whether this can be taken as an indicator that a new dimension of telic assessment is involved here. Is it a higher and more admirable achievement? Why is it of greater epistemic quality? Or is it a demand derived from the normative aspiration to full agency, an aspiration that can only be satisfied when one’s own competences are put into play? A new distinction in Sosa’s work could help here. Let us admit that there is something epistemically valuable in exhibiting intellectual autonomy. It is a virtuous trait of epistemic agents. Sosa distinguishes two forms of epistemic normativity. The first is proper to gnoseology and tries to answer questions about the constitution of the status of knowledge in our cognitive performances.
The second, under the name of intellectual ethics, deals with normative evaluations in a broader sense, related to intellectual issues in general and to how to conduct our inquiries. The first is completely isolated from practical considerations (it is a kind of purely epistemic telic normativity); the second lets in (and often requires) considerations of a practical nature and allows non-telic evaluations (see Sosa, forthcoming). This does not mean, or at least so Sosa intends, that there is no purely epistemic dimension of intellectual ethics, one that is identified once the other issues and evaluations of a practical nature are bracketed (Sosa, 2015). In parallel with this distinction, Sosa includes two different sorts of virtues in his universe of virtues and competences: first, those whose manifestation constitutes knowledge and, second, those that help promote or foster the acquisition of epistemic goods insofar as they put us in a position to know, but do not manifest themselves in the attainment of these goods. To which of the two categories does intellectual autonomy belong? If it is a virtue constitutive of epistemic achievements (say, the kind of understanding we are considering), intellectual autonomy would
contribute to grounding the success of the particular attempts of the cognitive agent. The idea is that there is a more admirable epistemic achievement that can only be constituted through the exercise of the virtue (or virtues) of intellectual autonomy. In what sense is it more admirable in epistemic terms? What sort of improvement in epistemic quality is involved here? In Sosa, two souls coexist: on the one hand, the epistemic improvement has to consist in an increase in reliability; on the other hand, the quality of the agency is also a function of the attributability of the achievements. True enough, it does not seem that an increase in reliability necessarily results from the manifestation of the virtue of intellectual autonomy (Kornblith, 2012). At most, we can say that it is the attributability that is reinforced, to the extent that full knowledge is acquired under those conditions in which the epistemic agent is able to own the reasons that ground the epistemic status of the belief. In principle, this does not prevent an agent from sometimes manifesting a certain incompetence by not deferring to others, to an authority in the matter, even though this may not constitute a first-hand achievement. But I wonder whether, once the aim of the attempt is to achieve understanding by oneself, we are not already incurring the obligation not to defer to others. That is, it is not just that we are not obliged to defer in these cases; it is rather that we seem to be obliged not to defer, even if that implies that the issue remains unsolved or unsettled. Our problem again is that it is not obvious what makes this advisable in epistemic terms. I cannot but agree that determining under what conditions the acquisition of epistemic achievements could be affected by undue deference is certainly a function of exhibiting a certain independence and autonomy.
Nonetheless, it is also an aspect of the exercise of autonomy to determine under what conditions we should abandon such independence – in its most restrictive sense – and be ready to defer to others. Being able to do this properly can put us in a position to know or to achieve understanding. If so, it might be more appropriate to accept that the virtue of intellectual autonomy, and the complex of dispositions and attitudes that accompany it, is rather part of the group of virtues that Sosa, already in 2015, called auxiliaries, a kind of agential virtue that puts one in a position to know. Intellectual autonomy, arguably, is an essential component of our intellectual ethics. But there is at least one aspect in which cultivating the virtues and competences that allow us to put ourselves in a position to know is subordinated to the aims of the epistemic domain, since they must contribute to ensuring that the quality of the epistemic agency – at its different levels and degrees – is not affected or diminished. Again, the attributability of success is central. Auxiliary competences would reinforce the conditions under which, if one were to affirm, one would achieve success, or achieve it with sufficient reliability or aptness. They could help to avoid recklessness and negligence in forming beliefs by securing the mobilization of one’s own skills in the appropriate situation and shape. They could help us to assess whether one is in a position to know
given the competences and the conditions of their exercise. They would thus contribute to securing the triple-S profile (Skill, Shape, Situation) of the competences. One way of expressing the contribution of these virtues/competences to (epistemic) full agency is to recognize in them the agent’s dispositions to monitor the conditions of exercise of one’s own competences and, therefore, the epistemic situation in which they are exercised. Sosa calls this function “putting oneself into perspective.” Only in this way can we shape and own our own intellectual world. I would dare to claim that here lies the specific contribution of the dispositions that we attach to the virtue of intellectual autonomy. They help us to assess whether, and when, one is in a position to know and to assess the risks associated with the exercise of one’s own competences. In order to secure these assessments, being intellectually autonomous requires the epistemic agent to exhibit authority over important aspects of her own cognitive life, being able to shape her own intellectual world and epistemic identity. Our epistemic identities reflect how we normatively engage in our cognitive endeavors, which ends we value, which risks we are disposed to take, how we respond to changing (contextual) standards, etc. What we value in intellectual autonomy is a function of what we value in the epistemic identities we ourselves contribute to shaping. But this is no longer tied to the constitution or promotion of specific epistemic aims (or goods), such as understanding, but to an aspiration to perfect our own agency. This is not a distinctively epistemic aspiration; it is a normative demand that is rooted in general features of our nature as autonomous agents. In the end, some version of the quick argument for the value of intellectual autonomy will prevail.
10.4 Conclusion
Our question was about the value we assign to the condition of being autonomous and to the cultivation of a certain virtue of intellectual autonomy that we manifest in our cognitive endeavors. Autonomy is fundamentally about agency, and to that extent it is a constitutive aspect of intellectual life because of its contribution to obtaining ever more epistemic achievements. The value of intellectual autonomy is also rooted in the value that is attributable to the epistemic achievements it helps to promote. That there is a close link between the pursuit of understanding and the exhibition of intellectual autonomy is a point that several recent arguments have highlighted. But none of these arguments is able to account for what is distinctively epistemically valuable without at the same time accounting for the value of autonomy in general. In other words, what brings autonomy into play is the ability of epistemic subjects to normatively engage in cognitive tasks. As an ideal, autonomy (and intellectual autonomy) translates into an aspiration for the perfection of our agency. Further, to be a full agent is inseparable from how one governs one’s
behavior in response to what matters to one normatively, what one considers good to do. Agency therefore implies a way of engaging in normative tasks in which one takes oneself as the source of normativity. On the other hand, this aspiration is grounded in the cultivation and exercise of certain capacities, which are linked to how we ourselves shape our own epistemic identity. As intellectual beings who aspire to make our agency complete, we cannot help but see and value ourselves as beings who engage in epistemic inquiry under a certain conception of what is important to us. Our epistemic identities reflect a set of dispositions and attitudes toward oneself and others in relation to how to evaluate situations where the epistemic ends that matter to us are at stake. The exercise of these capacities is neither a necessary condition nor a guarantee that a certain epistemic status, be it knowledge or understanding, is constituted as an achievement as such. Intellectual autonomy matters, however, because it is the way in which we respond to what is epistemically valuable in shaping our own intellectual world. The value of intellectual autonomy is intimately linked to what is valuable in building and preserving our own epistemic identities, for they reflect our own evaluations as aspects of the aspiration for full agency. To accept this idea is to assume, at the same time, that the respect due to intellectual autonomy must be preserved even if it might lead to epistemic identities that are defective, that do not actually contribute to placing us in a position to know. It calls for respect for how each person builds her own epistemic identity, in virtue of the value derived from the very exercise of her rational powers (Cholbi, 2017).
This is so because, in the end, the value of autonomy, including intellectual autonomy, is inseparable from how one is willing to constitute something like one’s own reasons together with others who recognize them and even contribute to strengthening and guaranteeing them. In the face of recognizable epistemic defects, the recipe is rational persuasion, not coercive correction; in other words, such defects demand a more complete intellectual autonomy, that there be no undue influence and that one exhibit a certain rational authority, which we can only do in adequate spaces of interlocution.8
Notes
1 These remarks stress the compatibility between the ideal of intellectual autonomy and epistemic dependence on others. For a defense of this compatibility, see for instance Roberts and Wood (2010). On epistemic dependence, see the introduction to a recently published special issue of Synthese by Broncano-Berrocal and Vega-Encabo (2020).
2 I personally would be inclined to accept something like the following: self-respect entails a respect for the truth. It is not my objective to defend such a thesis here. The following suffices: an ideal of (intellectual) autonomy cannot be in conflict with the normative requirements of what makes a belief an epistemically good belief.
3 I do not think that this very idea requires being committed to doxastic voluntarism.
4 For this, see Carter (this volume).
5 This is a very controversial idea, because it is difficult to establish what makes achievements distinctively valuable. I do not need a particular explanation of it in order to run the argument, beyond the point I have recorded above: our preference for success due to our involvement in bringing it about. This moves me away from Bradford’s explanation of the value of achievements in terms of the difficulty of overcoming the obstacles in bringing something about (Bradford, 2013) and closer to what Pritchard has dubbed a “general sense of achievement that captures the idea of a successful exercise of agential powers” (Pritchard, 2010). This notion of achievement and its value requires neither the overcoming of difficulties nor being the result of applying a significant or outstanding level of skill.
6 The nature and value of understanding have been widely discussed in the recent literature. For reviews, see Gordon (2017) and Hannon (forthcoming).
7 Sosa talks, for instance, of questions in the humanistic domains (moral, aesthetic, etc.).
8 Thanks to Fernando Broncano-Rodríguez, Josep Corbí, and Jesús Navarro for very fruitful conversations on intellectual autonomy. My special thanks to the editors of this volume, Jonathan Matheson and Kirk Lougheed, who commented on an earlier version and proposed many enriching suggestions to improve the chapter. I am also grateful to audiences at conferences in Madrid and Valencia. This research has been funded by the Spanish Ministry of Science and Innovation through a research grant (FFI2017-87395-P).
References
Ahlstrom-Vij, K. (2016). Is there a problem with cognitive outsourcing? Philosophical Issues, 26(1), 7–24.
Bradford, G. (2013). The value of achievements. Pacific Philosophical Quarterly, 94(2), 204–224.
Broncano-Berrocal, F., & Vega-Encabo, J. (2020). A taxonomy of types of epistemic dependence: Introduction to Synthese special issue on epistemic dependence. Synthese, 197, 2745–2763.
Broncano-Rodríguez, F., & Vega-Encabo, J. (2011). Engaged epistemic agents. Crítica, 43(128), 55–79.
Carter, J. A. (2021). Epistemic autonomy and externalism. In K. Lougheed & J. Matheson (Eds.), Epistemic autonomy. New York: Routledge.
Carter, J. A. (2020). Intellectual autonomy, epistemic dependence, and cognitive enhancement. Synthese, 197, 2937–2961.
Carter, J. A. (2018). Autonomy, cognitive offloading, and education. Educational Theory, 68(6), 657–673.
Cholbi, M. (2017). Paternalism and our rational powers. Mind, 126(501), 123–153.
Christman, J. (2020). Autonomy in moral and political philosophy. Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/autonomy-moral/, accessed 11/01/2021.
Coady, C. A. J. (2002). Testimony and intellectual autonomy. Studies in History and Philosophy of Science Part A, 33(2), 355–372.
Feinberg, J. (1986). Harm to self: The moral limits of the criminal law. Oxford: Oxford University Press.
Gordon, E. C. (2017). Understanding in epistemology. Internet encyclopedia of philosophy. https://iep.utm.edu/understa/
Grimm, S. (2012). The value of understanding. Philosophy Compass, 7(2), 103–117.
Hannon, M. (2019). What’s the point of understanding? In What’s the point of knowledge? A function-first epistemology (pp. 225–255). New York: Oxford University Press.
Hannon, M. (forthcoming). Recent work in the epistemology of understanding. American Philosophical Quarterly.
Hurka, T. (1987). Why value autonomy? Social Theory and Practice, 13(3), 361–382.
Kornblith, H. (2012). On reflection. Oxford: Oxford University Press.
Kvanvig, J. (2003). The value of knowledge and the pursuit of understanding. Cambridge: Cambridge University Press.
Lynch, M. P. (2014). Neuromedia, extended knowledge and understanding. Philosophical Issues, 24(1), 299–313.
Lynch, M. P. (2016). The internet of us: Knowing more and understanding less in the age of big data. New York: W. W. Norton.
Lynch, M. P. (2017). Understanding and coming to understand. In S. Grimm (Ed.), Making sense of the world: New essays on the philosophy of understanding (pp. 194–208). Oxford: Oxford University Press.
Malfatti, F. I. (2019). On understanding and testimony. Erkenntnis, 1–21. https://doi.org/10.1007/s10670-019-00157-8
Pritchard, D. (2010). Knowledge and understanding. In D. Pritchard, A. Millar, & A. Haddock, The nature and value of knowledge: Three investigations (pp. 1–88). Oxford: Oxford University Press.
Pritchard, D. (2014). Truth as the fundamental epistemic good. In R. Vitz & J. Matheson (Eds.), The ethics of belief (pp. 112–129). Oxford: Oxford University Press.
Pritchard, D. (2016a). Epistemic axiology. In P. Schmechtig & M. Grajner (Eds.), Epistemic reasons, norms, and goals (pp. 407–422). Berlin: De Gruyter.
Pritchard, D. (2016b).
Seeing it for oneself: Perceptual knowledge, understanding, and intellectual autonomy. Episteme, 13(1), 29–42.
Riggs, W. D. (2003). Understanding ‘virtue’ and the virtue of understanding. In M. DePaul & L. Zagzebski (Eds.), Intellectual virtue: Perspectives from ethics and epistemology (pp. 203–226). Oxford: Oxford University Press.
Roberts, R. C., & Wood, W. J. (2010). Intellectual virtues: An essay in regulative epistemology. Oxford: Oxford University Press.
Sosa, E. (2007). A virtue epistemology. Oxford: Oxford University Press.
Sosa, E. (2015). Judgment and agency. Oxford: Oxford University Press.
Sosa, E. (2017). Epistemology. Princeton: Princeton University Press.
Sosa, E. (forthcoming). Epistemic explanations: A theory of telic normativity, and what it explains. Oxford: Oxford University Press.
Whiting, D. (2012). Epistemic value and achievement. Ratio, 25(2), 216–230.
Young, R. (1982). The value of autonomy. The Philosophical Quarterly, 32(126), 35–44.
11 Epistemic Myopia
Chris Dragos
11.1 Introduction In this chapter, I offer an account of a deep sort of epistemic failure I call epistemic myopia. Using Fisch’s account (2017) of rational framework transitions in science, I present a dialogical model of foundational epistemic well-being, that is, an account of how one can rationally monitor one’s most fundamental commitments – the beliefs, norms, and values at the core or foundation of one’s noetic framework. On this account, keeping one’s most fundamental commitments in good epistemic standing requires dialogue with trustworthy critics. I then present the flipside of this model, which is an account of the deep sort of epistemic failure I call epistemic myopia. One who is epistemically myopic cannot, in principle, rationally monitor their most core or fundamental commitments. They lack the critical agency necessary to maintain their foundational epistemic well-being.
11.2 The Problem
In Creatively Undecided (2017), Menachem Fisch offers an original account of how rational transitions from one scientific framework (or worldview) to another can be instigated by individual agents who engage in dialogue with trustworthy critics. Fisch’s primary historical case study is how George Peacock’s Treatise on Algebra (1830) instigated a transition during the 1830s and 1840s in British mathematics from a realist to a formalist conception of algebra. In this section, I present this account as a general model of how one keeps one’s most core or foundational commitments in good epistemic standing. Fisch’s model is based on two neo-Kantian ideas (2017, ch. 2) and one neo-Hegelian idea (ch. 3). The two neo-Kantian ideas are, first, an understanding of normative agency in terms of autonomous self-governance and, second, the framework dependency of all noetic judgments. Taken alone, neither of these ideas is particularly controversial, but they stand in tension when it comes to the issue of whether and how one can keep
in good epistemic standing one’s deepest, most axiomatic commitments – those norms, values, and beliefs at the core or foundation of one’s noetic framework. Following Fisch, I call these core commitments one’s “framework principles.”1 Fisch starts with the Popperian identification of rationality with critical introspection. To be rational is to test one’s commitments and revise them when they’re found wanting. The neo-Kantian infusion comes from Kant’s distinction between merely being subject to a norm and subjecting oneself to a norm. Proper self-governance pertains to the latter: what compels me to act or assent in accordance with a normative standard of judgment is my endorsement of that norm. To normatively respond is to respond to a norm. This is, in Harry Frankfurt’s words, “an inward directed monitoring oversight,” resulting from “a sort of division of our minds” between that which is judged and that which takes normative responsibility for rendering judgment (Frankfurt, 2006, p. 4; Fisch, 2017, p. 46).2 Kantian epistemic agency in particular is reflective self-governance of one’s beliefs, norms, and values in accordance with one’s endorsed epistemic standards. So, according to a Popperian model of rationality, explicated within the framework of Kantian agency, rationality requires that I test my commitments, to the end of satisfying epistemic standards I endorse. This model of rational agency is called “critical rationalism.” The second important neo-Kantian idea is that all judgment is framework dependent (Fisch, 2017, pp. 40–43). The version of framework dependency required is minimal. It’s the idea that any noetic judgment I render on my own behalf is necessarily rendered from within the normative framework I endorse. I can articulate and analyze standards different from those I endorse. But for me to find those standards worthy of my endorsement, I must judge them according to my endorsed normative framework.
Likewise, for me to find wanting some standard I endorse, I must judge it according to my normative framework. If this is a constraint on all my judgments, then it is a constraint in particular on all my introspective judgments. Yet, if both critical rationalism and framework dependency are correct, the norms available for me to employ in introspective judgment are limited to those I can endorse by my current standards. Neither of these ideas, taken on its own, is particularly controversial. But deep tension emerges between them in the context of whether or how I can keep my framework principles in rational check. How can I introspectively determine whether my framework principles are wanting, as critical rationalism requires, if, given framework dependency, there are no norms in my framework more fundamental than these? If rationality requires autonomous, introspective judgment, then the framework dependency of judgment seems to make rational introspection upon framework principles impossible. That is, it seems impossible, in principle, for me to normatively appraise
the very normative framework constitutive of my thinking (Fisch, 2017, p. 66). Put in Kantian terms, when it comes to governing my framework principles, there is no norm available that I can subject myself to. I can merely be subject to some norm I do not, and indeed cannot, endorse.3 Fisch finds the accounts of normative self-governance on offer unequipped to address this fundamental problem (Korsgaard, 2009; Taylor, 1989; Frankfurt, 2006; McDowell, 1994). McDowell gets the closest in Mind and World (1994), where he’s concerned not with the general issue of how I keep myself in normative check but with the more fundamental issue of how I keep my very normative thinking in normative check. McDowell offers a limited notion of empirical accountability: I possess a second nature that’s shaped pre-conceptually through interaction with the world, and that unthinking second nature subsequently shapes the foundation of my normative thinking. But if my normative standards are shaped passively through subjection to unthinking standards, I do not employ my agency to govern myself in accordance with my endorsed norms. This is not critical rationality. Kant would say the norms generated through McDowell’s empirical accountability are norms I am merely subject to, not norms I subject myself to. Ultimately, these accounts fail to offer a way I can rationally maintain my framework principles. Fisch’s (2017) model is given in the context of scientific change – how fundamental commitments in scientific worldviews are rejected and replaced.
When it comes to making sense of how the fundamental tenets of a scientific framework can rationally change, there seems to be an essential tension between, on the one hand, the Popperian idea that such changes occur through scientists’ reasoned, introspective judgments and, on the other hand, the Kuhnian idea that such judgments, and so any changes that ensue, are necessarily idiosyncratic, that is, undertaken without recourse to anything external to the framework in question. This is a specification of the tension between critical rationalism and framework dependency. The result is a division between two camps: the Popperians, who operate as if framework dependency were not a reality, and the various schools of neo-Kuhnians, who do not ground the normativity of frameworks in their survival of critical introspection. If individuals cannot rationally modify their framework principles, there remain two alternatives. One is that framework principles can be modified only arationally. Kuhn (1970) takes this option when he famously describes scientific revolutions as akin to conversions or gestalt shifts. In this respect, he does not depart from Carnap, according to whom “internal questions” are answerable within the terms of the frameworks in which they’re articulated, but “external questions,” which test scientific frameworks themselves, can be answered only on conventional, non-scientific grounds (1937, §§50–52, p. 28; Fisch, 2017, p. 73).4 If we wish to retain the idea that framework transitions can be rational, it seems we’re forced toward collective critical rationalism. On this position, monitoring
and modifying framework principles occurs at the collective level. I, as an individual, can follow suit, as subject to these principles, or stubbornly refuse. Any appearance that I am rationally instigating the process, that I subject myself to these principles, is epiphenomenal.5 However, the same problem rears its head. Regardless of whether we’re dealing with individuals or collectives, the same impediment prevents any subject from rationally modifying its framework principles from within its own framework. Resorting to collective rationality only pushes the problem up a level: left to its own devices, a group, community, or culture has no framework for rationally appraising its own endorsed framework principles. By definition, there are no endorsed principles more fundamental than framework principles. The problem is the same for any subject, individual or collective.6
11.3 A Dialogical Model of Foundational Epistemic Well-being
The dialectical solution Fisch offers for rendering individual critical rationalism consistent with framework dependency is to go relational, not collective. As Michael Friedman (2002, 2010) argues in response to Kuhn, transformative ideas in the history of physics often originate outside the physics community, outside the reigning framework in which physics operates. For example, Helmholtz and Poincaré’s work on the foundations of geometry inspired Einstein to conceive of space in non-Euclidean terms (Fisch, 2017, p. 68, note 12). The creative resources required to instigate normative inspection of that very framework of normative inspection must come from outside the box. Fisch expands on Friedman’s proposal substantially, arguing that Friedman only identifies historical cases in which theoretical alternatives originated from the outside and eventually won out. Friedman does not elucidate the instigation of these transitions, the epistemic grounds one might have for finding what’s endorsed wanting and generating an alternative. It was not the mere availability of Helmholtz and Poincaré’s ideas that did the job for Einstein. He first needed to find something in his operative framework wanting. The problem is that one necessarily appraises any external criticism of one’s framework principles using those very framework principles. Thus, to endorse such criticism would be to find one’s normative framework wanting according to that very framework. This would be self-defeating. But there are other ways a recipient can consume criticism besides endorsement, other ways her commitments can be affected. Although I cannot fully inspect my critic’s grounds for her criticism of my framework principle, my trust in her as a competent and sincere critic can render me ambivalent enough toward that principle that it loses its core standing.
It then becomes possible for me to bring it into the purview of my normative framework and engage in a process of critical introspection.
Fisch contends that the criticism required to render me ambivalent toward my framework principle must be prudent; it must be a normative appraisal of blameworthiness on my part, and thus a call to address it (Fisch, 2017, pp. 86–87). The goal of prudent criticism is that the recipient rectify the problem within her framework. Yet, given framework dependency, my critic cannot show me, using my normative framework, why my normative framework is wanting. Thus, prudent criticism is specified not to the critic’s satisfaction but, as far as possible, from the recipient’s perspective (p. 88). In the typical case, I am blamed for how I apply my norms. This is what underlies the criticisms, “You should know better,” and, “You’re betraying your principles.” My critic tries to show me why I must do better in terms she thinks I accept.7 Here, Fisch relies on Michael Walzer’s notion of “connected” social criticism, according to which “criticism follows from connection … if [my critic] were a stranger, really disinterested, it is hard to see why he would involve himself in [my] affairs” (Walzer, 1988, pp. 371–372). If I am convinced that my critic is competent and sincere, not there just to win a debate or for any reason other than to benefit me, I cannot rationally dismiss her criticism as a dishonest portrayal. What options remain to me? If the aim of my critic and the requirement of rationality converge – the aim being my epistemic betterment – the only way left for me to rationally address the criticism is through reasoned refutation or modification of my views. However, just as the framework dependency of my normative reasoning precludes a complete, reasoned endorsement of prudent criticism of my framework principle, it precludes a complete, reasoned rebuttal I can endorse. I find myself at an impasse (Fisch, 2017, p. 92). Frankfurt contends that it is rational to want others to see and think of me as I wish myself to be (1988, p. 163).
More plausible still is that it's rational to want others whom I deem competent, honest, and trustworthy to think of me as I wish myself to be. Thus, a sustained inability to resolve the dissonance between what I wish to be and what a trustworthy other sees and thinks me to be can loosen my commitment to the targeted principle. The typical state of cognitive dissonance is uncomfortable enough. But to experience dissonance concerning a framework principle is to be of two minds, "volitionally fragmented," "moving … in contrary directions simultaneously," "obstinately undefined" (Frankfurt, 2004, p. 92). Indeed, if rationality means introspectively applying one's normative framework, then conflict within one's normative framework threatens one's very capacity for rationality (Fisch 2017, p. 95). Such dissonance demands attention. In time, my inability either to refute trusted criticism of my framework principle or to endorse that criticism shakes the commitment loose from the foundational level of my framework. Having become sufficiently ambivalent toward that commitment, I can subject it to self-criticism.8
220 Chris Dragos
One might object that this account fails to show how it's possible for one to maintain one's framework principles in strictly rational fashion. After all, I cannot, in principle, endorse all the grounds on which my critic challenges my framework principle. Are my framework principles, then, necessarily grounded in a blind faith, at least in part? Not if I can rationally discriminate between trustworthy and untrustworthy critics. It's not controversial that I can do so in other contexts. I rely on experts to learn the difference between quarks, leptons, and bosons, and to learn that taking copious quantities of vitamin D daily leads to calcium build-up in one's organs. I can't access all the evidence for these claims. But I can make a determination about the competence and sincerity of the relevant experts.9 How might I come to rationally trust a critic of my most fundamental commitments? Suppose you offer criticisms of the tenets underlying a political ideology we both oppose. I fully comprehend and endorse these criticisms. I see they're given in good faith, that you aim to convince others rather than repudiate them. I believe you are a trustworthy critic – competent and sincere – whether or not this is acknowledged by those holding the views you target. I then encounter your criticism of my views about God's (non-)existence. I understand your criticism and detect nothing in it to suggest it's not like the criticism I endorsed: given with sincerity and competence. So, I cannot rationally dismiss your criticism as dishonestly portraying my views. I acknowledge that your criticism calls for a reasoned refutation or modification of my views. However, I can't bring myself to rationally endorse any refutations or modifications I am confronted with or can conceive of. This is not because I can't think outside the box. In such a case, it's not that an alternate framework is unthinkable; rather, it's not rationally endorsable (Fisch 2017, pp. 115–116).
I can rationally endorse neither your criticism nor any refutation of it, though I know a rational response must take one of these two forms. The unresolved dissonance renders me "volitionally fragmented," "obstinately undefined" (Frankfurt, 2004, p. 92). Once in this state, how does one become epistemically "whole"? How does one come to a rational resolution of the problem raised by the trustworthy critic? This may not be a direct or quick process. If we trace the shift in George Peacock's work from a realist to a formalist conception of algebra, we won't find a linear argument from one to the other. What occurred was not a tweaking of beliefs derived within a framework but a reorientation of the framework itself. The same is true of those we know who've changed their fundamental political or religious views. No one woke up seeing the world through the strictures of Aristotelian physics and went to bed a Cartesian or Newtonian. No single talk or experiment shifted someone from holding a firm Newtonian view of space and time to a relativistic conception of spacetime.10 There is an intermediary impasse: the acknowledgement of a problem concerning a fundamental
element of one's noetic framework, yet the lack of a solution. Only when one becomes sufficiently ambivalent can a phase of creative indecision begin.11 Consider a historical case. Like some of his contemporaries, Copernicus was no longer a firm Aristotelian, and yet he was unable to rationally generate a complete replacement of the interconnected elements of Aristotelianism he found wanting. Geocentrism, after all, was wedded to Aristotelian terrestrial and celestial physics, among many other elements of Aristotelian-medieval natural philosophy. This state of ambivalence in Copernicus and others came after longstanding failure: Ptolemaic models in astronomy became increasingly complex and ad hoc over many centuries, and yet could never precisely align their predictions about the future positions of heavenly bodies with the positions those bodies in fact came to occupy. But how did Copernicus in particular move from ambivalence toward geocentrism to offering its replacement? Two sources Copernicus likely encountered are the ancient Greek thinkers Philolaus and Aristarchus, especially Aristarchus, who proposed a heliocentric model nearly two millennia before Copernicus. Copernicanism won out as the eventual replacement of geocentrism. Granted, it was some time before it became widely accepted. A new physics was needed that could make sense of heliocentrism, which Descartes and then Newton provided. But the lessons apply on a more mundane scale. Rejecting a framework principle may have reverberating effects. We experience deep dissonance when in a state of ambivalence toward a framework principle. We encounter our Philolauses and Aristarchuses – candidate replacements – which would not have even appeared on our radars before we became ambivalent toward our previously firm commitments. We consider the consequences, throughout the rest of our noetic frameworks, of accepting a given replacement.
We know from those we observe over time, and perhaps from our own experience, that fundamental shifts in people’s views don’t fit simple, critical-thinking models for how minds are supposed to change. This can be an uncomfortable and drawn-out process.12
11.4 Epistemic Myopia
So far, we've explored how one succeeds in keeping one's framework principles in good epistemic standing. In this section, we turn to ways one can fail to do so. One can hold one's framework principles in epistemically vicious ways already delineated by epistemologists in more general contexts. This section is concerned with a different and deep sort of epistemic failure I call epistemic myopia, which is the state of being unable, in principle, to rationally monitor one's framework principles. A necessary step in keeping my framework principles in rational check is openness to trustworthy (i.e. competent and sincere) criticism
of my framework principles. This requires me to rationally discriminate between trustworthy and untrustworthy critics. The deepest way I can possibly fail epistemically with respect to my framework principles is to cut myself off from trustworthy critics. Suppose I take criticism of a given framework principle of mine to mark a critic as untrustworthy, that is, to show she is incompetent or insincere. Given that engagement with trustworthy critics of my framework principle is necessary for instigating rational self-criticism of that principle, this standard makes it impossible for me to keep that principle in good epistemic standing. I have made it impossible to become ambivalent toward that principle, and thus impossible for me to rationally engage with that principle. If I judge any critic of my framework principle to be untrustworthy, that commitment has become utterly entrenched, recalcitrantly held, immune to rational revision, rejection, or endorsement. In other words, I am in an epistemically myopic state with respect to that principle. In this state, my framework principle can be modified only arationally.13 For example, the only way I can keep the fundamental tenets underlying my political commitments in rational check is if that process is instigated by engagement with critics of those tenets who are indeed trustworthy and whom I judge to be trustworthy. However, if I judge any such critic to be suspect, it is impossible for me to enter a state in which I can subject these tenets to any degree of rational scrutiny. This means I hold my political commitments, which are grounded in those tenets, in epistemically myopic fashion. Worse, this is a context in which epistemically myopic commitments present non-epistemic dangers. If the fundamental grounds for my political commitments have no rational check, then the only limits on my working out their consequences are arational ones (beyond maintaining internal consistency).
Those consequences may or may not be benign.14 To be clear about what precisely epistemic myopia is, it will help to distinguish it from other states. Epistemic myopia is not a vicious example of what Bas van Fraassen calls an epistemic "stance." A stance is a cohesive philosophical approach with sufficient scope and depth to offer guidelines for dealing with a broad set of existing and potential evidence and knowledge claims (Cassam, 2019, p. 83). Empiricism is a good example. It guides one to reject claims to factual knowledge not based in sense experience. A stance is "something one can adopt or reject" by rational means (van Fraassen, 2004, p. 175). One can imagine reasoning away from or toward an empiricist stance over time. But one cannot reason out of epistemic myopia. If the constitutive elements of stances can be rationally modified from within one's noetic framework, then the constitutive elements of stances are not framework principles. Rather, they are subject to normative appraisal according to one's framework principles.
Epistemic myopia is also distinct from Quassim Cassam's notion of epistemic insouciance, which is "… an indifference or lack of concern with respect to whether [one's] claims are grounded in reality or the evidence. Epistemic insouciance means not really caring much about any of this and being excessively casual and nonchalant about the challenge of finding answers to complex questions, partly as a result of a tendency to view such questions as much less complex than they really are" (Cassam 2019, p. 79). Epistemic insouciance is an attitude of casual indifference. It's the attitude of Harry Frankfurt's bullshitter, producing assertions "without concern for the truth" (Frankfurt, 2005, p. 47; Cassam 2019, p. 80). Epistemic myopia is not an attitude. As Cassam tells us, "one's attitude towards something is one's perspective on it or one's evaluation of it" (81). There are attitudes that may accompany epistemic myopia: critics are met with disdain, repulsion, aversion. (I return to this shortly.) But these are only some symptoms of epistemic myopia, not epistemic myopia itself. In any case, if one is disdainful of, repulsed by, or averse to engaging with critics of a framework principle, then one is anything but casually indifferent (i.e. insouciant). Furthermore, it's possible to become epistemically conscientious – which I take to be the opposite of epistemically insouciant – by rational means. This happens whenever one is persuaded away from being indifferent to the truth about some matter. By definition, it is not possible to be rationally persuaded out of epistemic myopia, since to be epistemically myopic is to have cut oneself off from all trustworthy critics, the only sources who can instigate the process. So, even if one is epistemically insouciant toward one's framework principles – even if one might not care to keep them in check by engaging with trustworthy critics – once one is persuaded to care, one can engage with trustworthy critics.
Unlike epistemic myopia, epistemic insouciance is a rationally soluble problem. Epistemic myopia is a special, deep kind of closed-mindedness. On Heather Battaly's account, "closed-mindedness is an unwillingness or inability to engage (seriously) with relevant intellectual options" (2018, p. 262). More specifically, epistemic myopia is a special case of dogmatism, which is itself a species of closed-mindedness: "an unwillingness to engage (seriously) with relevant alternatives to a belief one already holds" (p. 280, emphasis mine). But there is something qualitatively distinct about epistemic myopia that makes it a unique sort of dogmatism. One can be epistemically myopic only with respect to a framework principle. But one can be dogmatic with respect to a framework principle in a
non-myopic way. One can exhibit an "unwillingness to engage (seriously) with relevant alternatives" to one's framework principle. This happens if one avoids hearing any criticism in the first place. But this does not mean one could not "engage (seriously) with relevant alternatives" if one so desired. If "seriously" entails "rationally," then being epistemically myopic with respect to a framework principle means one cannot engage seriously with alternatives to that principle, even if one so desires. Epistemic myopia is a rationally insoluble dogmatism. A framework-level commitment to open-mindedness in some sense is necessary to avoid epistemic myopia. One worry might be that this account writes off anyone who is not open-minded as a lost cause epistemically – beyond epistemic help.15 However, one needn't be open-minded in a more general sense in order to avoid epistemic myopia. Most types of closed-mindedness, and even of dogmatism, neither amount to nor entail epistemic myopia. For example, closed-mindedness toward non-framework principles is non-myopic. Indeed, some (probably most) types of closed-mindedness toward framework principles are non-myopic; this includes passive, epistemically vicious states regarding one's framework principles. (Recall that, on Battaly's account, dogmatism is a species of closed-mindedness, and epistemic myopia is a species of dogmatism.) Consider three people: one who avoids criticism of a framework principle, one who refuses to make a rational determination regarding the trustworthiness of a critic, and one who abstains from seeking a rational refutation or endorsement of trusted criticism once in a state of ambivalence. All are closed-minded, but none myopically so; it remains possible for each to rationally determine that they ought not avoid criticism, that they ought to make a rational determination regarding the trustworthiness of a critic, or that they ought to seek a rational refutation or endorsement of trusted criticism.
Epistemic myopia is specifically the active determination that any critic of one's framework principle is untrustworthy.16
11.5 Corroborating Cognitive Neuroscience
I've argued that criticism of framework principles is different from criticism of other commitments. One corroborating piece of evidence is that it does feel different to receive criticism aimed at a deeply held, defining commitment (e.g. that I am free in the libertarian sense) rather than at some surface-level belief (e.g. that Toronto is the capital of Canada). There is also corroborating evidence in recent cognitive neuroscience. It's well known that we have a tendency to downplay evidence conflicting with our commitments and to seek out and ascribe greater significance than warranted to evidence that supports them. In response to conflicting evidence, we're prone to form new memories and neural connections that strengthen our confidence in a targeted commitment (Nyhan and Reifler, 2010). This is called the backfire effect.
More specific and relevant are the results of a study performed by the Brain and Creativity Institute at the University of Southern California. When it comes to how we handle challenges to deeply held commitments, the effect is far more pronounced and associated with distinctive neurophysiological activity (Kaplan et al., 2016). Before the study, 40 subjects reported high confidence, on a scale of one to seven, in eight political beliefs to which they were deeply committed, as well as in eight non-political beliefs. Researchers then observed through fMRI the subjects' neural activity as they were presented with five different challenges to each of their 16 beliefs. Subjects commonly reported a multi-point loss of confidence in their non-political beliefs after they were challenged. For example, subjects reported lower credence in the statement, "Thomas Edison invented the lightbulb," after being presented with challenges such as, "Nearly 70 years before Edison, Humphry Davy demonstrated an electric lamp to the Royal Society." In response to challenges to political statements like, "the laws regulating gun ownership in the United States should be made more restrictive," there was little to no change reported. In such cases, subjects displayed greater activity in their amygdalae, which are involved in perceiving and responding to environmental threats, as well as greater activity in their insular cortex, which is involved in emotional responses to stimuli, and in their Default Mode Network (DMN), a system associated with inward reflection upon matters of self-identity. Other studies show that higher DMN activity is not content dependent but instead associated with depth of commitment (Harris, Sheth, and Cohen, 2008; Harris et al., 2009). The DMN is also involved in reflection on religious belief (Harris et al., 2009) and core values (Kaplan et al., 2017).
So, one built-in, physiological response to deep criticism is to initiate a deep state of thinking that tunes out perceptions of one's immediate physical environment. In other words, deep introspection is prioritized at the risk of exposure to (other) immediate environmental dangers. In an interview discussing the group's findings, lead author Jonas Kaplan explained that "[t]o consider an alternative view, you would have to consider an alternative version of yourself" (Gersema, 2016). This is exactly right if the target is a framework principle, in which case I cannot direct myself – I cannot direct some more fundamental part of my normative framework – against myself. Thus, I respond in a different way. On the other hand, when I inspect my belief that Thomas Edison invented the lightbulb, I do so in the context of my normative framework. Whatever the neural correlates of such standard introspection are, we should not be surprised if they differ from the neural correlates of challenges to framework principles. After all, I cannot inspect challenges to my framework principles in the context of my normative framework. These studies suggest I respond to such a challenge by identifying it as a threat (amygdalae), prioritizing it over all else at hand (DMN), and warding it off with an emotional response (insular cortex).
However, the philosophical model of foundational epistemic well-being presented earlier, along with the corollary model of epistemic myopia, offers some hope for handling deep criticism differently, so long as the source is deemed trustworthy and thus not an immediate threat. A trustworthy critic of my framework principle is one whom I deem competent and sincere, trying to show me how I might better myself epistemically by framing her criticism from my perspective as much as possible. When I acknowledge the trustworthiness of her criticism, I experience but do not succumb to strictly arational responses. I realize that a reasoned response is required. The state of ambivalence toward a framework principle is attained through acknowledging the trustworthiness of criticism while being unable to endorse a complete, reasoned response. Over time, the two-mindedness – the dissonance – loosens my commitment to the target principle. I can then engage in critical introspection. The research cited above also suggests that when one of my deep commitments is criticized, if I identify it as a threat and reject it on strictly arational grounds, I am more likely to respond this way to any future challenges to that commitment. Subsequent challenges result in more pronounced neurological activity of the sort identified above. In other words, I become more recalcitrant with each challenge. Arational response begets a more reflexively arational response. This makes matters more pressing than the earlier philosophical account of epistemic myopia suggests on its own. If I repeatedly dismiss criticism of my framework principles as not requiring reasoned response, my reflexive, arational defenses become more dominant, making it more difficult for me to rationally engage with criticism. Acknowledging the trustworthiness of my critic is the way for me to rise above such responses and remain capable of rational agency with respect to my framework principles.
Repeatedly receiving criticism as untrusted makes such responses increasingly difficult, physiologically, to rise above, perhaps to the point where I am utterly closed off from trustworthy criticism. This is a state of dogmatism that is utterly insoluble, at least by rational means. This is pathological myopia.17
Notes 1 Jon Matheson and Kirk Lougheed point out that this chapter’s proposals can be connected to the literature on hinge commitments and deep disagreement. I draw some connections in notes throughout this chapter. 2 This does not mean I am the source of the norms I’m obliged to follow. To subject ourselves to norms “it is not required that we ourselves make them, but that we make them ours by a willed act of endorsement” (Fisch 2017, p. 43). Rationality understood as a sort of self-alienation contrasts with passive models according to which my commitments and actions are determined by forces and according to standards not requiring my endorsement.
3 The Kantian forms of intuition, the categories, and the synthetic a priori truths derived from them are fixed, not normative. For Kant, there is no problem pertaining to the maintenance of these structures because they cannot be modified. Today there are neo-Kantian models that relativize the constitutive a priori and so depict frameworks as acquired, not built into human nature, and thus normative. According to the neo-Kantian notion grounding Fisch's model, I necessarily depend on a framework but not on any particular framework. I can acquire, reject, or modify my framework principles. The question is whether and how I can do so rationally. 4 Also pertaining to scientific change, Richard Rorty says that to change framework principles is to "gradually lose the habit of using certain words and gradually acquire the habit of using others" (1989, p. 20; Fisch 2017, p. 72). 5 At issue are the very reasons for instigating framework transitions in the first place, not reasons for choosing between an old framework and an already-generated, new framework (Fisch 2017, p. 66). Many philosophers of science have addressed the latter issue without asking what the rational motivation could be for generating an alternative in the first place. How, for example, was the Newtonian framework found wanting on Newtonian grounds? The question is not, how did the Newtonian community transition from a Newtonian picture of the world to an Einsteinian one, when there were already Newtonian and Einsteinian alternatives on the table? Before we criticize Aristotelian natural philosophers for failing to conceive of the concept of inertia, or eighteenth-century scientists for failing to conceive of non-Euclidean geometries, we should note that these ideas cannot be articulated from within the Aristotelian and classical physics frameworks, respectively (Fisch 2017, pp. 74, 158). We should first inquire about how it became possible to articulate these ideas.
With respect to the collectivist option, the issue is whether framework transition is prospectively instigated at the collective level, with individuals retrospectively falling in line or refusing, or whether individuals prospectively instigate framework transition, with everyone else retrospectively choosing between already-given alternatives. This is the fuzziest part of Kuhn's work: how precisely is an alternative generated in the first place, not just when? Following a Wittgensteinian reading of Kuhn, many contend that alternative frameworks are collectively generated, since collective practice grounds normative frameworks (Fisch 2017, p. 76). 6 A consequence of the problem outlined in this section, spelled out in terms of hinge commitments, is that hinge commitments are not "reasons-responsive" (Pritchard 2016, p. 90) – that is, epistemic reasons for analyzing hinge commitments run out. The solution offered in Section 11.3 of this chapter entails that they are reasons-responsive in principle. 7 The Kuhnian analogue for scientific contexts concerns malpractice during periods of normal science. But prudent criticism pertaining to framework principles is deeper. It's criticism not of how I apply my norms but of my norms themselves. 8 Do I need to actively seek out trustworthy critics? Do I have a default entitlement to each framework principle, so long as I am open to potential hypothetical challenges from trustworthy critics? These questions deserve answers, but the purpose of this chapter is to introduce and delineate the broad elements of foundational epistemic well-being and, in the next section, of a deep epistemic failure I call epistemic myopia. 9 Though this is a relational account, it remains a version of individual critical rationalism, since my rationality remains strictly a matter of my individual critical introspection. I cannot directly judge my framework principle to be
wanting, but I can judge a critic to be trustworthy, and I thereby allow that trusted other to render me ambivalent toward that principle. Once sufficiently ambivalent toward that principle, I can directly appraise it within my normative framework. I don't move from ambivalence to endorsement, revision, or rejection except through my critical introspection. 10 Some mythology surrounding the 1919 Eddington voyages has it that these experiments did just this. Several accounts dispel this notion (e.g. Waller 2002, pp. 53–58). 11 Here and elsewhere in the chapter, the examples given are more often of holistic approaches or systems of principles than of individual principles. Fogelin (2005) is concerned with the former sorts of cases. More recently, Smith and Lynch (2020) draw this sort of distinction toward delineating different types of deep disagreement. I vacillate between types in this chapter, but I don't take this to be salient for this chapter's proposals. 12 Michael Lynch (2016) focuses on epistemic first principles – "principles that announce that basic methods for acquiring beliefs are reliable." When it comes to defending these, he says, "reasons just give out," and so, "our prospects [for defending those principles] depend on whether we can make sense of giving objective practical reasons for our epistemic first principles" (248). According to the relational account presented in Section 11.2 of this chapter, reasons needn't give out. 13 The myopia is worse if I take it to be a necessary condition for trustworthiness that one positively affirm my framework principle. 14 It should be noted that to avoid epistemic myopia is not to deem those who fundamentally disagree with me to be equally well-reasoned or equally justified in their opposing views. What's important is that I am open, through acknowledging the trustworthiness of others, to becoming ambivalent toward my framework principles so that I can rationally make this determination myself.
Changing my mind is not required in order to avoid epistemic myopia. Being capable of reasoning about my framework principles is. 15 I thank Jon Matheson for posing this worry. 16 Spelled out in terms of the epistemology of hinge commitments and deep disagreement, this chapter’s proposals are as follows. Hinge commitments are, in principle, reasons-responsive, to put it in Pritchard’s (2016, p. 90) terms, and their good epistemic standing is maintained in part relationally: through openness and, when it presents itself, through proper response to genuine deep disagreement, which is disagreement about hinge commitments. A proper response occurs in two stages. It is, first, the apt determination of the trustworthiness of another party to a deep disagreement. This is the relational part. Once sufficiently ambivalent to the commitment in question that one is capable of aptly endorsing an argument or arguments for rejecting or retaining the commitment in question, an apt response is, second, the apt endorsement of an argument or arguments for rejecting or retaining the commitment in question. Section 11.4 of this chapter focuses on failure at the first stage, which results in the state of epistemic myopia. 17 I thank Jon Matheson and Kirk Lougheed for very insightful comments. I thank participants at a 2019 symposium on epistemic paternalism held at the Canadian Philosophical Association’s Annual Conference, where I presented an early ancestor of this chapter. Thanks to Kirk Lougheed for organizing that symposium.
References
Battaly, H. (2018). Closed-mindedness and dogmatism. Episteme, 15(3), 261–282. Carnap, R. (1937). The logical syntax of language. London: Kegan Paul, Trench, Trubner & Co. Cassam, Q. (2019). Vices of the mind: From the intellectual to the political. Oxford: Oxford University Press. Fisch, M. (2017). Creatively undecided: Toward a history and philosophy of scientific agency. Chicago: University of Chicago Press. Fogelin, R. (2005). The logic of deep disagreement. Informal Logic, 25(1), 3–11. Frankfurt, H. (1988). The importance of what we care about. Cambridge: Cambridge University Press. Frankfurt, H. (2004). The reasons of love. Princeton, NJ: Princeton University Press. Frankfurt, H. (2005). On bullshit. Princeton, NJ: Princeton University Press. Frankfurt, H. (2006). Taking ourselves seriously & getting it right. Stanford: Stanford University Press. Friedman, M. (2002). Geometry as a branch of physics: Background and context for Einstein's 'geometry and experience'. In D. B. Malament (Ed.), Reading natural philosophy: Essays in the history and philosophy of science and mathematics presented in honor of Howard Stein on the occasion of his 70th birthday (pp. 193–229). Chicago: Open Court. Friedman, M. (2010). A post-Kuhnian approach to the history and philosophy of science. The Monist, 93(4), 497–517. Gersema, E. (2016, December 23). Hard-wired: The brain's circuitry for political belief. ScienceDaily. University of Southern California. Retrieved May 30, 2019 from www.sciencedaily.com/releases/2016/12/161223115757.htm. Harris, S., Kaplan, J. T., Curiel, A., Bookheimer, S. Y., Iacoboni, M., & Cohen, M. S. (2009). The neural correlates of religious and nonreligious belief. PLoS ONE, 4(10), e7272. 10.1371/journal.pone.0007272. Harris, S., Sheth, S. A., & Cohen, M. S. (2008). Functional neuroimaging of belief, disbelief, and uncertainty. Annals of Neurology, 63, 141–147. 10.1002/ana.21301. Janis, I. L. (1971). Groupthink. Psychology Today, 5, 43–46, 74–76. Kaplan, J. T., Gimbel, S. I., Dehghani, M., Immordino-Yang, M. H., Sagae, K., Wong, J. D., Tipper, C. M., Damasio, H., Gordon, A. S., & Damasio, A. (2017). Processing narratives concerning protected values: A cross-cultural investigation of neural correlates. Cerebral Cortex, 27(2), 1428–1438. 10.1093/cercor/bhv325. Kaplan, J. T., Gimbel, S. I., & Harris, S. (2016). Neural correlates of maintaining one's political beliefs in the face of counterevidence. Scientific Reports, 6, 39589. 10.1038/srep39589. Korsgaard, C. (2009). Self-constitution: Agency, identity, and integrity. Oxford: Oxford University Press.
230 Chris Dragos Kuhn, T. (1970). The structure of scientific revolutions. Chicago: University of Chicago Press. Lynch, M. P. (2016). After the spade turns: Disagreement, first principles and epistemic contractarianism. International Journal for the Study of Skepticism, 6(2–3), 248–259. McDowell, J. (1994). Mind and world. Cambridge, MA: Harvard University Press. Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. Peacock, G. (1830). A treatise on algebra. Cambridge: Deighton. Pritchard, D. (2016). Epistemic Angst: Radical scepticism and the groundlessness of our believing. Princeton, NJ: Princeton University Press. Rorty, R. (1989). Contingency, irony, and solidarity. Cambridge: Cambridge University Press. Sagan, K., & Druyan, A. (1997). The demon-haunted world: Science as a candle in the dark. New York: Ballantine Books. Smith, P. S., & Lynch, M. (2020). Varieties of deep epistemic disagreement. Topoi. https://doi.org/10.1007/s11245-020-09694-2. Taylor, C. (1989). Sources of the self: The making of modern identity. Cambridge, MA: Harvard University Press. van Fraassen, B. C. (2004). The empirical stance. London: Yale University Press. Waller, J. (2002). Fabulous science: Fact and fiction in the history of scientific discovery. Oxford: Oxford University Press. Walzer, M. (1988). The company of critics: Social criticism and political commitment in the twentieth century. New York: Basic Books.
12 Intellectual Autonomy and Its Vices

Alessandra Tanesini
In this chapter, I carve out a notion of intellectual or epistemic autonomy that stands in opposition to two families of intellectual vices: hyper-autonomy and heteronomy of the intellect. I suggest that answerability as a form of responsibility for one's beliefs is key to intellectual autonomy. Answerability provides the means to flesh out the idea that to exercise intellectual autonomy is to have one's own reasons for one's beliefs.1 However, the epistemic value of intellectual autonomy so understood is not transparent, since a person can believe something autonomously and yet that belief may be shoddily arrived at or based on poor evidence. The same worry about the epistemic value of intellectual autonomy can be raised for other dominant accounts of this notion. For instance, an agent can be self-reliant in her believing and at the same time be careless. The view that epistemic autonomy consists in the harmonization of one's attitudes faces the same problem, since one can harmonize one's beliefs away from the truth. In addition, there is no guarantee that psychological harmony tracks or reflects logical or probabilistic consistency. So, it would seem at best moot whether there is any value in the notion of intellectual autonomy that is not wholly subsumed into the practical value of the notion of personal autonomy. I answer this worry by proposing that the epistemic value of intellectual autonomy lies, counterintuitively, in its being a necessary condition for an agent to qualify for the role of informant; that is, of someone who is able to convey information by means of testimony. However, to perform this epistemic role, it is not sufficient that agents are answerable for at least some of their beliefs because their reasons for these are their own; agents must also be recognized by other agents as being answerable.
Hence, although it is possible to be intellectually autonomous without being recognized as such, without recognition intellectual autonomy loses its epistemic value (but not its practical value). In sum, intellectual autonomy that is worth having epistemically has others’ recognition as one of its pre-conditions. The chapter consists of four sections. The first argues that intellectually autonomous belief is belief for which the agent is answerable. The
second section demonstrates that the epistemic point of distinguishing agents who are intellectually autonomous from those who are not is that only the former are fit to offer testimony. That is, intellectual autonomy is epistemically valuable because it marks out those who can invite others' trust and thus convey information to them. Since one can invite trust only if one's authority to issue invitations is recognized, the value of autonomy partly depends on others' recognition that one is answerable. The third section analyses how failures of recognition can undermine intellectual autonomy by effectively turning informants into mere sources of information. The final section explores how structural relations of oppression causally influence people's psychologies, contributing to the genesis of the intellectual vices of arrogance and vain narcissism as forms of irresponsible hyper-autonomy, and of servility and timidity as forms of intellectual heteronomy.
12.1 Intellectual Autonomy as Answerability

In this section I argue that intellectual autonomy consists in the proper exercise of one's epistemic agency, understood as the ability to responsibly form, revise, and sustain one's beliefs in the light of one's own reasons. I contrast this view with two other existing accounts of intellectual autonomy. The first equates autonomy with self-reliance (Fricker, 2006). The second identifies autonomy with rational self-governance by means of conscientious reflection about one's beliefs (Zagzebski, 2012). The person who is intellectually autonomous is the person who can make up her own mind as to what she believes. Her views and opinions are her own because they are based on reasons that are also her own.2 One could initially frame these thoughts in terms of non-interference. The autonomous person does not suffer from undue interference from external sources of information such as media outlets or other epistemic agents. She is also not subject to the interference of internal non-rational forces such as desires, wishes, or impulses. The account of intellectual autonomy as self-reliance is designed to screen off external influences. The view that autonomy is rational self-governance is especially suited to neutralize the influence of internal conative and affective forces on belief formation.3 Accounts of intellectual autonomy as absence of interference, despite their initial appeal, are defenseless against the thought that interference is always present in human cognition. Complete self-reliance would be stultifying for human beings who are interested in acquiring knowledge or at least true belief. Much of what we learn, we learn from other people. Thus, either not all outside influence undermines autonomy or autonomy is not something to which we should always aspire in the intellectual domain. Either way we need to distinguish good from bad external influences.
I suspect, however, that it would prove exceedingly hard to formulate a principled demarcation in the vocabulary of self-reliance.4
Similarly, human reasoning is always or nearly always affected by emotional responses. Epistemic feelings and emotions such as doubt, certainty, anxiety, and curiosity have been clearly shown to have widespread influence on human cognitive processing (Dokic, 2012; Hookway, 2008). There is robust empirical evidence that these feelings play an important and epistemically positive role in the self-governance of human cognition (Proust, 2013).5 Arguably, the effective self-management of belief is as much due to affective monitoring and control as it is to reflection. Therefore, either emotional influence on reasoning is not always autonomy undermining or intellectual autonomy itself is not something always to be pursued. Either way, there is no clear criterion based on the notion of reflection that demarcates affective influences that are compatible with autonomy or explains the circumstances in which autonomy is detrimental. Be that as it may, it is possible to begin thinking of autonomy not primarily as the absence of interference with one's thinking by forces outside reason but in terms of epistemic responsibility. According to this view, an agent is intellectually autonomous if she is responsible for at least some of her beliefs. Relatedly, a belief is autonomously held by an agent if she is responsible for it. Autonomy so understood is not a virtue or a character trait. It is a property of those epistemic subjects whose beliefs at least sometimes are responsibly held and sustained. This notion of epistemic responsibility is in turn fleshed out as answerability. That is, a person is epistemically responsible for one of her beliefs if and only if she is answerable for that belief.6 I borrow from Shoemaker (2015) this notion of answerability as a kind of responsibility. He argues that agents are responsible in the sense of being answerable for those actions and beliefs that are the expression of the quality of their judgment.
More specifically, agents are answerable for those views and behaviors that they are capable of justifying by supplying reasons in their support and by considering whether their conduct and beliefs are better than some relevant alternatives. For example, a person who decides to set off for a sailing trip is answerable for her choice if she can assess whether it is a good idea to go sailing by considering reasons that support going but also reasons in favor of a different decision. Of course, the person who decides to go sailing, and sets off without consulting the weather forecast, is answerable for her poor decision making. She might not have good reasons for her decision, and she might have given no thought to the option of staying put. Nevertheless, she is answerable, provided she can assess the situation by considering a variety of reasons in support of alternative choices. Importantly, in Shoemaker’s view the ability to consider reasons for doing or believing something other than what one does or believes is essential to answerability. A person who can trot out reasons in favor of a belief or a course of action but is genuinely unable to entertain alternatives is not answerable for her attitudes
and activities. In his opinion, such a person might be a psychopath. Her actions exhibit a lack of judgment, rather than manifesting its poverty. It might be thought that the view of intellectual autonomy as answerability is a version of the capacity for rational self-governance model. In many ways this observation is correct since answerability is often said to require this capacity (Scanlon, 1998). However, the kind of rational self-governance presupposed by answerability is different from Linda Zagzebski's account in several respects. First, Zagzebski presupposes that self-governance requires the ability to critically examine one's attitudes by way of reflection. In her view, this ability partly consists in "making higher-order judgements about the components of the self that ought to change" (2012, p. 236). Instead, the ability to justify one's views and to compare them with alternative beliefs, which I take to be necessary for autonomy, does not require the capacity to reflect on one's beliefs. It is sufficient for answerability that one is responsive to both the reasons in favor of, and against, an attitude. It is enough that one can evaluate the evidence supporting one's opinion and the opposite point of view. Unlike Zagzebski's, the model of rational self-governance as answerability does not require the presence of second-order beliefs about one's first-order beliefs. Second, in Zagzebski's account, rational self-governance is identified with minimizing cognitive dissonance. In her view, the conscientious agent surveys her attitudes and seeks to harmonize them, in the sense of reducing any felt psychological tension among them. Her account of reasons falls out of this structural view of rationality. Reasons, then, would be the considerations endorsed by the rational person. This is the person in whom cognitive dissonance has been dissolved.
Yet such a person might engage in extensive wishful thinking if she has ended up harmonizing her beliefs to make them consistent with her desires (Fricker, 2016, p. 163). The answerability model I advocate relies instead on a more substantial notion of reason as a consideration in favor of a belief, a choice, or a course of action.7 The model of intellectual autonomy as answerability is intuitively attractive because it offers a plausible articulation of the thought that the intellectually autonomous agent is the person whose beliefs are based on her own reasons. According to this view, to base one's beliefs on reasons that are one's own is to be answerable for those beliefs because one can evaluate the reasons for and against them. In short, to be autonomous is to be answerable for one's beliefs since these reflect the quality of one's judgment (or evaluative abilities). Plausible notions of epistemic praiseworthiness and blameworthiness also fall out of this account, as one would expect given that autonomy is necessary for epistemic responsibility. Epistemically praiseworthy belief is belief that is to the credit of the agent who is answerable for it. Similarly, epistemically blameworthy belief is belief to the demerit of the answerable agent. Both categories apply exclusively to beliefs that reflect the epistemic quality of the agent's judgment and thus are autonomously
held. Blame is fitting only when the belief reflects fallacious inferences, neglect of available counter-evidence, culpably mistaken assessments of the probative qualities of one's evidence, and so forth. Praise is fitting only when the belief reflects an excellent use of the agent's evaluative abilities in ways that are intellectually virtuous (e.g., open-minded).8 Finally, the model's plausibility is strengthened by its ability to yield the intuitively correct answer in several hypothetical examples. For instance, on the one hand, the person who has been indoctrinated by a guru is not intellectually autonomous to the extent to which she has lost the ability to evaluate reasons against what her leader tells her to believe. Indoctrination undermines the abilities required for answerability, so that the follower is no longer able to even entertain the possibility that the guru is not to be trusted. The follower might trot out reasons to endorse the guru's beliefs, but if she has really been indoctrinated, she is not answerable for her beliefs as she is not able to evaluate any alternatives. On the other hand, the person who believes something on the mere say-so of another is autonomous in her believing if she is answerable for trusting the other's testimony. She is answerable when her trusting attitude reflects the quality of her judgment. Provided that this agent can consider reasons not to trust someone's testimony, her judgment to trust is her own. This pair of examples suggests that outside forces undermine autonomy only when they erode a person's ability to consider alternatives to her beliefs but are otherwise compatible with it. It is now also possible to demarcate those internal influences that undermine intellectual autonomy from those that do not. An individual who, due to extreme tiredness, makes a reasoning mistake has been temporarily incapacitated in her evaluative abilities.
Her mistake is not reflective of the quality of her judgment and this is why on this occasion her belief is not autonomously held. However, a person who trusts her intuition on some specific point could be exercising her autonomy and, in fact, her belief might even be praiseworthy if her intuitions reflect her expertise on the given subject matter. To summarize, I have proposed that we think of intellectual autonomy as the possession of the capacities required for answerability. These are the abilities to evaluate the reasons for one’s views but also for those opinions that one opposes. I have fleshed out this idea by saying that autonomously held beliefs are those that reflect the epistemic quality of the judgment of the believer. I have also added that epistemic blame and praise are fitting only when the agent is answerable for her belief.
12.2 Answerability and Recognition

In the previous section I have identified intellectual autonomy as a property of those subjects with the evaluative capacities required to provide reasons for their beliefs and consider alternatives to them. So understood,
intellectual autonomy is not a relational property. Hence, were there to be a thinker who is completely deprived of any contact with other epistemic agents, that person might be autonomous in her believing. She would still be answerable to herself for her views. In this section, I argue first that, although intellectual autonomy is not a positive epistemic status, it demarcates something that is of great epistemic importance since it identifies an essential property of those who are fit for the role of informants. Second, I show that, to perform this role, possession of the evaluative abilities that make one answerable for one's beliefs is not sufficient; one must also be recognized by other epistemic agents as a person whose reasons are her own. Absent this recognition, an epistemic agent cannot offer reasons on which others depend for their testimonial beliefs. Intellectual autonomy is not itself the mark of a positive epistemic status since one can be autonomously unreliable, shoddy, and somewhat prejudiced. This is apparent when self-reliance is singled out as the mark of intellectual autonomy, since a person can be extremely bad at finding and evaluating evidence and counter-evidence for her beliefs whilst carrying out these activities without depending epistemically on any other agent. But this is also true in Zagzebski's model of autonomy as critical self-reflection. In her view, reflection requires doing conscientiously what we already do naturally. What we do naturally is seek to reduce felt psychological tension or cognitive dissonance. We perform the task conscientiously when, animated by the desire for truth, we use our faculties as well as we can (Zagzebski, 1996, p. 55). But now suppose that felt psychological tension is not a good indicator of contradiction or of probabilistic inconsistencies.
The careful use of our faculties to achieve harmony among our attitudes could lead us to form beliefs that are based on fallacious reasoning, if the agent experiences some consistent beliefs as dissonant or some inconsistent ones as harmonious.9 The conception of intellectual autonomy as answerability is no different in this regard. The individual who is answerable for her beliefs has reasons that reflect the quality of her judgment, but this can be very poor indeed. It might be argued that even if intellectual autonomy is not sufficient to confer any positive epistemic status on belief, autonomy is necessary for beliefs to have some positive epistemic status. The point is, at best, moot. One might have a true belief that one has not autonomously acquired or maintained. It also seems possible to continue to know something having both forgotten how one knows it and lost one's ability to evaluate properly that content. This might happen in the early stages of dementia, for example. That said, it is not implausible that intellectual autonomy is a necessary condition for some internalist notion of justification. But, even if that is granted, it only serves to raise again the question of epistemic significance. In the same way in which one might wonder why we should care about intellectual autonomy since it is not a truth-conducive
notion, one may question the epistemic value of internalistic accounts of justification. Be that as it may, in what follows I argue that intellectual autonomy as answerability is an epistemically useful notion because of its connection to the role of an informant in the epistemic practice of testimony. More precisely, according to the view defended here, being an informant is a social epistemic status that entitles one to play the role of giver of testimony. Individuals function as informants when they are properly treated as answerable for their beliefs. Thus, functioning as an informant comprises two requirements. The first is intellectual autonomy, which consists of the intellectual abilities that make one answerable for at least some of one's beliefs. The second is others' recognition that one is answerable.10 Testimony is essential to the transmission of knowledge. But in order for the practice to work well we need to mark out those who are fit to play the role of testifier. Craig (1990) has argued that we first developed for this purpose the concept of "good informant" out of which we devised the concept of "knower" to include good informants who cannot be recognized anywhere and by anyone. This account implicitly subscribes to the indicative picture of telling that Moran (2018) has extensively criticized. According to this view, a speaker's testimony is evidence of what the speaker believes but supplies by itself no epistemic reason to believe what the speaker says. Craig's good informant is the person who reliably believes what is true, and honestly says what she believes. Given the indicative picture of telling, Craig's good informant is the ideal person on whose testimony others can rely. In my view the ordinary epistemic practice of testimony does not fit the indicative picture (Tanesini, 2020a).
Instead, it is best understood as involving the performance of speech acts of telling which institute novel obligations akin to assurances. If this is right, the significant distinction in this epistemic practice is not that between good informants and everything else, including mere sources of information as well as bad informants.11 Rather, the crucial contrast is between informants (both good and bad) and mere sources of information. This contrast is crucial because it distinguishes the category of entities capable of giving assurances from those that lack this capacity. Intellectual autonomy as answerability specifies what it takes to assure, since assuring involves making oneself answerable for one's claim. In what follows, I first make intuitively plausible the claim that only intellectually autonomous agents can be informants. Second, I deploy Moran's (2018) account of testimony as the offer of an assurance to develop the connection between answerability for belief and being an informant. Third, I show that to be an informant one must be recognized by others as someone capable of answering for one's beliefs. We can get a grip on the notion of being an informant by demarcating it from the concept of being a source of information. Intuitively only
agents can be informants, while other entities function exclusively as sources of information. That said, some human beings are also not fit to be informants, while others who are capable of informing function at times as sources of information. Seeing what these examples have in common helps to single out the qualities required for functioning as an informant on a given occasion. Those human beings who are not fit to be informants are those whose evaluative judgment is missing or severely impaired due to mental illness, dementia, or psychopathology. Arguably, very young children are also not able to function as informants because they have yet to develop their evaluative abilities. Thus, people who cannot be informants are those who are not answerable for their beliefs due to their impaired or unformed judgment. There are also occasions when people who are capable of informing function as sources of information. There are at least two different cases. First, one might observe a person and infer some information from his appearance or some other fact about him. For instance, if a student arrives late to class covered in sweat, I infer that he has rushed to try to be punctual. I obtain this information from the student, but it would be inappropriate to say that he has informed me.12 Second, one might extract information from a person by means of torture, or by causing her to be extremely tired and stressed. In this situation also, it is appropriate to say that the interrogator gets some information out of an individual, but the person subjected to the treatment does not inform the interrogator since her speech, whilst constituting an intentional action, is not a genuine act of telling, at least if the latter requires the giving of an assurance. Assurances must be given freely.
In this regard they are akin to genuine promises and offers since these also cannot be extorted.13 This is why the subject who speaks under torture or extreme duress is not functioning as an informant but as a mere, and most likely unreliable, source of information.14 These considerations suggest that the notion of being an informant is tied up with the possession of those evaluative faculties required to be answerable for one's beliefs. These same considerations also indicate that people function as informants in the epistemic practice of testimony when their speech is freely offered, precisely in the sense of being offered as something for which the speaker takes responsibility by making herself answerable for her beliefs. If this is right, the notion of intellectual autonomy is tied up with that of being an informant since only intellectually autonomous agents can genuinely convey information to others by way of testimony, because only those people who are autonomous have beliefs for which they are answerable. These connections between autonomy, answerability, and testimony are best articulated using Richard Moran's (2018) assurance theory of testimony.15 In his view, when a person provides another with information by means of testimony, then that person tells the other something
that conveys that information. In other words, testimony is conducted through the speech act of telling.16 Further, in Moran's opinion, the giving of an assurance, which he models on the idea of promising or giving one's word, is a necessary condition of telling. Thus, the offering of a piece of testimony is an invitation directed to the addressee to trust the speaker based on the fact that the speaker freely undertakes a responsibility to be answerable for her beliefs. That is, the speaker commits to having reasons that speak in favor of the piece of information that she is communicating via testimony. But, of course, the speaker can so commit only if she possesses the evaluative faculties that are constitutive of answerability. Hence, in offering her testimony a speaker presents herself as being answerable for her beliefs, and invites others to share that presumption. In Moran's view, freely giving an assurance provides every agent (including but not exclusively the addressee) with a defeasible epistemic reason to believe the testimony. That is, one can reason that the agent would not freely risk the burden of justifying the belief and the costs associated with failure to do so unless the agent had good reasons for the truth of her belief. However, the giving of an assurance also supplies the addressee specifically with an entitlement to censure the speaker if it turns out that she cannot discharge her obligation to answer for her belief.17 For Moran, telling, as the giving of an assurance, is a communicative act, and as such it requires uptake on the part of the addressee to be fully felicitous. Moran conceives of uptake in Gricean terms according to which it consists in the addressee recognizing that the speaker intends that the addressee has a reason to believe what he is told partly in virtue of recognizing the speaker's intention to supply him with such a reason by shouldering responsibility for the truth of what she is telling.
But there are alternative accounts of uptake that could be adopted instead. These do not hinge on the audience's recognition of the speaker's intention but on its recognition of the speaker's ability to testify, which is manifested in treating the speaker's speech act as an instance of telling rather than of some weaker speech act in the assertive family such as suggesting, proposing, speculating, or even perhaps guessing (cf., Kukla, 2014). Be that as it may, it is not wholly up to the speaker whether she is fully successful in performing the speech act necessary to testify. Instead, her audience must at least grant the speaker's presumption that she is answerable for her beliefs. Unless this presumption is accepted, it is impossible for the speaker to offer a testimony. This is so, because telling requires providing an assurance, something that the speaker can do only if her ability to shoulder the responsibility for the truth of her beliefs is acknowledged. If this presumption is not granted, the speaker is illocutionarily disabled since her speech act misfires by, for instance, being ignored or by being treated as mere speculation. I have suggested in this section that the primary epistemic value of intellectual autonomy is as a necessary condition for being an informant.
I have now argued that a person cannot fully function as an epistemic informant unless others share her presumption that she is intellectually autonomous. In this sense, social recognition that one is answerable for one's beliefs, in addition to the actual ability to answer for one's beliefs, is also necessary if one is to function as intellectually autonomous individuals can legitimately expect to function. To conclude, although the capacities constitutive of intellectual autonomy are individual cognitive abilities, an important reason why their possession matters epistemically is that they qualify those who have them for the crucial social epistemic role of being an informant. Hence, those individuals who are intellectually autonomous but whose autonomy is not socially recognized are denied the possibility to perform the social epistemic role that is fitting of their status.18
12.3 Oppression, Testimonial Injustice, and Intellectual Autonomy

I have argued in Section 12.2 that when agents invite others to believe what they are telling them, they also invite others to share the presumption that they (the speakers) are answerable for their beliefs. I have also argued that if this presumption is not granted, speakers are disabled in their ability to perform the speech act of telling and thus ultimately to offer a testimony. Finally, I have also claimed that if people are unable to function as informants, they are unable to perform those functions that intellectually autonomous agents should legitimately expect to perform. Thus, insofar as the epistemic value of autonomy lies in the fact that it qualifies one for the role of being an informant, failure of recognition disqualifies one from the social status that gives autonomy its epistemic value. In this section, I contrast two different examples of testimonial injustice to illustrate two different ways of wronging an epistemic agent. The first injustice consists in treating someone as a lesser quality informant than she deserves to be treated. The second injustice consists in treating a person, who legitimately expects to be treated as an informant, as if she were a mere source of information. This second kind of case is naturally interpreted as denying an agent's autonomy; in the first kind of case autonomy is diminished but not denied. We owe the concept of testimonial injustice to Miranda Fricker (2007) who defines it as the kind of injustice that befalls epistemic agents whose credibility is deflated due to persistent identity prejudice. A paradigmatic example of this phenomenon would be a case where a woman's testimony on a mathematical problem is not taken as seriously as it deserves because her addressee holds prejudicial views about women's mathematical abilities.
In this instance, the listener attributes to the speaker a degree of credibility on the matter at hand that is less than the speaker deserves.
Intellectual Autonomy and Its Vices 241
This mistaken attribution is not an honest mistake; it is culpable because it is based on a prejudice. It makes sense to think of the hearer's conduct as wrongful because, by systematically deflating the speaker's credibility, he is also, out of prejudice, attributing to her a worse quality of judgment than she deserves. But testimonial injustice can also cut deeper than this. Fricker's own two paradigmatic examples of testimonial injustice arguably exemplify this more extreme form of testimonial injustice. For instance, Margie Sherwood's claim that Ripley killed her boyfriend is dismissed by the boyfriend's father on the grounds that it is based on female intuition rather than facts (Fricker, 2007, p. 9). This dismissal represents Margie as being unable to evaluate evidence since she cannot distinguish reasons from hunches. It is not designed to portray her as less credible than she actually is, but to present her as unable to perform the role of informant. This form of epistemic injustice does not consist in deflating the credibility of an agent by treating her as being less good than she is. Instead, she is treated as someone who cannot have reasons of her own because her judgment is impaired. Margie, as Fricker also claims, is treated as a mere source of information rather than as an informant. This deeper kind of epistemic injustice disables Margie, on that occasion, so that she cannot testify as to the probable causes of her boyfriend's disappearance. This disablement does not mean that she is unable to responsibly form beliefs on the matter. But it deprives her of the benefits that give intellectual autonomy its epistemic value, since it makes it impossible for her to qualify in that instance as an informant.
12.4 Heteronomy and Hyper-autonomy

I have argued that failures of recognition can rob intellectually autonomous agents of the social epistemic value that befits them as capable of being informants. In this section, I first note that failures of recognition can be causally responsible for changes to people's psychology. These changes result in a progressive loss of intellectual self-trust that in turn leads to a reduction in the ability to form beliefs for which one is answerable because they reflect the quality of one's own judgment. That is, intellectual autonomy is often to some degree causally dependent on its social recognition. Second, I elaborate on these points to explain how structures of oppression might contribute to the genesis of the intellectual vices of servility and timidity in the subordinated and those of intellectual arrogance and narcissism in the privileged. I conclude the section by showing that timidity and servility are forms of heteronomy of the intellect while arrogance and narcissism are tantamount to hyper-autonomy. People who are not thought of as answerable for some or all of their beliefs are not asked to provide an opinion. They are not invited to join
the conversation. If they nonetheless volunteer some information, their contributions might be discounted, dismissed, given little weight, or taken as suggestions rather than assertions. This kind of treatment is corrosive of intellectual autonomy since the agent whose contributions are regularly dismissed is likely to suffer a loss of trust in the quality of her own judgment. When this happens, she might stop seeking to form her own reasons for her beliefs because she judges the quality of her critical abilities to be poor and thus not fit to be relied upon. These considerations indicate that lack of recognition can change agents' psychology because it can erode or impair those abilities that are constitutive of autonomy.19 I borrow from Young (1990) the notion of oppression as a matter of structural relations among social groups leading to unfair distributions of goods or to failures of recognition. Young argues that these forms of injustice can be prevalent in democratic societies even in the absence of overt coercion. Further, as these relations are structural, it is possible for members of some social group to be oppressed even in circumstances where there is no specific social group whose members can be identified as the oppressors. Young identifies five forms, or faces, of oppression: exploitation, marginalization, powerlessness, subordination (or cultural imperialism), and violence. These are not intended to be exhaustive or mutually exclusive. Exploitation causes individuals to sacrifice their interests to serve those of other people. In the epistemic domain, it might involve engaging in extensive cognitive labor designed to address primarily other people's epistemic needs rather than one's own (Berenstain, 2016). Marginalization excludes people from the production and distribution of goods. This includes being shut out of the places where knowledge is produced or distributed, such as universities.
Powerlessness occurs when one has no control over the structure of one's daily life. In the epistemic domain this is especially prevalent when members of some groups, such as ill people, are treated as objects of others' research but are not allowed to contribute to it. Subordination occurs when some groups in society are represented as embodying the standard that others should also attempt to match. When these representations are internalized, they cause individuals who belong to subordinated groups to think of themselves as inferior. Finally, violence is the creation of a climate of fear through mocking, intimidation, and abuse. In the epistemic domain this is exemplified by the online abuse to which women who speak up against misogyny are subjected. These systematically unjust structural relations have pervasive and well-established effects on the psychology of those who are oppressed in these ways. My focus here is on the damage done to intellectual self-trust. Intellectual self-trust requires an optimistic attitude toward one's ability to be successful in one's epistemic activities (Jones, 1996, 2012). This attitude is partly an affective positive stance of self-confidence about one's
own evaluative abilities. Such confidence is eroded when one loses trust in the quality of one's judgment. It is inflated when one overestimates the quality of one's judgment because one is full of oneself. In what follows, I elaborate on these features of self-trust. I show that oppression can erode or inflate it, before showing how its erosion causes heteronomy and its inflation hyper-autonomy. Intellectual self-trust is trust placed in one's own intellectual abilities. Such trust involves a disposition to rely on these abilities but also a positive affective attitude of confidence about one's abilities.20 An optimistic stance about one's ability to deliver is necessary for self-trust since the person who relies on her faculties, but is doubtful about the epistemic value of their outputs, is not trusting her own faculties. Further, this trustful attitude is partly affective because, as Jones (2012) has convincingly argued, a person might believe that her faculties are trustworthy without feeling able to trust them. For example, one can be in the grip of anxiety about having forgotten one's passport at home, and thus keep looking for it, even though one distinctly remembers putting it in one's bag. Affective self-confidence in one's intellectual abilities is a component of self-trust that is susceptible to social influence. It is, as Jones also remarks, "socially porous" (2012, p. 245). Those relations that I have described above as oppressive are especially effective in undermining the intellectual self-confidence of those who are oppressed and in inflating the self-confidence of those who are privileged. The existence of these structural relations generates misleading reasons that cause those who are wronged by them to believe that their intellectual abilities are epistemically worse than they are, and those who benefit from these structural relations to overestimate the quality of their intellect.
But these same structural relations also directly impact the affective stances that individuals have toward their intellectual abilities by inducing a loss of confidence in some through the creation of a climate of fear, anxiety, and shame, and by causing overconfidence in others by generating an environment that stimulates complacency and self-infatuation. Exploitation, powerlessness, and subordination cause individuals to occupy positions of inferiority where they are often required to serve the needs of others, to structure their days in accordance with what others command and, if they have come to internalize social norms, to see themselves as inferior to those who occupy positions of privilege. Being repeatedly subjected to this treatment is a source of shame since it causes one to think of oneself as less worthy than other people. But people who are chronically made to feel ashamed of themselves cannot at the same time sustain a confident optimistic outlook about their intellectual capacities. Thus, by inducing shame these structural relations of oppression cause members of oppressed groups to experience a loss of intellectual self-trust.
As I have argued elsewhere, entrenched dispositions to think of oneself as intellectually inferior and to experience chronic shame about one's alleged shortcomings, when combined with a tendency to attribute any failures to lack of ability and any success to environmental circumstances, are among the characteristic manifestations of the vice of intellectual servility or obsequiousness (Tanesini, 2018, 2021). Those suffering from this vice are also predisposed to try to please other more powerful individuals. This tendency is often a consequence of being exploited since survival in these conditions requires that one becomes adept at recognizing others' needs and servicing them well. If these considerations are correct, then those who are subjected to some forms of oppression also suffer the psychological damage of losing trust in their own intellectual abilities and of developing a tendency to seek to ingratiate themselves with those who are more powerful than they are. Marginalization and violence cause oppressed individuals to be pushed out of the system for the production and distribution of knowledge or to prefer exclusion to the risk of being the target of intimidation, harassment, abuse, or threats. Being treated in these ways denies one access to resources that help to improve one's intellectual capacities, but it also engenders a heightened sense of vulnerability. That is, it impacts what Jones (2004) has described as basal security, which is an affective assessment of one's vulnerability to threats. Those who have low basal security are always fearful, even when they are not actually afraid. As a result, they are also especially risk-averse and hypervigilant about threats. The experience of oneself as especially vulnerable impacts one's sense of oneself as able to cope and thus impacts confidence in one's ability and generates widespread anxiety and pessimism about one's future (Govier, 1993).
Hence, this affective stance is also responsible for a resigned, pessimistic outlook that is incompatible with the optimism characteristic of self-trust. The dispositions to retreat, to avoid confrontation, to self-silence are among the manifestations of the vice of intellectual timidity (Tanesini, 2018, 2021).21 Those who suffer from this vice are too scared to express their own opinions. Over time they might also develop the belief that they are silent because they have nothing of interest to contribute. This conviction might be based on trusting the judgment of others who do not take seriously what one says. But it might also be the result of seeking to address cognitive dissonance between one's self-imposed silence and one's self-esteem. It is easier to live with oneself if one rationalizes one's silence by thinking that one has nothing to add rather than thinking that one lacks the courage to speak out. Despite their differences, both intellectual servility and timidity erode intellectual autonomy understood as the ability to answer for one's opinions because these reflect the quality of one's judgment or evaluative abilities. This erosion takes two forms. First, timidity and servility
impact the quality of a person's judgment. They, therefore, contribute to making that person a worse thinker. To see this, consider that those who are timid or servile do not trust their intellectual abilities. This lack of self-trust includes pessimistic tendencies about one's ability to improve and a tendency to attribute failures to one's own shortcomings. These attitudes cause one to be resigned to failure, to avoid trying to improve or to stretch one's horizons. All these tendencies are likely to atrophy one's intellectual abilities. In this sense, lack of confidence in one's intellectual abilities becomes a self-fulfilling prophecy since it induces attitudes that are detrimental to one's intellectual development. But intellectual servility and timidity also erode autonomy in a second and deeper way. They cause one to become intellectually heteronomous, rather than merely a bad autonomous epistemic agent. To see this, consider that intellectual servility includes a tendency to ingratiate oneself with those who are powerful. Thus, obsequious individuals are predisposed to defer to others' opinions more often than they should. It is this excessive deferential tendency that causes them to express opinions for which they do not have their own reasons. They are often not answerable for their beliefs because these do not express the quality of their own judgments but rather that of the powerful individuals they seek to please. Relatedly, intellectual timidity predisposes people not to have an opinion on many subject matters. When this happens, these individuals do not have beliefs for which they are answerable. Thus, intellectual timidity is another cause of heteronomy of the intellect because, by depriving people of the courage necessary to have a view for which one is answerable, it renders them mute. Structural relations of oppression also have detrimental effects on those privileged individuals who benefit from others' oppression.
My interest here lies with those effects that have an impact on self-trust by promoting its unwarranted inflation. Those who are the beneficiaries of others' exploitation are used to having their needs serviced by others. They are thus likely to find that things are easy for them since others have already smoothed their path. This sense of ease, combined with an experience of powerfulness gained at the expense of others' powerlessness, is bound to produce an inflated sense of one's own abilities. One might thus develop a tendency to attribute to one's own capacities successes that are at least partly caused by others' assistance, but also by the absence of challenges since potential competitors have been marginalized or have, out of fear of violence, self-silenced. In these ways, exploitation, marginalization, powerlessness, and violence contribute to misleading privileged individuals into thinking that they are cleverer, more creative, and more intelligent than they are. But benefiting from these injustices also directly affects one's self-confidence. Those who are made to feel powerful feel invulnerable; they are thus prepared to take more risks than they would otherwise. The feeling that success comes relatively easily to them might also make them feel superior to others who are less successful than they are. These
feelings and emotions combine into an over-optimistic form of self-confidence in one's intellectual abilities. These dispositions and affective attitudes are characteristic of the intellectual vice of arrogance, which includes feelings of superiority, of invulnerability, a sense that one's opinions cannot be improved upon, and a conviction that one owes one's successes primarily to one's own talents (Tanesini, 2021). In addition, individuals who benefit from cultural imperialism are invited to think of people like them as the standard by which the performance of members of other groups is to be gauged. They are thus likely to suffer from a tendency described by Spelman (1990) and by Lugones (2003) as "boomerang vision." Those who exhibit it evaluate others positively to the extent that they resemble themselves, but do not think that a feature they have might be good because members of some other group have it. This egocentric approach seems characteristic of arrogance, but also of narcissism, which in the intellectual domain, for instance, manifests itself in behaviors that evince a conviction that everyone else must be interested in what one has to say. Both intellectual arrogance and narcissism are harmful to individuals' intellectual autonomy. First, these epistemic vices contribute to making people worse thinkers by causing them to overestimate the epistemic quality of their evaluative abilities. But second, they also undermine intellectual autonomy itself. I argued in the first section of this chapter that to be answerable for one's own beliefs one must be able to supply reasons for these, but also to assess contrary evidence. It is only when one can consider reasons against one's views that the reasons one has reflect the quality of one's own good or bad judgment. Arrogance and narcissism impair a person's ability to take seriously the possibility that she might be mistaken.
Insofar as the arrogant or narcissistic person can appreciate contrary views, even though she does not pay attention to them, she is autonomous in her bad thinking. However, there are individuals whose arrogance or narcissism goes so deep that their capacity for assessing alternatives to their beliefs is reduced. These people often seem to retrofit the reasons they trot out to the beliefs that they already hold. When this happens, these individuals' autonomy has been eroded because their excessive self-confidence has impaired their judgment since they have lost the ability to evaluate alternatives to their viewpoint. In conclusion, I have argued that intellectual autonomy is best thought of as the quality of epistemic agents who are responsible for their good or bad believing. I have fleshed out this notion in terms of answerability. I have also argued that the point of having a notion of intellectual autonomy is to identify those agents that are fit to be informants in the epistemic practice of testimony. I have shown that to occupy this role, individuals must be answerable for their beliefs, but also recognized as such. In this sense, the value of intellectual autonomy is partly constituted
by social relations of recognition. In the final section, I have argued that systemic relations of oppression are causally responsible for deflating or inflating people's confidence in their evaluative abilities. These influences on people's psychology undermine intellectual autonomy by promoting heteronomy in some and hyper-autonomy in others.22
Notes

1 The idea that the reasons for one's belief are one's own is ambiguous. Each conception of autonomous belief provides its own different disambiguation of this core idea.
2 These reasons include reasons to trust the testimony of some others. Her reasons to believe some testifiers (but not others) are her own in the sense that they reflect the quality of her judgment about whom to trust.
3 So understood, autonomy is a matter of degree and inversely proportional to the influence of external and non-rational influences. Thanks to Jon Matheson for stressing the point.
4 Such demarcation can be made in terms of those influences that promote knowledge or true belief and those that are obstacles to their acquisition and preservation. But autonomy as self-reliance would be orthogonal to these concerns. See Code (1991) for a different criticism of intellectual autonomy as self-reliance.
5 There is, however, disagreement as to whether these feelings and emotions are metacognitive states (Carruthers, 2017).
6 Responsibility for belief and other cognitive states is not limited to epistemic responsibility. It is only this latter that I identify with answerability as the key to intellectual autonomy. Thus, we might be morally responsible for cognitive biases even though we are not epistemically responsible for them.
7 For this chapter, I set aside difficult questions about epistemic reasons for belief and whether these concern exclusively evidence in favor of said belief.
8 There are many accounts of responsibilist intellectual virtues. I take no stand on this issue here as all accounts agree that people are worthy of praise when they believe virtuously.
9 Be this as it may, if it is instead impossible in Zagzebski's account to be a bad but autonomous thinker, then her conception of intellectual autonomy is highly revisionary.
10 Recognition is here thought of as an attitude that tracks independent qualities that are constitutive of autonomy. Hence, recognition might be misplaced. One might fail to recognize individuals who are answerable or might misrecognize as answerable individuals who lack autonomy.
11 For defenses of the assurance picture in different guises, see Moran (2018) and Faulkner (2014).
12 See Fricker (2007, p. 132) for this kind of case.
13 That is, the obligations one acquires by promising or offering must be voluntary if they are to be binding.
14 Moran (2018, p. 62) argues that speaking under duress still constitutes telling because it is unlike speaking under hypnosis, which does not. I disagree. The difference between these two cases is that the person who speaks under duress is acting intentionally whilst the person under hypnosis is not. However, speaking under duress is not intentional under the description of being a telling because it does not constitute the giving of an assurance, since the latter must be given freely, as Moran himself holds.
15 I have offered an alternative to Moran's view in my (Tanesini, 2020a). The differences between our positions do not matter much for the claims I wish to defend in this chapter, since I agree with him that telling is a communicative speech act that misfires without uptake. For this reason, I use Moran's account, with which the reader is more likely to be familiar.
16 There are tricky issues here since there seem to be instances of testimony that do not require telling. I set these cases aside.
17 Moran uses the vocabulary of accountability to flesh out the obligation undertaken by the speaker toward her audience. This is the relation that licenses specifically the addressee's censure of the speaker. What warrants that censure is that the speaker is blameworthy for her belief because it reflects the poor quality of her judgment.
18 In this regard my account differs from several feminist accounts of autonomy as answerability. Those accounts often identify answerability with a disposition to make oneself accountable to others (Code, 2006; Grasswick, 2019). In my view, it is possible to be autonomous and to lack this disposition. Autonomy combined with an unwillingness to make oneself responsible to others for one's claims is, as I briefly show below, a trademark of the vice of intellectual arrogance.
19 For similar arguments with regard to personal autonomy, see Govier (1993), McKenzie (2008), and Gonzalez de Prado (2021).
20 This is not a full analysis of self-trust, which in my view also involves confidence in one's willpower (Tanesini, 2020b).
21 Note that it might be prudentially rational to exhibit these dispositions in conditions of oppression even though they typify an intellectual vice.
22 Thanks to the editors of this volume for some extremely helpful comments on a first draft.
References

Berenstain, N. (2016). Epistemic exploitation. Ergo, an Open Access Journal of Philosophy, 3(20170823). 10.3998/ergo.12405314.0003.022.
Carruthers, P. (2017). Are epistemic emotions metacognitive? Philosophical Psychology, 30(1–2), 58–78. 10.1080/09515089.2016.1262536.
Code, L. (1991). What can she know?: Feminist theory and the construction of knowledge. Ithaca: Cornell University Press.
Code, L. (2006). Ecological thinking: The politics of epistemic location. Oxford and New York: Oxford University Press.
Craig, E. (1990). Knowledge and the state of nature: An essay in conceptual synthesis (2nd ed.). Oxford: Clarendon.
Dokic, J. (2012). Seeds of self-knowledge: Noetic feelings and metacognition. In M. J. Beran, J. L. Brandl, J. Perner, & J. Proust (Eds.), Foundations of metacognition (pp. 716–761). Oxford: Oxford University Press.
Faulkner, P. (2014). Knowledge on trust. New York: Oxford University Press.
Fricker, E. (2006). Second-hand knowledge. Philosophy and Phenomenological Research, 73(3), 592–618. 10.1111/j.1933-1592.2006.tb00550.x.
Fricker, E. (2016). Doing (better) what comes naturally: Zagzebski on rationality and epistemic self-trust. Episteme, 13(2), 151–166. 10.1017/epi.2015.37.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Clarendon.
Gonzalez de Prado, J. G. (2021). Gaslighting, humility, and the manipulation of rational autonomy. In J. Matheson & K. Lougheed (Eds.), Epistemic autonomy. New York and London: Routledge.
Govier, T. (1993). Self-trust, autonomy, and self-esteem. Hypatia, 8(1), 99–120. 10.1111/j.1527-2001.1993.tb00630.x.
Grasswick, H. (2019). Epistemic autonomy in a social world of knowing. In H. D. Battaly (Ed.), The Routledge handbook of virtue epistemology (1st ed., pp. 196–208). New York and Abingdon: Routledge.
Hookway, C. (2008). Epistemic immediacy, doubt and anxiety: On a role for affective states in epistemic evaluation. In G. Brun, U. Doguoglu, & D. Kuenzle (Eds.), Epistemology and emotions (pp. 51–65). Aldershot: Ashgate.
Jones, K. (1996). Trust as an affective attitude. Ethics, 107(1), 4–25.
Jones, K. (2004). Trust and terror. In P. DesAutels & M. U. Walker (Eds.), Moral psychology: Feminist ethics and social theory (pp. 3–18). Lanham: Rowman & Littlefield Publishers.
Jones, K. (2012). The politics of intellectual self-trust. Social Epistemology, 26(2), 237–252. 10.1080/02691728.2011.652215.
Kukla, R. (2014). Performative force, convention, and discursive injustice. Hypatia, 29(2), 440–457. 10.1111/j.1527-2001.2012.01316.x.
Lugones, M. (2003). Pilgrimages/peregrinajes: Theorizing coalition against multiple oppressions. Lanham; Boulder; New York; Oxford: Rowman & Littlefield.
McKenzie, C. (2008). Relational autonomy, normative authority and perfectionism. Journal of Social Philosophy, 39(4), 512–533.
Moran, R. (2018). The exchange of words: Speech, testimony, and intersubjectivity. New York: Oxford University Press.
Proust, J. (2013). The philosophy of metacognition: Mental agency and self-awareness. Oxford: Oxford University Press.
Scanlon, T. M. (1998). What we owe to each other. Cambridge (Mass.) and London: The Belknap Press of Harvard University Press.
Shoemaker, D. (2015). Responsibility from the margins (1st ed.). Oxford: Oxford University Press.
Spelman, E. V. (1990). Inessential woman: Problems of exclusion in feminist thought. London: Women's Press.
Tanesini, A. (2018). Intellectual servility and timidity. Journal of Philosophical Research, 43, 21–41. 10.5840/jpr201872120.
Tanesini, A. (2020a). The gift of testimony. Episteme, 1–18. 10.1017/epi.2019.52.
Tanesini, A. (2020b). Virtuous and vicious intellectual self-trust. In K. Dormandy (Ed.), Trust in epistemology (pp. 218–238). New York and London: Routledge.
Tanesini, A. (2021). The mismeasure of the self: A study in virtue epistemology. Oxford: Oxford University Press.
Young, I. M. (1990). Justice and the politics of difference. Princeton: Princeton University Press.
Zagzebski, L. T. (1996). Virtues of the mind: An inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge: Cambridge University Press.
Zagzebski, L. T. (2012). Epistemic authority: A theory of trust, authority, and autonomy in belief. Oxford and New York: Oxford University Press.
13 Gaslighting, Humility, and the Manipulation of Rational Autonomy
Javier González de Prado
Humility is a matter of being sensitive to one's limitations and responding properly to them.1 A proper response to one's perceived limitations will sometimes involve refraining from acting as if one were not so limited. It seems, therefore, that humility can restrict our autonomy as independent, self-reliant agents. An arrogant agent who tends to disregard her limitations will sometimes be able to get more things done on her own, without having to rely on resources apart from her own capacities. True, in many cases by disregarding her limitations the arrogant agent will encounter foreseeable obstacles that will thwart her plans. But, when she is lucky and her limitations do not actually hinder her projects, an agent not burdened by the awareness of her own limitations can be in a position to achieve goals that would be deemed too risky by more humble individuals. In this way, an audacious, careless mountaineer may manage to climb, without equipment, difficult walls that a more prudent climber only dares to face with the help of ropes. Similarly, it is possible that an over-confident tourist who blindly trusts her memory and orientational capacities successfully navigates a labyrinthine city on her own, after a brief look at the map, while a more cautious person would have to be guided by a cicerone. To be sure, the arrogant tourist is also more likely to get lost, as the careless climber is more likely to have an accident: they both have to be lucky to succeed. Yet this possibility of lucky success is excluded from the start for their humble counterparts, who do not even undertake those risky endeavors. This applies to humility in all walks of life, including intellectual enterprises. Intellectual humility, a proper sensitivity to the limitations of one's epistemic position, may curtail the scope of our epistemic projects.
It sometimes happens that an arrogant researcher, dogmatically confident in her insight, successfully engages in the project of rebutting a well-established theory – whereas a more modest student, sharing the same insight, may be reluctant to judge that she is in a position to disprove that theory, aware of the reasonable possibility that she is overlooking some counterargument in the literature. Intellectual over-confidence may
motivate successful (albeit risky) pursuits that would be discouraged by a humbler disposition of character. Should we conclude that intellectual humility and autonomy are clashing virtues, pushing in opposite directions? On the understanding of autonomy and humility that I will present here, the answer is negative. Indeed, I will argue that humility contributes to rational autonomy. Recklessness may make an agent (luckily) successful, but it also undermines her autonomy. I will think of rational autonomy as the capacity to guide one's behavior and attitudes by responding to (and only to) the available reasons. This capacity involves a disposition to avoid relying on considerations that do not constitute good reasons for one's responses. In particular, treating certain considerations as reasons when it is likely by the agent's own lights that this treatment is somehow defective reflects poorly on the agent's reasons-responding abilities. Proper responsiveness to the fallibility of one's reliance on reasons is a manifestation of humility, which can therefore be seen as a component of rational autonomy. In this way, the stubborn researcher who sticks to her conclusions despite there being reasonable doubts about the reliability of her inferential methods reveals a flaw in her rational autonomy. Rational autonomy, as a result of the role that humility plays in it, is sensitive to the presence of (reasonable) self-doubt. Thus, a way of interfering with someone's rational autonomy is by giving her misleading reasons to doubt herself. This type of manipulation is facilitated by the social nature of our epistemic lives. We depend on others as sources of testimony and also of criticism, which we use to revise our attitudes. An agent with sufficient perceived epistemic authority can mislead others into doubting their own epistemic capacities, thereby unduly impoverishing the scope of their rational autonomy.
Gaslighting is a prominent way of engaging in this form of epistemic manipulation. I will argue that gaslighting exploits the intellectual humility of rational agents in order to undermine their rational autonomy. So, victims of gaslighting are often rational to submit to their gaslighter's pressures – this behavior manifests their intellectual humility, which is part of their competence as autonomous rational agents. In this way, victims cannot always resist gaslighting without failing to manifest full competence as autonomous agents. Gaslighting is fostered by the social structure of our epistemic practices, and therefore counteracting it often requires collective responses, and not just individual resistance.

The plan for this chapter is as follows. In the first section, I present my view of rational autonomy and intellectual humility, according to which humility is an integral part of the capacities that constitute rational autonomy. After that, I discuss how rational autonomy can be exercised in contexts of epistemic dependence. Then, I argue that participants in social epistemic practices can be manipulated into doubting their reasons-responsive competences, with the result that they lose access to some of
252 Javier González de Prado their reasons. In the final section, I examine gaslighting in terms of this form of manipulation, and I discuss its harmful effects on the rational autonomy of victims of gaslighting.
13.1 Rational Autonomy and Intellectual Humility

I understand rational autonomy as the capacity to act and adopt attitudes on the basis of (and only of) reasons available to the agent. When these reasons-responding capacities are applied to the adoption of doxastic attitudes, I will talk of exercises of epistemic autonomy. Epistemic autonomy, therefore, is underwritten by a competence to be guided by (and only by) considerations that constitute reasons for one's doxastic attitudes. This competence involves a reliable disposition to treat a consideration as a reason just in case it is (Sylvan, 2015; González de Prado, 2020). I will regard reasons as facts that favor a certain attitude or response (Dancy, 2004; Alvarez, 2010; Parfit, 2011). In this way, autonomous agents are sensitive to whether a consideration is a fact that favors some attitude, and are disposed to avoid relying on considerations that are not such facts.

Note that I am not claiming that epistemic responses are rationally autonomous only when they are based on actually good reasons. Epistemic autonomy can be manifested in unsuccessful cases in which the agent relies on merely apparent reasons (for instance, when the evidence available is misleading). As long as the agent exercises a reliable (though fallible) competence to be guided by (and only by) reasons, she will manifest her epistemic autonomy, even when conditions are unfavorable and the agent's attitude is based on considerations that are not actual reasons for it. The relevant contrast here is with behaviors that are not produced as a response to considerations that appear as reasons to the agent – for instance, sneezing, knee-jerk reactions, or attitudes resulting from indoctrination or unreflective prejudice. Insofar as these behaviors do not manifest the agent's responsiveness to reasons, they would not constitute an exercise of her epistemic autonomy.

Agents can only base their attitudes on reasons that are accessible to them.
One cannot be guided by facts that one is completely ignorant about. More specifically, in order to respond properly to some reason, one needs to be in a position to recognize that it is a fact favoring the relevant attitude. I will say that an agent has access to some fact as a reason for an attitude if and only if she is in a position to treat it competently as such a reason (Sylvan, 2015; González de Prado, 2020). In turn, an agent competently treats a consideration as a reason if and only if she does so by virtue of exercising a reliable competence to be guided by reasons and only by reasons. Reasons that are accessible for the agent in this way are often called “possessed reasons” (Alvarez, 2010; for the related notion of subjective reasons, see Schroeder, 2007). It should be stressed that possessing a
reason does not only require having epistemic access to the fact constituting the reason. It is also necessary to be properly sensitive to the favoring relation between that fact and the attitude in question (see Sylvan, 2015; Lord, 2018; González de Prado, 2020). For example, as a layperson, I am not in a position to treat competently the premises of a sophisticated mathematical deduction as reasons for its conclusion. Far from being an exercise of rational autonomy, my reliance on those premises to infer the conclusion would be an act of recklessness, which would fail to manifest my responsiveness to reasons (even if I knew the facts constituting the premises). By contrast, an expert mathematician may have access to those reasons, provided that she is properly sensitive to the support relation between the premises and the conclusion of the inference. Autonomous agents, therefore, are self-governed, insofar as they adopt attitudes by virtue of recognizing the normative force of the considerations favoring them.

How does intellectual humility fit in this picture of rational autonomy? Remember that the adoption of an attitude is rationally autonomous if and only if it manifests a competence to rely on, and only on, reasons. This competence does not just involve a reliable disposition to be guided by a consideration when it constitutes a good reason, but also a reliable disposition to avoid being guided by considerations that are not actual reasons (i.e. non-reasons). Broncano-Berrocal (2018) calls this a precautionary competence; that is, a competence to avoid producing a performance when the risks of failure are significant enough. Thus, the capacities constitutive of rational autonomy consist partly of a precautionary competence to refrain from basing attitudes on non-reasons. This is why suspending judgment can be a manifestation of rational autonomy.
By refraining from forming a settled attitude on an issue when one lacks sufficient reasons in favor of any such attitude, one exercises one's precautionary competence to adopt attitudes only on the basis of (sufficient) reasons.

Intellectual humility, as I am understanding it, is a disposition to respond properly to one's epistemic limitations and defects (Whitcomb et al., 2017). An agent who avoids forming an attitude when she lacks sufficient reasons for it manifests this type of disposition. In this case, the relevant epistemic limitation is one's lack of sufficient reasons. Therefore, the dispositions characteristic of intellectual humility contribute to an agent's rational autonomy – in particular, these dispositions underlie the precautionary competence to avoid relying on non-reasons.2 To be sure, humility could undermine one's rational autonomy if it were a matter of underestimating one's epistemic position, for instance by taking one's accessible reasons to be weaker than they actually are. However, following Whitcomb et al. (2017), I am taking intellectual humility to involve a disposition to assess accurately one's epistemic situation and capacities, acknowledging their possible shortcomings, but without overstating
them. Humility, therefore, is perfectly compatible with properly responding to all the reasons one has access to.

As pointed out above, epistemic competences are in general fallible, so their manifestation can fail to achieve its aim. In particular, by refraining from relying on an actual good reason, an agent may exercise a reliable competence to avoid being guided by non-reasons. The agent's competence will misfire in this case, given that the consideration she avoided relying on was actually a reason. However, the disposition displayed by the agent may still be reliable, in the sense that in normal cases it generally leads the agent to avoid forming attitudes on the basis of non-reasons. The agent was just unlucky to be in unfavorable circumstances in which this generally reliable disposition failed to achieve its aim. Indeed, it can be the case that in these unfavorable circumstances the only way in which the agent could treat the relevant consideration as a reason is by manifesting an unreliable disposition to be guided only by reasons (see Lasonen-Aarnio, 2020). That is, in these unfavorable circumstances the agent cannot competently treat the consideration in question as a reason for her attitudes, despite the fact that it actually is such a reason. Therefore, this reason will remain inaccessible for the agent, since having access to a reason requires being in a position to rely on it competently.

Suppose, for instance, that p is a fact but I have no way of knowing that this is the case. In this situation, I am not in a position to treat p competently as a reason for my attitudes – if I treat it as such, I will be manifesting a lack of competence to rely only on my reasons. Something similar happens when the agent acquires strong (but misleading) higher-order evidence that her assessment of p as a reason is misguided. Consider the following example:

EXPERTS: Jane is an undergraduate mathematics student.
She performs competently a sophisticated mathematical inference from known premises, which she takes to be sufficient reasons to endorse the conclusion of the inference. Unfortunately, after reviewing her work in detail, several prestigious experts in the field tell Jane that her deduction is clearly invalid. The experts, however, are mistaken. Jane's inference is sound.

Assume that the experts are far more reliable than Jane when assessing this type of mathematical inference. Then, the experts' testimony provides Jane with persuasive (but misleading) higher-order evidence that her inference is likely to be incorrect. Intellectual humility, as I am conceiving it, requires that Jane refrain from endorsing the conclusion of her original inference, or at least reduce her degree of confidence in it – even if before consulting the experts she was rational to endorse the conclusion of the inference, which had been performed competently. Thus, intellectually
humble agents will behave in accordance with the recommendations of conciliationist views of disagreement (for instance, Christensen, 2007; Elga, 2007; Matheson, 2009): their first-order attitudes will be sensitive to the higher-order evidence provided by the existence of epistemic peer or superior disagreement, even when this higher-order evidence is misleading.

One may think that intellectual humility only demands heeding non-misleading higher-order evidence about one's epistemic position (in accordance with the non-conciliationist views defended, among others, by Titelbaum, 2015; Weatherson, 2019). After all, intellectual humility is a matter of responding accurately to one's epistemic defects and limitations. And, when the relevant higher-order evidence is misleading, it will not reveal any actual limitation in one's epistemic position. So, if Jane moderates her initial views in the face of the experts' misleading testimony, she will be underestimating her actual epistemic position. Jane's conciliatory response would not accurately reflect her epistemic strengths and limitations – there was nothing wrong in her mathematical inference.

It is important to note, however, that intellectual humility is a reliable, but not infallible, disposition to assess accurately one's epistemic limitations. There may be cases where, in manifesting a reliable disposition, an agent fails to respond to an actual epistemic limitation or defect she possesses. And yet, if the agent did not manifest that disposition in those circumstances, she would reveal a lack of humility. The reason for this is that the agent would display a dogmatic disposition to ignore higher-order evidence that questions her epistemic position.
In relevantly similar situations, this dogmatic disposition would lead the agent to react inadequately to actual epistemic limitations (for example, it could easily happen that, in situations similar to Jane's, the testimony of the experts actually reveals a defect in her epistemic position). This dogmatic disposition, therefore, is not characteristic of intellectually humble agents, since it is not a disposition that leads reliably, in normal cases, to accurate responses to one's epistemic defects (see Lasonen-Aarnio, 2020 for a similar point).

As an analogy, consider the case of compassion, understood as the disposition to feel sorry for another's misfortune. Imagine an agent who does not feel sorry for a friend who appears to her to be stricken by misfortune. This agent will not display a compassionate disposition, even if the friend was not actually unfortunate but only appeared to be so. It could easily happen, in a similar situation, that an agent with this uncompassionate disposition fails to feel sorry for a friend's actual misfortune. Compassionate people will sometimes be sorry for merely apparent misfortunes. Likewise, the dispositions characteristic of intellectual humility may involve responding to misleading higher-order evidence questioning one's epistemic competences.
On the view I am presenting, intellectual humility is part of the capacity to be rationally autonomous. Therefore, rationally autonomous deliberation can also be swayed by misleading higher-order evidence about the agent's epistemic position. This may seem initially surprising. In EXPERTS, Jane was originally basing her conclusion on good mathematical reasons, despite what the experts might think. If rational autonomy is the capacity to be guided by reasons (and only by them), then one could think that Jane would manifest her rational autonomy if she kept endorsing her conclusion (which is supported by good reasons) in the face of the experts' disagreeing stance. Indeed, sticking to one's views against disagreeing authorities can be seen as a paradigmatic form of intellectual autonomy.

This appraisal of EXPERTS is, however, too quick. Whether the adoption of an attitude is rationally autonomous is not determined by its being based on good reasons, but rather by whether such an adoption manifests the agent's competence to rely on reasons, and only on reasons. In other words, the attitude has to be competently based on (apparent) reasons. And this competence consists partly of a reliable disposition to avoid misguidance by non-reasons. Yet, in some unfortunate situations the only way in which an agent could base her attitude on a certain reason is by displaying an unreliable disposition to be guided just by reasons – that is, a disposition that in similar cases could easily make the agent rely on non-reasons (see Lasonen-Aarnio, 2020). In EXPERTS, Jane is in one of these unlucky situations. If she remained steadfast, she would be manifesting a dogmatic disposition to ignore higher-order evidence questioning her assessment of some reason. In many normal cases, this dogmatic disposition will lead her to be guided by non-reasons, so it is not a disposition that can ground a reliable competence to be guided only by reasons.
In this way, a manifestation of this type of dogmatic disposition will constitute a defective exercise of rational autonomy. The crucial point is that I am understanding rational autonomy as a capacity not just to respond to available reasons, but also to avoid misguidance by non-reasons. This capacity will not be competently exercised by an arrogant agent who, lacking in humility, ignores reasonable doubts about her reliability in assessing some reason. In a relevant sense, therefore, the rational autonomy of this arrogant agent is poorer than that of a humble person who is appropriately sensitive to risks of being misguided by non-reasons.

Consider, as an analogy, mushroom gathering. Autonomy as a forager of edible mushrooms does not just involve being good at finding edible mushrooms. One also needs to be sufficiently reliable in discarding poisonous specimens. Lacking this discriminatory capacity will severely limit one's autonomy as a forager of mushrooms (expert advice or the help of a guidebook will be needed). Analogously, autonomy as a follower of reasons requires sufficient reliability in not basing one's responses on non-reasons.
To be sure, I have not provided a full argument for the view of rational autonomy I am putting forward (for further discussion, see González de Prado, 2020). My aim here, besides setting out this picture of rational autonomy, is to explore some of its implications, in particular for epistemic deliberation in social contexts. Let me just review, nevertheless, some of the appealing results that recommend the view of rational autonomy I have presented.

As we have seen, understood in this way, rational autonomy is not in competition with intellectual humility. Rather than being conflicting epistemic ideals,3 both rational autonomy and intellectual humility are an integral part of a virtuous epistemic character – indeed, intellectual humility contributes to rational autonomy. Thus, rational autonomy does not clash with being responsive to higher-order evidence about one's epistemic limitations (which is a paradigmatic manifestation of intellectual humility). Quite the contrary, competent responses to higher-order evidence (even when misleading) are manifestations of one's rational autonomy. In this way, we do not have to resist the intuitions supporting conciliationism about disagreement and related views about higher-order evidence (e.g. Elga, 2007; Horowitz, 2014; Matheson, 2009; Steel, 2019).

Moreover, this view of rational autonomy and intellectual humility is perfectly compatible with a reasons-responsiveness account of rationality, according to which being rational is a matter of manifesting proper responsiveness to reasons (Schroeder, 2007; Kiesewetter, 2017; Lord, 2018). We just need to note that rational agents are responsive to those reasons that are accessible to them, but not to reasons beyond their reach (see Sylvan, 2015; Lord, 2018).
Intellectual humility would require precisely proper sensitivity to the limits of one's epistemic reach, thereby avoiding guidance by considerations one is not in a position to treat competently as reasons. In particular, a reason will not be accessible to an agent when she has sufficient reasonable doubts about its status as a reason. That is, reasonable doubts about whether some consideration is a reason can defeat the agent's access to that reason (I call this dispossessing defeat; see González de Prado, 2020).

Summing up, I have offered an integrated picture of rational autonomy and intellectual humility, which allows us to endorse reasons-responsiveness accounts of rationality without having to renounce conciliationist intuitions about the rational impact of (misleading) higher-order evidence. I think these are attractive features for an account of rational autonomy. My goal now is to explore what this picture of autonomy implies about contexts of social epistemic dependence. I have argued that rational autonomy involves appropriate sensitivity to (self-)doubt. In what follows, I will discuss gaslighting as a phenomenon in which the perpetrator manipulates a subject's rational autonomy by eliciting unwarranted doubts about their reasons-responding capacities.
13.2 Rational Autonomy in Contexts of Social Dependence

Our epistemic lives are to a large extent social. Many of our beliefs are formed by relying on the testimony of experts and other agents, on instruments and technology, or on communities and institutions (for an overview and further references, see Broncano-Berrocal & Vega-Encabo, 2019). At first sight, it can seem that social epistemic dependence is in tension with the exercise of one's own rational autonomy. After all, epistemically self-sufficient agents are often seen as the ideal exemplification of rationally autonomous agents (for discussion, see Coady, 2002; Fricker, 2006; Zagzebski, 2015; Pritchard, 2016). Imagine, for instance, an agent who believes p merely on the basis of the bare testimony of an expert, without being in a position to recognize for herself the force of the reasons that, from the perspective of the expert, recommend believing p. It seems that this agent is not adopting her belief by responding to her own appreciation of the reasons available – rather, she is trusting someone else's appraisal of what reasons there are. One may think that this form of reliance on others fails to manifest the agent's own rational autonomy.

If we think of autonomy as self-sufficiency, the picture I have just sketched is appealing. However, I want to allow for a less stark contrast between rational autonomy and epistemic dependence. In particular, I will grant that reliance on testimony can manifest one's rational autonomy. Before I go into this, let me note that epistemic dependence may reveal a limitation in the agent's rational autonomy, insofar as she has no access on her own (without depending on others) to reasons directly supporting her attitude. For example, my need to rely on expert mathematicians reflects my lack of competence to perform by myself certain mathematical inferences (that is, my lack of mathematical autonomy).
If I were a mathematical expert, I would have access to reasons that are beyond my reach as a layperson. In this sense, my autonomy would be wider as an expert: I would be able to respond competently to more reasons in my (mathematical) deliberations.

Having conceded this point, it can still be argued that one may exercise one's rational autonomy when relying on others. The key idea is that, even when the recipient in some testimonial exchange has no first-hand access to the reasons of her testifier, she can still respond competently to other relevant reasons in her deliberation. There are different ways to go here. One option is to adopt a reductionist view of testimony according to which rational reliance on testimony requires having evidence about the reliability of the testifier (e.g. Hume, 1977 [1748]; see also Fricker, 1987, 1994; Adler, 1994; Lyons, 1997). These facts about the reliability of the source would be the reasons to which the agent would respond when forming an attitude on the basis of testimony. In this way, an agent would exercise her rational autonomy when being the recipient of a testimonial exchange: she would be adopting attitudes on the basis of
reasons she recognizes as such. Alternatively, one can endorse an anti-reductionist account of testimony on which rational testimonial reliance is underpinned by reasons not constituted by evidence about the testifier's reliability (e.g. Zagzebski, 2015). For example, recent work by Simion (2020) and Simion and Kelp (2020) suggests that there are default (defeasible) reasons to accept the testimony of others, arising from the social norms governing testimonial practices (see also Graham, 2012, 2015). Again, by responding to these default reasons in her deliberation, an agent can exercise her rational autonomy. Moreover, even from an anti-reductionist perspective, rational reliance on testimony would require being sensitive to the presence of reasons to doubt the trustworthiness of the testifier, when such reasons arise.

For my purposes here, I do not need to choose a specific account of testimonial exchanges. I just want to make room for the possibility that agents may manifest their rational autonomy in their reliance on testimony. Autonomy, understood in this way, is not equivalent to self-sufficiency, but rather to self-governance – epistemic dependence is compatible with manifestations of rational autonomy, provided that, in engaging in such epistemic dependence, the agent responds to reasons she recognizes as such. Indeed, epistemic dependence can extend the agent's rational autonomy, by affording her access to reasons that would otherwise remain unavailable. For example, if I learn via testimony that it is raining, this fact about the weather becomes a reason available to me when deliberating about whether to take an umbrella. The set of reasons I can competently respond to has expanded thanks to my reliance on testimony (even if, as noted above, my rational autonomy can still be in some respects more limited than the testifier's, to the extent that I have no first-hand access to all her reasons).
I am taking intellectual humility to contribute to rational autonomy. Thus, participating in social practices of epistemic dependence as rationally autonomous agents involves openness to the possibility of being corrected by others, and a disposition to revise cautiously one's attitudes in the face of disagreement and uncertainty. This humble disposition, however, exposes agents to manipulation. In particular, others can abuse the agent's humble dispositions by introducing unwarranted doubts about her epistemic capacities, as a result of which she may lose her grip on some of her reasons. In the next sections, I discuss this form of epistemic abuse, of which gaslighting is the paradigmatic example. I start by examining how the agent's apparent possession of some reason can be challenged in communicative exchanges.
13.3 Communicative Challenges to Reason-Possession

Testimony, on the picture just sketched, can allow agents to acquire reasons otherwise inaccessible to them. For instance, if Sam tells Mary that
it is raining, the fact that it is raining will become a reason Mary possesses to take her umbrella. I will not focus, however, on communicative exchanges whose goal is to share reasons, but rather on communicative acts with the opposite aim of challenging the hearer's (apparent) possession of some reason. These will be cases in which the speaker questions either that the hearer is relying on good reasons, or that she has a competent grasp of those reasons.

A reason, remember, is a fact that favors some attitude or response. Moreover, I am assuming that autonomous agents form attitudes by responding to reasons accessible to them, where access to a reason involves being in a position to treat it competently as such. So, challenging whether an agent possesses some consideration R as a reason to φ may involve: (1) questioning whether R is actually a fact supporting φ-ing, or (2) questioning whether the agent is in a position to treat R competently as a reason to φ. The first type of challenge may amount to denying that R is a fact (or that it supports φ). In turn, a way to issue the second type of challenge is to argue that the agent is not reliable in her assessment of R as a reason to φ (say, because she is incompetent, tired, drunk, or biased).

Let us call the challenges I have just introduced challenges to reasons-possession. These challenges can help the agent realize that she was mistaken, or unreliable, in her assessment of certain apparent reasons. An intellectually humble agent will be disposed to respond properly to the risks of error that these challenges may reveal. Social interactions, therefore, may allow agents to detect and revise mistaken or unreliable attitudes that could have remained unquestioned in an individualistic context (think of communities of scientists vetting and scrutinizing each other's work). There are cases, however, in which challenges to reasons-possession are misleading.
A misleading challenge questions the agent's possession of a reason that she actually possesses. Sometimes the challenger will be unaware of the misleading nature of her challenge. It may be that, from the perspective of the challenger, it was reasonable to question the agent's (apparent) possession of the relevant reason. For instance, the challenger could be herself deceived by misleading evidence suggesting that R is not a fact. I will say that a reasons-possession challenge is warranted, even if possibly misleading, when the challenger has sufficient reasons to question or doubt the addressee's possession of the reason (warranted challenges can still be misleading because there can be good reasons to doubt something that is actually true).

In some cases, the challenger will know that her challenge is misleading (or, at least, she will not care about whether her challenge is warranted). For example, the challenger may lie about the truth of R, or raise unmotivated doubts about it. When this happens, the challenger will be intentionally misleading her addressee. It is crucial to note that unwarranted,
intentionally misleading challenges can still give the addressee actual reasons to doubt her grasp of some reason. The key point here is that reasons to doubt depend on the evidence available to the agent, and challenger and addressee will often possess different bodies of evidence. In particular, the fact that the challenger is questioning the addressee's possession of reason R may constitute a reason for the addressee to doubt her possession of that reason, even if the challenger herself knows that her questioning is unwarranted. For instance, you can give me reasons to doubt the truth of R by lying about it.

Imagine that I believe that the train departs at 3.15 pm (that was its departure time when I last took it), but you tell me (lying) that you have just checked in the timetable that the departure time is 3.25 pm. Assume that I lack reasons to question your reliability as a testifier (or perhaps I even have positive evidence about your general reliability). Then, your (lying) testimony may give me reasons to doubt whether the train leaves at 3.15 pm. Of course, your testimony does not give you undefeated reasons for doubt, since you know that you are lying.

Remember that I have argued that an agent's possession of a reason R will be defeated by significant doubts about the reliability of the agent's assessment of R as a reason, even if these doubts are misleading. When the agent has sufficient, undefeated reasons to doubt whether R is a reason, she can insist on treating it as such a reason only by failing to manifest a reliable disposition to avoid guidance by non-reasons. We can assume that the agent is not able to discriminate reliably when her (reasonable) doubts are misleading and when they are not. Thus, even if the agent's doubts happen to be misleading in this case, a disposition to ignore such doubts would make her be guided by non-reasons in many relevantly similar cases (see Lasonen-Aarnio, 2020).
That is, it is easy enough that this type of disposition leads to reliance on non-reasons. So, an agent who displays this disposition fails to exercise a competence to be guided by, and only by, reasons – and without the capacity to manifest this competence, she lacks possession of the relevant reason. Therefore, an agent does not possess some reason if she can only treat it as such by ignoring reasonable doubts about its status as a reason.

Given this, misleading challenges to reasons-possession may succeed in dispossessing the addressee of some reason R, as long as they manage to provide sufficient reasons for the addressee to doubt her grasp of R as a reason (and despite the misleading nature of these doubts). Thus, agents in communicative practices can take advantage of their perceived trustworthiness in order to introduce intentionally misleading doubts, with the aim of undermining their audience's access to some reason. In the next section, I suggest that this is precisely what happens in gaslighting: victims of gaslighting are manipulated into doubting themselves, so that they lose their grasp of some of their reasons. In this way, gaslighting erodes the rational autonomy of its victims.
262 Javier González de Prado
13.4 Gaslighting as Dispossession
Gaslighting is a complex phenomenon, and it is not my purpose here to offer a comprehensive account of it (for philosophical discussion of gaslighting, see Abramson, 2014; also Spear, 2019). Nevertheless, I want to suggest that it is useful to think of many central cases of gaslighting, at least, in terms of reasons-dispossession. In general, gaslighting involves inducing misleading self-doubts about someone’s epistemic capacities. I will consider cases of gaslighting in which an agent is manipulated into doubting her own competence to assess and respond to some of her available reasons, even if her competence is not actually defective. What makes such gaslighting a form of manipulative deceit, and distinguishes it from warranted types of criticism and questioning, is that the gaslighter lacks sufficient reasons to believe that the victim is indeed incompetent or unreliable in assessing the reasons available to her. Gaslighting can in principle take place without direct interactions between the gaslighter and the victim. The gaslighter can just tamper with the evidence in the victim’s environment, in order to make her doubt herself (for instance, the gaslighter may secretly change the location of the victim’s belongings, so that she ends up questioning her own memory). I will focus, however, on cases of gaslighting in which there are direct communicative interactions between the gaslighter and the victim, in particular cases in which the gaslighter challenges the victim’s assessment of some apparent reasons. Here is an example:
WORKPLACE: After working in a new company for a while, Jane has experienced several episodes that make her conclude correctly that her work is not being properly appreciated by her colleagues – her voice is not heard in group discussions and her proposals are rarely implemented.
When she consults her boss about this, he dismisses her conclusion, minimizing these incidents and pointing out that her colleagues are accomplished team-workers who have collaborated successfully with all types of people. He suggests that Jane is just over-interpreting things, perhaps because she is too tired and has been involved in stressful projects. Also, she is probably a bit anxious about being a new addition to the team: she needs to take it easy and try to adapt to the work dynamics of the company. We can fill in the details of the example so that Jane has reasons to regard her boss as a reliable epistemic authority about what goes on in her office. Perhaps he is an experienced manager, who knows his subordinates well, and who has been a helpful mentor for Jane, providing her with useful insights about her work. Given this perceived epistemic authority, Jane’s boss’ testimony offers her prima facie reasons to doubt her initial conclusion, and her assessment of her interactions with her colleagues. We
can imagine that in the past Jane has been mistaken in her appreciation of different work-related episodes, and her boss has helpfully made her aware of her mistakes. Now, if Jane’s boss provides her with reasons to doubt her assessment of the situation, then his testimony may undermine Jane’s access to her original reasons to conclude that her colleagues were undervaluing her. After being challenged by her boss, Jane can only stick to her conclusion by stubbornly ignoring undefeated reasons to doubt her assessment of the reasons. Such stubbornness exhibits a failure to manifest a competence to avoid guidance by non-reasons. Therefore, once her evaluation of the reasons is questioned, Jane is no longer in a position to competently treat her experiences with her colleagues as providing (undefeated) reasons to conclude that she is being undervalued. Remember that an agent lacks access to a reason if she is not in a position to treat it competently as such. Thus, Jane has lost access to her reasons to conclude that her work is not being properly appreciated. Note that Jane’s original response to these reasons was perfectly competent – they were reasons she properly grasped. It is only after her boss makes Jane doubt herself that she stops being able to rely competently on those reasons. In other words, it is her boss’ questioning that has dispossessed Jane of reasons that were originally accessible to her. Assuming that Jane’s boss is intentionally misleading her, this would be an example of dispossession via gaslighting. So, in the sorts of cases I am considering, gaslighting aims to dispossess an agent of reasons she would otherwise have (at least potential) access to. In other words, gaslighting is a form of intentionally misleading reasons-dispossession. Victims of gaslighting lose access to some of their reasons as a result of (rationally?) doubting their own competences.
In order to induce this self-doubt, the gaslighter exploits her perceived epistemic authority about the issue discussed. The gaslighter will only succeed if he appears to the victim as sufficiently trustworthy and reliable, so that his questioning is taken to provide reasons for self-doubt (Spear, 2019, p. 9). This does not mean that gaslighters always have genuine epistemic authority. Arguably, what underlies the gaslighter’s perceived epistemic authority is often his political, social, or personal sway, which (through a form of halo effect) makes him also be seen as an epistemic authority. It is not surprising, therefore, that gaslighting often takes advantage of asymmetrical relations of power and authority, as happens in WORKPLACE (see Abramson, 2014, p. 19). Hence the danger of gaslighting in personal or social relations with unbalanced power dynamics. A characteristic feature of gaslighting is that the doubts induced do not just concern the correctness of a specific attitude, but have to do more broadly with the victim’s capacities to assess certain types of issues on her own. This is why gaslighting poses a significant threat to the victim’s rational autonomy. Jane’s boss does not merely challenge her (correct) belief that her work is undervalued. He also questions her
competence to draw conclusions about her workplace on the basis of her observations and experiences – the boss suggests that she is tired and stressed, that she is not adapted to the company’s dynamics, that she is oversensitive, etc. If successful, Jane’s boss’ gaslighting will undermine Jane’s capacity to rely on her own appreciation of what goes on in her workplace when deliberating. As a result, Jane’s ability to deliberate autonomously about her workplace will be severely curtailed. The ultimate effect of the self-doubts introduced by gaslighting is to prevent the agent from being in a position to exercise her competence to respond reliably to certain types of reasons. In Abramson’s words, gaslighting aims to make the victim lose her “independent standing as deliberator” (Abramson, 2014, p. 8; see also Spear, 2019). Thus, gaslighting leads to a manipulative impoverishment of the agent’s rational autonomy. Insofar as rational autonomy is valuable, this impoverishment causes an epistemic harm to the victim. A virtue of the account I am presenting is that it provides a natural distinction between cases in which the rational or justified response to gaslighting is compliance, and cases in which resistance is justified. Assume that an attitude is justified for an agent when it is sufficiently supported by the agent’s possessed reasons. Then, compliance with gaslighting is justified whenever the gaslighter manages to provide the victim with sufficient, undefeated reasons for doubt about her competencies. This is so because, when such doubt is justified, the agent loses access to some of her original reasons, so that a cautious attitude becomes justified. By contrast, if the victim can defeat the apparent reasons for doubt introduced by the gaslighter, resistance will be justified. In this case, remaining steadfast will not be a risky attitude from the victim’s perspective – she will retain access to her original reasons.
Thus, the victim of gaslighting is not always helpless. She can counteract the gaslighter’s challenge by defeating the apparent reasons for doubt he raises. Imagine, for instance, that Jane’s boss did not witness the episodes described by Jane, and has a sexist bias that makes him give less weight to the testimony and opinions of his female subordinates. Knowledge of these facts would undercut the apparent reasons for self-doubt raised by Jane’s boss. In general, awareness of the power imbalances that often underpin gaslighting will put agents in a better position to successfully resist the challenges posed by gaslighters. The existence of cases in which resistance to gaslighting is justified should not make us think, however, that this option is always rationally available to the victim. There are situations in which the victim has no access to considerations that defeat the reasons for self-doubt introduced by the gaslighter. As I have argued above, the fact that a generally dependable source questions one’s reliability can provide sufficient reasons for self-doubt, even if that source is misleading on this particular occasion. Epistemic dependence has the consequence that one can be (intentionally)
deceived by an agent one has no reason to regard as untrustworthy (or even an agent one knows to be generally reliable). When the gaslighter manages to give the victim sufficient reasons to doubt herself, submitting to gaslighting does not need to reveal any defect or mistake in the victim’s reasons-responding capacities.4 Rationally autonomous agents form attitudes by responding competently to (and only to) the reasons accessible to them. Yet the emergence of self-doubt undermines the victim’s access to some of her reasons. Relying on these now inaccessible reasons would manifest a lack of competence in responding only to one’s possessed reasons. In order to exercise this competence, the victim has to refrain from relying on her original reasons, since from her current perspective there is a significant risk that her assessment of such reasons is mistaken. Thus, the victim will actually be exercising her rational autonomy by complying with gaslighting. This is so even if this autonomous response of the victim has as a consequence that her autonomy is eroded. Gaslighting has the perverse feature that it achieves its harmful effects via the autonomous responses of the victim. In a sense, the victim autonomously collaborates with the undoing of her own autonomy (see Abramson, 2014, p. 16). Srinivasan (forthcoming) has recently defended a radically externalist view on which misleading higher-order evidence, and in particular misleading self-doubt, does not defeat the justification of the agent’s attitudes. In this way, she argues that resistance to gaslighting is always justified as long as the victim’s original assessment of the situation was correct – even if the victim may be blameless in submitting to gaslighting when she cannot realize from her perspective that she is being gaslighted.
According to Srinivasan, an advantage of this externalist approach is that it reveals how the attitudes of agents can be epistemically distorted by the social structures they are embedded in – moving away from an exclusive focus on the internal perspective of the agent. I agree with Srinivasan that this externalist outlook can be illuminating. However, I think that we do not do justice to the epistemic plight of victims of gaslighting if we do not also consider the other dimensions of epistemic evaluation that take into account the competent response to uncertainty and self-doubt. More specifically, it should be stressed that resistance to gaslighting is not always an open option for rationally autonomous agents. Sometimes the only way to resist gaslighting would be by failing to be guided only by considerations that one is in a position to recognize autonomously as reasons. Resistance in these cases would involve displaying an unreliable disposition to avoid guidance by non-reasons. This dogmatic disposition is incompatible with the sort of intellectual humility that underwrites the competences constitutive of full rational autonomy. Thus, the gaslighter aims to manipulate social practices of epistemic dependence so that compliance with her gaslighting challenges becomes
the only competent response for a victim who acts as a rationally autonomous agent. It is important, therefore, to have evaluative epistemic notions that allow us to say that there may be nothing defective, irrational, or unjustified in conciliatory responses to gaslighting, when these responses reflect a competent sensitivity to uncertainty and self-doubt. What is epistemically incorrect or defective in these cases is not the victim’s response to uncertainty, but the gaslighter’s unwarranted creation of this uncertainty, and the unjust social structures that may be facilitating these episodes of gaslighting. Gaslighting is a social problem, which exploits practices of epistemic dependence and asymmetrical structures of authority. As such, gaslighting cannot always be successfully counteracted by means of individual resistance: collective responses are often needed – for instance, these responses may involve revising the power imbalances that typically underpin gaslighting, or monitoring illegitimate attributions and uses of epistemic authority. In general, when we autonomously navigate social epistemic practices, we become exposed to risks of manipulation and exploitation. A fully competent agent may be unable to avoid these risks on her own. Only through collective action can we shape our practices in order to minimize the prevalence of manipulative gaslighting and other forms of epistemic abuse.
Notes
1 Whitcomb et al. (2017) defend and develop this view in relation to intellectual humility.
2 Whitcomb et al. (2017) use “proper intellectual pride” to refer to the disposition to be properly attentive and responsive to one’s intellectual strengths. On the picture I am presenting, proper pride would also contribute to rational autonomy, in particular to the competence to be guided by those reasons available to one (whereas intellectual humility would be relevant for the competence to avoid being guided by non-reasons).
3 Christensen (2010, 2013) argues for the possibility of conflicts between the epistemic ideals of reasons-responsiveness and humility in cases of misleading higher-order evidence.
4 Even when the gaslighter fails to make self-doubt epistemically justified for the victim, there may be non-epistemic factors that make submission to gaslighting excusable. For instance, if the gaslighter is in a position of power, the victim may be under psychological duress when facing his challenges.
References
Abramson, K. (2014). Turning up the lights on gaslighting. Philosophical Perspectives, 28, 1–30.
Adler, J. E. (1994). Testimony, trust, knowing. The Journal of Philosophy, 91(5), 264–275.
Alvarez, M. (2010). Kinds of reasons: An essay in the philosophy of action. Oxford: Oxford University Press.
Broncano-Berrocal, F. (2018). Purifying impure virtue epistemology. Philosophical Studies, 175(2), 385–410.
Broncano-Berrocal, F., & Vega-Encabo, J. (2019). A taxonomy of types of epistemic dependence. Synthese, 1–19.
Christensen, D. (2007). Epistemology of disagreement: The good news. The Philosophical Review, 116(2), 187–217.
Christensen, D. (2010). Higher-order evidence. Philosophy and Phenomenological Research, 81(1), 185–215.
Christensen, D. (2013). Epistemic modesty defended. In D. Christensen & J. Lackey (Eds.), The epistemology of disagreement: New essays (pp. 77–97). Oxford: Oxford University Press.
Coady, C. A. (2002). Testimony and intellectual autonomy. Studies in History and Philosophy of Science Part A, 33(2), 355–372.
Dancy, J. (2004). Ethics without principles. Oxford: Oxford University Press.
Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.
Fricker, E. (1987). The epistemology of testimony. Proceedings of the Aristotelian Society Supplement, 61, 57–83.
Fricker, E. (1994). Against gullibility. In B. K. Matilal & A. Chakrabarti (Eds.), Knowing from words: Western and Indian philosophical analysis of understanding and testimony (pp. 125–161). Dordrecht: Kluwer Academic Publishers.
Fricker, E. (2006). Testimony and epistemic autonomy. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony (pp. 225–250). Oxford: Oxford University Press.
González de Prado, J. (2020). Dispossessing defeat. Philosophy and Phenomenological Research, 101(2), 323–340.
Graham, P. J. (2012). Testimony, trust, and social norms. Abstracta, Special Issue 6, 92–117.
Graham, P. J. (2015). Epistemic normativity and social norms. In D. Henderson & J. Greco (Eds.), Epistemic evaluation: Purposeful epistemology (pp. 247–273). Oxford: Oxford University Press.
Horowitz, S. (2014). Epistemic akrasia. Noûs, 48(4), 718–744.
Hume, D. (1977 [1748]). An enquiry concerning human understanding. E. Steinberg (Ed.). Indianapolis: Hackett Publishing Company.
Kiesewetter, B. (2017). The normativity of rationality. Oxford: Oxford University Press.
Lasonen-Aarnio, M. (2020). Enkrasia or evidentialism? Learning to love mismatch. Philosophical Studies, 177(3), 597–632.
Lord, E. (2018). The importance of being rational. Oxford: Oxford University Press.
Lyons, J. (1997). Testimony, induction, and folk psychology. Australasian Journal of Philosophy, 75, 163–178.
Matheson, J. (2009). Conciliatory views of disagreement and higher-order evidence. Episteme, 6(3), 269–279.
Parfit, D. (2011). On what matters (Vol. 1). Oxford: Oxford University Press.
Pritchard, D. (2016). Seeing it for oneself: Perceptual knowledge, understanding, and intellectual autonomy. Episteme, 13(1), 29–42.
Schroeder, M. (2007). Slaves of the passions. Oxford: Oxford University Press.
Simion, M. (2020). Testimonial contractarianism: A knowledge-first social epistemology. Noûs. Online First Access (https://doi.org/10.1111/nous.12337).
Simion, M., & Kelp, C. (2018). How to be an anti-reductionist. Synthese, 197, 2849–2866.
Spear, A. D. (2019). Epistemic dimensions of gaslighting: Peer-disagreement, self-trust, and epistemic injustice. Inquiry, 1–24.
Srinivasan, A. (forthcoming). Radical externalism. Philosophical Review.
Steel, R. (2019). Against right reason. Philosophy and Phenomenological Research, 99(2), 431–460.
Sylvan, K. (2015). What apparent reasons appear to be. Philosophical Studies, 172(3), 587–606.
Titelbaum, M. (2015). Rationality’s fixed point (or: In defense of right reason). Oxford Studies in Epistemology, 5, 253–294.
Weatherson, B. (2019). Normative externalism. Oxford: Oxford University Press.
Whitcomb, D., Battaly, H., Baehr, J., & Howard-Snyder, D. (2017). Intellectual humility: Owning our limitations. Philosophy and Phenomenological Research, 94(3), 509–539.
Zagzebski, L. T. (2015). Epistemic authority: A theory of trust, authority, and autonomy in belief. Oxford: Oxford University Press.
Part IV
Epistemic Autonomy and Social Epistemology
14 Epistemic Autonomy for Social Epistemologists
The Case of Moral Inheritance
Sarah McGrath
14.1 Introduction1
On the one hand, it seems as though an important way of acquiring full-fledged moral knowledge is this: starting as young children, we simply adopt the moral beliefs that we are explicitly told or that are presupposed by the thoughts and practices of those around us. Certainly, a great deal of our knowledge about other subject matters comes to us in this way: for example, it seems safe to assume that most of our geographical beliefs are acquired from testimony or otherwise absorbed from our environments. On pain of sweeping skepticism about geography, it seems we should say: provided that the relevant beliefs are true, and the sources reliable, these beliefs count as knowledge. But if this is the right thing to say about geography, then there is substantial theoretical pressure to say the same thing about morality. For, absent compelling reason to think it cannot be had, we should prefer a unified account of knowledge, one on which the standards that must be met in order to count as knowing a proposition do not vary from domain to domain. On the other hand, there is a longstanding philosophical tradition of thinking that moral knowledge is special: that one must arrive at one’s moral beliefs, in some sense, autonomously. That an individual must “decide for herself” about questions of value, as opposed to matters of non-moral fact, is also a thought that resonates with common sense. Moreover, this individualistic ideal seems to be supported by the popularity of the method of reflective equilibrium as an answer to the question of how we acquire moral knowledge. According to standard articulations of the method, moral inquiry is a matter of an individual’s working back and forth among her own moral judgments at different levels of generality, resolving tensions and ultimately arriving at a more stable equilibrium among them. One’s beliefs accrue epistemic status and may ultimately count as knowledge only after surviving this process.
Of course, nobody thinks that such a reflective process is necessary for knowledge of subjects like geography. And it is striking that the method of reflective equilibrium – a method of individual reflection on one’s own
judgments – enjoys so much popularity not only among anti-realists, who think that moral truth depends on our moral attitudes, but also among realists, who think that it does not. In this chapter, I attempt to reconcile three claims that, on the surface, appear to make up an inconsistent triad: (1) there is some important sense in which moral beliefs should be autonomous; (2) in other domains, such as geography, beliefs that we unreflectively inherit from our social environment count as knowledge; and (3) epistemic standards do not vary across domains. I will begin by presenting the case for the view that moral beliefs amount to knowledge when they are simply inherited, provided that the sources are reliable, etc. Then, in Section 14.3, I explore the possibilities as to what a plausible autonomy condition on moral belief might look like. I argue that, properly understood, this requirement expresses an ideal that is important for moral agency, rather than for the epistemic justification of moral belief, and is thus consistent with the claim that much moral knowledge is socially inherited. In Section 14.4, I present what I take to be an important objection.
14.2 The Moral Inheritance View and the Objection from Autonomy
In recent decades, epistemology has taken a social turn. The focus on traditional questions about “inner” sources of knowledge such as introspection, perception, memory, and reasoning has given way to an increasing interest in more social sources of knowledge, perhaps most significantly, testimony. Against the spirit of the Cartesian idea that knowledge is achieved as a result of the systematic scrutiny of reasons for belief and sources of doubt, many epistemologists now think that testimony is a source of full-fledged knowledge that individuals who are embedded in the world in the right way can passively and effortlessly receive. Starting as young children, we simply adopt beliefs that we are explicitly told and those that are presupposed by the thoughts and practices of those around us. Provided that the beliefs are true, and those from whom we have inherited them are reliable sources of information, those beliefs count as knowledge. Consider, for example, our beliefs about geography. It is plausible that most of the average adult’s geographical beliefs are beliefs that she accepts because she has acquired them from reliable sources around her. Most normal adults do not do anything to verify their geographic beliefs, to critically scrutinize them, or to validate them in any way. Someone who did set out to critically scrutinize the geographic claims she encountered would strike us as epistemically defective. Thus, acquiring true beliefs from reliable sources had better be sufficient for geographical knowledge: otherwise, there will be far less geographical knowledge than we thought
that there was.2 But if it is true of adults that acquiring true beliefs from reliable sources is sufficient for their having geographical knowledge, and children’s geographical beliefs don’t differ from adults’ in any epistemically relevant way, then presumably we should also say that a child who has acquired her geographic beliefs from reliable sources also has beliefs that amount to full-fledged geographical knowledge. In other words, we should embrace:
The Geographical Inheritance View: A child’s geographical beliefs count as knowledge, so long as the sources from whom she acquired the beliefs know, and are reliable sources of information whom she has no reason to distrust.
The analogous claim as applied to the moral domain would be:
The Moral Inheritance View: A child’s moral beliefs count as knowledge, so long as the sources from whom she acquired the beliefs know, and are reliable sources of information whom she has no reason to distrust.3
The presumptive case in favor of the Moral Inheritance View is that it employs the same standard for knowledge in the moral domain that we ordinarily assume with respect to other domains. In adopting this as a default assumption, we do not thereby assume that the sources of non-moral knowledge are the same as the sources of moral knowledge. Generally speaking, even if the standards for knowing a proposition are the same across different domains, the sources of knowledge might vary greatly from domain to domain. For example, it is plausible that we can arrive at mathematical knowledge via a priori reasoning, although it is obvious that we cannot similarly arrive at geographical knowledge via such reasoning.
If a priori reasoning is a source of mathematical but not geographical knowledge, it does not follow that the standards one must meet in order to count as knowing a mathematical proposition differ from the standards that one must meet in order to count as knowing a geographical proposition. In both domains, knowledge might consist of, for example, sufficiently reliable true belief, or belief that is sufficiently safe (cf. Williamson, 2000), or justified true belief that satisfies an additional “anti-Gettier” condition, or warranted true belief (in the sense of Plantinga 1993), or a belief that tracks the truth (cf. Nozick, 1981), or … something else. This is a point on which we should expect even the moral skeptic, who denies that we have any moral knowledge, to agree. In fact, the skeptic should insist that the standards for moral knowledge do not differ in any significant way from the standards for knowledge of other subject matters, for the claim that we lack moral knowledge is potentially much less interesting if the sense in which we fall short of moral
knowledge involves our falling short of standards that differ from those required for knowledge of other subject matters. The Moral Inheritance View simply applies the same standards for non-moral domains to the moral domain. A child who comes to believe that Paris is the capital of France from reliable sources whom she has no reason to distrust is in a position to know that Paris is the capital of France, even if she has done nothing to validate or verify that belief. She is credited with knowledge even if she would have gotten it wrong in the counterfactual situation in which the testimony she received regarding the capitals was false. Thus, applying the same standards, we should say that children who inherit true moral beliefs from their environments have moral knowledge provided that their sources are reliable, even if they are unable to justify their beliefs and even if they would have had false beliefs had they been raised in a less epistemically fortunate environment. Since the Moral Inheritance View does not deploy different standards for moral knowledge from the standards we ordinarily assume in other domains, in the absence of some compelling reason for thinking that things are otherwise, we should embrace it. The Moral Inheritance View is modest in the following respect: it does not purport to provide a complete answer to the question of where moral knowledge comes from. The most obvious reason that it cannot be the whole story is that moral knowledge cannot be from testimony “all the way back”: even if every person alive today has most of her moral knowledge from testimony, she got it from people who got it from somewhere further back, and at some point in the chain of testimony, someone must have come by it in some other way.
And for all the Moral Inheritance View says, some of this knowledge might be innate, some might be acquired by intuition or perception, and some might emerge from application of the method of reflective equilibrium. But even once its modesty is appreciated, the Moral Inheritance View faces a number of objections that point to ways that moral beliefs seem to differ from other kinds of beliefs, or ways in which the moral domain differs from other domains, that purport to undermine the view that inherited beliefs count as knowledge in the moral domain. One kind of objection agrees that the standards for knowledge are uniform across domains, but holds that our moral beliefs, or some subset of them, do not meet the standards. Suppose, for example, that in a given community, controversial moral beliefs are so variable among adults that a child who just happened to inherit the true moral views would be, in some sense, too lucky to count as knowing. Or suppose that the adults in some community all publicly disagree with each other about certain moral questions. General claims about the circumstances in which disagreement undermines knowledge might be deployed in an argument that testimony about those questions would fail to transmit knowledge. If the
truth about peer disagreement is that awareness of it undermines knowledge – irrespective of the subject matter – then moral beliefs that meet the relevant condition would fail to count as knowledge for reasons that had to do with the extent of moral disagreement, not because of special standards for moral knowledge. Neither the argument from luck nor the argument from disagreement depends on the idea that the standards for knowledge vary across domains. For that reason, such arguments will often fail to target moral knowledge uniquely: to the extent that they succeed in undermining moral knowledge, they potentially undermine knowledge in other areas where our beliefs are just as lucky or just as contested. Although I think that any full-fledged defense of moral knowledge must reckon with such challenges, in this chapter, I am going to set this kind of objection aside in order to focus on the idea that moral inheritance fails because there is some kind of epistemic autonomy condition that moral beliefs in particular must meet in order to enjoy a certain kind of positive, yet-to-be-specified status. According to this objection, although it is natural, inevitable, and perfectly appropriate for a normal adult to rely on authority for most of what he believes about geography and many other subject matters, it is not similarly natural, inevitable, and perfectly appropriate for a normal adult to rely on authority for his moral views.4 And if there is a problem with the claim that adults can simply inherit moral knowledge, then we cannot argue for the Moral Inheritance View in the same way that we could argue for an analogous Geographical Inheritance View. Why think that there is something special about moral beliefs, such that it is not natural, normal, or appropriate to simply absorb them from someone else?
The extensive literature on moral deference offers many examples designed to elicit the reaction that there is something peculiar, or problematic, or, to borrow Enoch's preferred term, "fishy" (Enoch, 2014) about moral deference. Alison Hills, for example, invites us to consider the example of Eleanor, who defers to her friend about whether eating meat is wrong.5 R.J. Howell imagines what life would be like were we able to download a new app, "Google Morals":

No longer will we find ourselves lost in the moral metropolis. When faced with a moral quandary or deep ethical question we can type a query and the answer comes forthwith. Next time I am weighing the value of a tasty steak against the disvalue of animal suffering, I'll know what to do. … I'll just Google it.

While these theorists offer different explanations as to what it is about these examples that makes them seem "fishy," and whether and to what extent moral deference is in fact problematic, most agree that there is something of interest here to be explained, and that the correct explanation
276 Sarah McGrath would have the potential to shed light on the nature of morality or our cognitive relationship to it. Among the competing suggestions as to what explains the felt asymmetry about moral deference as compared to deference in other domains is the suggestion that the explanation has something to do with autonomy.6 The view that the correct explanation will have something to do with autonomy is reminiscent of the familiar Kantian idea that in order to be subject to the moral law, the will must legislate the moral law, and must be free from contingent empirical influences, lest it substitute "for morality a bastard patched up from the limbs of very different parentage" (1785/1985, p. 43). Summarizing (but not going on to endorse) the idea in this passage, Anscombe writes:

To take one's morality from someone else … would make it not morality at all; if one takes it from someone else, that turns it into a bastard sort of morality, marked by heteronomy. (Anscombe, 1981, p. 45)

Although R.M. Hare would presumably not offer a Kantian explanation as to why moral judgments cannot be "taken from someone else," he did seem to agree with the basic idea that they can't be taken from someone else. According to Hare, what distinguishes "any serious moral problem" from purely factual questions is that:

a man who is faced with such a problem knows that it is his own problem, and that nobody can answer it for him. He may, it is true, ask the advice of other people; and he may also ascertain more facts about the circumstances and consequences of a proposed action, and other facts of this sort. But there will come a time when he does not hope to find anything else of relevance by factual inquiry, and when he knows that, whatever others may say about the answer to this problem, he has to answer it. (1962, p. 1)

One might think that in any sense in which it is true that a person must ultimately make up her own mind about a serious moral question with which she is concerned, it's equally true that she must ultimately make up her own mind about any question that she considers. But Hare explicitly denies this: the sense in which each of us is ultimately responsible for arriving at her own moral convictions is stronger than any sense in which we are responsible for arriving at our purely factual, non-moral opinions about how the world is arranged (p. 2). Similarly, Robert Paul Wolff describes "the responsible man" as someone who must make up his own mind about moral matters:
He may learn from others about his moral obligations, but only in the sense that a mathematician learns from other mathematicians – namely by hearing from them arguments whose validity he recognizes even though he did not think of them himself. He does not learn from them in the sense that one learns from an explorer, by accepting as true his accounts of things one cannot see for oneself. (Wolff, 1998/1970, p. 14)

On the face of it, if what Wolff is saying is true, then the Moral Inheritance Thesis is false. At the very least, if moral judgment is special in the way that these passages suggest, then the epistemology of moral knowledge is importantly different from the epistemology of other domains, where much of what we know is inherited unreflectively. Let's call the requirement that a moral agent must in some sense "make up her own mind" about moral matters, or that she has to recognize the answers to her moral questions for herself, the epistemic autonomy requirement, leaving open for now what the requirement would be a requirement for. What exactly would it be to fulfill this requirement? We could try to offer a negative characterization of what it is to arrive at one's judgment autonomously: we could say that a judgment is epistemically autonomous when one arrives at it in some way other than by blind deference. But this merely negative characterization won't do: after all, one might arrive at one's judgments by blindly guessing. Although that would satisfy the negative characterization, it presumably wouldn't fulfill the ideal. Moreover, autonomy comes in degrees. If blind deference is at one end of the spectrum, the other end would be the borderline incomprehensible idea of complete independence from any other sources at all. I am not going to try to offer a complete account of the ways of making up one's mind that would properly count as "autonomous," in the sense that is suggested by the passages above.
Instead, I will ask: of the many things that “autonomy” has been used to mean, which of these things matters? Nomy Arpaly (2003) asks this question about moral responsibility (which of the varieties of autonomy matters for praise and blame?), but our question is: of the many things that “autonomy” has been used to mean, which of these things figures in the ideal that the agent who blindly defers to another person, or googles her morals, fails to fulfill? I will argue that the most plausible answers to this question have to do with the connection between autonomous judgment and moral agency, rather than a connection between autonomous judgment and the achievement of epistemic goods, such as full-fledged knowledge or justified belief. Thus, getting clear about why epistemic autonomy is important provides material for reconciling the epistemic autonomy requirement with The Moral Inheritance View: if there is an epistemic autonomy requirement that applies to moral judgment, it is consistent with the Moral Inheritance View.
14.3 Making the Autonomy Requirement More Precise

So far, we have used the term "epistemic autonomy" as a kind of placeholder for whatever it is that the moral judgments of someone like the deferential spouse are lacking. Supposing that there is an ideal of "autonomy" that adults are supposed to fulfill with respect to their moral judgments, what is it? Let's start by considering various things that the term "autonomy" might be used to mean.7 In the context of discussions of moral responsibility and agency, "autonomy" can refer to a kind of self-control involving the ability to decide which of one's competing desires will succeed in motivating one to act.8 Alternatively, "autonomy" can refer to the kind of normative status that one must have in order to suffer an "autonomy violation" – as when, for example, someone takes something that belongs to someone else without permission. It can also refer to a quality that agents have when they identify with their desires and lack when they don't: agents who experience some of their desires as alien or invasive are said to lack autonomy in this sense. And "autonomy" can also mean personal efficacy: in this sense of autonomy, one becomes more autonomous when one, for example, is old enough to drive, because learning to drive enables one to do more things without relying on other people. Whatever it is that seems "off" about someone who defers to an expert about moral matters, it is not that she is lacking in self-control, experiencing her desires as alien, or lacking in the kinds of things that would enable her to do more on her own. Now, autonomy can also mean "independence of mind," and anyone who can correctly be described as "deferential" probably lacks autonomy in the independence-of-mind sense. But not being deferential cannot be what we are after in our characterization of "epistemic autonomy": it does not tell us what positive ideal a moral judgment is supposed to fulfill in a good case.
Again, someone who forms her moral judgments by randomly guessing might have independence of mind in the sense that she is not overly deferential to someone else, but presumably she doesn't thereby fulfill the relevant ideal. Moreover, as Driver points out, it is not even true that decisions to defer are lacking autonomy in the "independence of mind" sense:

When an agent decides to accept the testimony the agent is acting autonomously. There is an autonomous decision not to make one's own decision. So, one does display independence of thought at this level. (2006, p. 636)

Finally, autonomy can refer to authenticity and reasons-responsiveness. When an agent acts authentically, she acts in a way that is true to her real self or deepest values. Reasons-responsiveness is a quality that an
agent exhibits when she acts in response to moral reasons. These kinds of autonomy characterize actions, and, if Arpaly is right, they are the kinds of autonomy that matter for determining praise and blame (2003, p. 131). Could they also be the kinds of autonomy that matter for our purposes? Even if someone who blindly defers to someone else about whether it is permissible to eat meat knows what she ought to do on the basis of testimony, her knowledge is not grounded in an appreciation of or sensitivity to the features that make Φ-ing the right thing to do. And the fact that her knowledge is not grounded in the right-making features indicates something about her values. We can put this point in terms of Enoch's (2014) distinction between transparent and opaque evidence. Transparent evidence for a proposition not only suggests that the proposition is true, but also provides insight into why it is (or would be) true. In contrast, opaque evidence for a proposition suggests that the proposition is true without providing any indication as to why it is (or would be) true. Consider, for example, two different kinds of evidence you might have that some mathematical proposition is true. First, while taking a mathematics exam, you might encounter a question that asks you to prove the proposition. Since you know that it is unlikely that you would be asked to prove a proposition that is not true, the fact that the exam asks you to prove this particular proposition is evidence that it is true. This evidence is opaque evidence since it provides no insight into why the mathematical proposition is true. If you subsequently succeed in proving the proposition in some canonical way, the proof itself is excellent (indeed, conclusive) evidence that the proposition is true; in addition, it might very well provide insight into why the proposition is true. Insofar as it does afford such insight, it is transparent evidence.
Suppose that there is an ideal, associated with moral agency, of being motivated to act by the right-making features of an action. The deferential agent's belief that eating meat is wrong is based on opaque evidence, so it doesn't put her in the position to fulfill this ideal. On a given occasion, she will believe that Φ-ing is the right thing to do in her circumstances because she has been informed that this is so by her friend, who knows that Φ-ing is the right thing to do. While she is in a position to do the right thing because it is right – her concern for morality might motivate her to Φ – she is not motivated by the reasons that make it right. For her knowledge that it is the right thing to do is not grounded in an appreciation of or sensitivity to the features that make Φ-ing the right thing to do, but rather in the opaque evidence provided by testimony. Even if this amounts to an excellent reason for her to believe the proposition – indeed, even if it puts her in a position to know that proposition – she might lack any understanding as to why it is true.9 Thus, even in the best case of a moral view held on the basis of deference – one in which it is natural to credit the agent with genuine knowledge of the action's
rightness – the agent is still not in a position to fulfill the ideal of doing the right thing for the reasons that make it right, for fulfilling that ideal requires a kind of insight that she lacks.10 By contrast, when a person defers to another about geography, her doing so does not similarly frustrate the achievement of any ideal associated with agency. What about authenticity? Moral beliefs have a special connection to who we are: they reflect the depth of our moral concern in a way that geographical beliefs do not. In a case where someone's moral beliefs are authentic, they directly reflect what she cares about. When someone acquires the belief that eating meat is wrong by deferring to someone else, where relying on her own judgment would have led her to the opposite conclusion, her actions will at best indirectly align with what she deeply cares about.11 To be clear, there are cases in which it is all things considered better that an agent act on moral beliefs she has acquired from someone else. For example, someone who recognizes that her moral judgment is unreliable in a certain kind of situation, or realizes that it is impaired, might do what is all things considered best by deferring to someone she knows to be more reliable. Nevertheless, if this results in a mismatch between her practical judgment and what she cares about, then she falls short of the relevant ideal. The same thing is not true in the domain of geography. Someone who believes that Paris is the capital of France only because she read this in a book somewhere doesn't thereby exhibit a mismatch between what she cares about and what she believes. So we now have an idea of what epistemic autonomy could be, such that it matters for moral but not other kinds of beliefs, and we have an idea of why it would matter. Moral judgments based on opaque evidence won't be based on an agent's sensitivity to right-making features, and won't reflect her deepest concerns.
This matters because an agent who farms out her moral judgments to another person will typically not be in a position to do the right thing for the reasons that make it right.12 Notice that this story about the kind of autonomy that matters for moral judgment is entirely consistent with the Moral Inheritance Thesis. That is, even if moral judgment is associated with these ideals of autonomy, that is consistent with the claim that we can come to know moral claims by deferring to reliable sources, and with the claim that the standards for knowledge don't vary from domain to domain. Thus, our seemingly inconsistent triad is reconciled.
14.4 The Objection from Epistemic Reductionism

I have argued that a person who holds a moral view on the basis of opaque evidence will typically not be in a position to fulfill two important ideals associated with moral agency: the ideal of doing the right thing for the
reasons that make it right, and the ideal of acting authentically. Thus, on my view, what is valuable about arriving at the answers to practical questions on the basis of transparent evidence is not that this places the agent in a better position to get the answers right, or to achieve knowledge, or even to achieve reasonable belief. What's valuable is being better placed to act on one's own sensitivity to right-making reasons, and act in a way that reflects the depth of one's concern. These are ideals associated with moral agency, not requirements for moral knowledge. Thus, the explanation for why it is important to "make up one's own mind" when it comes to moral matters does not put the significance of epistemic autonomy for moral judgments in any direct tension with the Moral Inheritance Thesis. The claim that a belief acquired by deference does not put an agent in a position to fulfill these ideals is consistent with the idea that the standards for knowledge do not vary across domains.13 In this section, I focus on an objection to the claim that because testimony typically provides opaque evidence, it does not typically put one in a position to fulfill the ideal of doing the right thing for the reasons that make it right. I believe that a version of the objection would carry over to the claim that deferential moral beliefs do not typically put an agent in a position to act authentically, but for simplicity, I will focus on the objection to my claims about right-making reasons. According to the objection, a person who blindly defers to someone she takes to be a moral expert is doing the right thing for the reasons that make it right, even if the evidence provided by testimony is opaque evidence.
Whereas I have assumed that there is a robust distinction between (1) the reasons that make an action the right thing for an agent to do, and (2) the reasons that justify the agent in believing that the action is the right thing for her to do, some contemporary views of right-making reasons collapse this putative distinction. On views of this kind, whenever an agent has compelling reasons to believe that Φ-ing is the right thing for her to do, those reasons also make it the case that Φ-ing is the right thing for her to do in the circumstances. There is thus no genuine possibility, on this view, that an agent could have access to compelling reasons to believe that she ought to Φ but not have access to the reasons that make her Φ-ing right.14 According to the objection, acting on opaque evidence can make an action subjectively right: opaque evidence can be a subjectively right-making reason. Moreover, on this view, the ideal of doing the right thing for the right-making reasons is an ideal of subjective rightness.15 What does it mean to say that the ideal of acting on right-making reasons is an ideal of subjective rightness? We can illustrate this idea by considering a case in which an agent acts on the basis of non-moral testimony that turns out to be incorrect. So consider:

Wrong Medication: A mother gives her child antibiotics. The reason that she gives the child antibiotics is that the doctor told her that this
would make the child better. But the doctor was wrong; antibiotics will make the child worse.

Let's stipulate that the doctor's testimony provides the mother with opaque evidence: she believes antibiotics will make her child better, but she does not have the medical knowledge necessary for grasping why it will. The claim that the mother did the subjectively right thing is the claim that, given her evidence – the doctor's testimony – she did what she should have done. Even though she did the objectively wrong thing – the medicine that she gave the child is not the medicine that will actually make the child better – she is not in any way blameworthy. And more importantly for our purposes, even though she acts on opaque evidence, she does not fail to fulfill any ideal of moral agency: that she relied on opaque evidence does not make her action sub-par in any way. That the mother is in no way criticizable supports the idea that, if there is an ideal of moral agency that goes under the heading "acting for the right-making reasons," the ideal concerns subjective rightness. She did not fail to fulfill the ideal of acting on "right-making reasons," where right-making reasons consist in her evidence about what to do. Since she did the objectively wrong thing but did not fail to fulfill any ideal of moral agency, the objector concludes that the ideal of acting on right-making reasons must be an ideal of properly responding to one's evidence, and the relevant reasons must be subjectively right-making reasons. Thus, the objector concludes, we should agree that an agent who acts on the basis of opaque evidence can nevertheless fulfill the ideal of acting on the basis of right-making reasons. First, we should be clear about the way in which the doctor's testimony is a "right-making reason" in Wrong Medication. His testimony obviously doesn't make the medicine the right one for the mother to give in the sense of conferring healing powers on it.
The healing powers of antibiotics do not depend on what the doctor or anybody else says about them – when they do work, it is not because a doctor said they would. So at least in this case, we can draw a distinction between reasons that are constitutively right-making, and reasons that are not. Can we take this distinction between two ways that a feature can make a medicine right – constitutively and merely evidentially – and carry it over to other cases? It is hard to see why the distinction would not carry over. When Eleanor's friend tells her that it is wrong to eat meat, this obviously does not confer wrongness on meat eating, in the sense of "conferring wrongness" in which God would confer wrongness if Divine Command Theory were true. Assuming that we can draw this distinction, we can now ask whether there is an ideal of moral rightness that concerns responsiveness to these constitutive right-makers. The case of Wrong Medication might make it seem like there isn't: the mother in this example certainly doesn't fail to fulfill any ideal in virtue of not knowing which constitutive
features would make which medication right. But now compare Wrong Medication with:

Wrong Weight: Amelia has been considering quietly slipping into her office to do research, even though the building is closed to nonessential staff, due to the health emergency caused by a pandemic. She knows that it is wrong to break the rules in this way, but she also knows that if she stays home, she will be distracted by her children and she won't get any research done. She decides that all things considered, she should break the rules and go in.

Let's stipulate that as a matter of fact, the reasons against going in are stronger, and even though she won't be able to get any research done, Amelia should simply give up on her work ambitions for now, and not go into her office. Unlike the mother in Wrong Medication, Amelia is missing moral information: she doesn't know the correct answer to the question of how strong the reasons for and against breaking the rules are. By contrast, the mother in Wrong Medication knows that she ought to try to make her child well: she recognizes this as the objectively right thing to do. What the doctor's testimony provides her with is non-moral evidence about how to achieve this goal. Wrong Weight is a case of moral ignorance, one that does not derive from underlying non-moral ignorance. And I suggest that there is an ideal of moral agency Amelia fails to fulfill in virtue of the fact that she is not motivated to do the right thing by the transparent evidence available to her. Cases of moral ignorance raise the question of whether false moral testimony from a seemingly reliable source can make it subjectively right to do the objectively wrong thing, just as false non-moral testimony from a seemingly reliable source can make it subjectively right to do the objectively wrong thing. It is much less obvious that Amelia is blameless than that the mother in Wrong Medication is blameless.
Whether she is blameless depends on whether false moral beliefs are exculpatory in the way that false non-moral beliefs are. This issue is controversial: according to, for example, Rosen (2004), false moral beliefs exculpate. Provided that one has fulfilled one's procedural epistemic obligations, moral ignorance is as good an excuse for objectively wrong behavior as is non-moral ignorance. According to, for example, Harman (2011), they do not. On Harman's view, we are often in the situation of being morally obligated to believe the moral truths relevant to our actions, and are blameworthy if we do not. I believe that we can defend the idea that there is an ideal of responsiveness to objective right-makers without taking sides on this issue. Even if failure to believe the relevant moral truth were not itself blameworthy, failure to respond appropriately to objectively right-making features might still be a failure of moral agency. I take it that this is a lesson we can extract from Arpaly's discussion of Huck Finn. Huck's moral
expert – Miss Watson – has told him that Jim is someone's property, and that helping him escape would be wrong. But Huck is motivated by objectively right-making features: the course of action that will help Jim escape is the course of action he should pursue. Arpaly's point is that being motivated by right-making features is sufficient for praiseworthiness. The point here is about agents who are not motivated by right-making features: leaving open whether that is sufficient for blameworthiness, they nevertheless fail to live up to an ideal of moral agency. So, for example, contrast Wrong Weight with:

Wrong Weight Revised: This case is just like Wrong Weight, except that Amelia stays home because she tells someone whose moral judgment she trusts what she has decided to do, and this person tells her that this would be wrong and that she should comply with the rule.

Even if Amelia does the right thing in this case, the fact that she was not sensitive to the objectively right-making features seems to be a failure to live up to an ideal associated with moral agency. Just as there are subjective reasons that constitute an agent's evidence as to what she objectively ought to do, there are objective features in virtue of which the right aims are objectively right. When an agent knows what the objectively right-making features are, but can't recognize which act has the objectively right-making features, she relies on her evidence. So long as the agent responds properly to the evidence that she has, she does the subjectively right thing. Thus, the mother in Wrong Medication who recognizes that this will make my child well as an objectively right-making feature, and responds properly to her evidence about which course of action has that feature, does not fall short of any ideal of moral agency. But Wrong Weight does not fit this model.
Thus, even if we grant that any piece of evidence that an agent possesses regarding what to do is, in some sense, a right-making reason, we can still maintain that there is a distinction between objectively right-making features and the reasons or evidence that support competing claims about how different courses of action might be connected to performing the action with the objectively right-making features. A good moral agent will have the right ends, in the sense that she will aim at the things that have objectively right-making features. An agent who relies on testimony to find out what the objectively right-making features are or how to balance them thus fails to fulfill an important ideal of doing the right thing for the reasons that make it right.
14.5 Conclusion

In this chapter I have argued that the claim that much of our moral knowledge is uncritically absorbed from our environments is consistent with the idea that epistemic autonomy has a special significance when it
comes to moral judgment. The reason is that the kind of epistemic autonomy that matters for moral judgment involves the special connection between moral judgment and action, rather than a special requirement for moral knowledge. This may have prompted my reader to wonder whether the kinds of autonomy that I have claimed matter for moral judgments really do deserve the name "epistemic," or whether this chapter somehow ended up in the wrong book. After all, the central claim has been that these kinds of autonomy are not necessary for securing any epistemic goods, such as knowledge, justification, or reasonable belief. However, I think that reasons-responsiveness and authenticity actually are varieties of autonomy that count as epistemic. As I understand them, they can be used to characterize judgments: someone who has inherited most of her moral views can nevertheless make moral judgments that can be characterized as more or less reasons-responsive and authentic. Insofar as these are characterizations of how her judgments are formed and what they reflect, they can properly be considered epistemic.
Notes

1 For helpful comments on an earlier draft of this chapter, I am grateful to James Dreier, Elizabeth Harman, Kirk Lougheed, and Jonathan Matheson.
2 Coady (1992) is a seminal work on testimony that makes this kind of argument. For similar arguments see also Foley (1994), Plantinga (1993), Strawson (1994), and Webb (1993).
3 I discuss and defend this claim further in my (2019: esp. pp. 59–67).
4 I consider a variety of explanations for this asymmetry in my 2009, 2011, and 2019. While in my previous work I consider various hypotheses as to why moral deference seems problematic in a way that non-moral deference does not, I do not address the question of whether the sense that moral judgment must in some sense be autonomous can be reconciled with the claim that much of our moral knowledge is highly dependent on our social environments.
5 Hills (2009: 94). For discussion of similar examples see McGrath 2009, 2011, and 2019.
6 See, for example, Driver (2006) and Howell (2014).
7 Arpaly (2003: 117–225) distinguishes these eight different senses of autonomy in order to argue that theorizing about moral responsibility would proceed more productively if it were to bypass discussions of autonomy altogether.
8 This may be the kind of autonomy that Howell has in mind when he rejects the view that moral deference compromises autonomy: "It doesn't seem that the agent is acting unfreely in deferring. Assuming she was an autonomous agent at the time, she could freely and autonomously choose to defer. Similarly, after the agent possesses moral knowledge, it seems she is free to do the right or the wrong thing and there is no reason she is not doing so autonomously. There is, to be sure, a sort of dependence of the deferrer on the adviser, but it doesn't seem that this dependence is in conflict with autonomy." (Howell 2014: 400)
9 The fact that moral testimony does not typically deliver moral understanding is noted by Nickel (2001) and Hopkins (2007), and emphasized by Hills (2009).
10 A parallel point holds with respect to the ideal of refraining from wrongdoing for the reasons that make an action or behavior wrong: if a child refrains from lying on a given occasion because he knows via testimony that lying is wrong but has no grasp on why it is, then his refraining from lying on that occasion is not based on the reasons that there are not to lie.
11 For defense of the idea that what seems peculiar about moral deference is that it results in moral beliefs that fail to live up to an ideal of authenticity, see Mogensen 2017.
12 I argue for the claim that a person who acts on moral testimony is not in a position to do the right thing for the reasons that make it right in McGrath 2011 and McGrath 2019. Arpaly (2003) and Markovits (2010) agree that it is suboptimal when an agent is not in a position to do the right thing for the reasons that make it right. But according to Markovits, you are in such a position when you act on moral testimony.
13 It is also consistent with her being praiseworthy for deferring about whether to Φ as opposed to making up her own mind. When an agent is in a position to recognize that someone else is more likely to get the answer to the question of what to do correctly, then it will be true that this is what she ought to do, and she can be praiseworthy for recognizing that doing the right thing is more important than autonomously recognizing what the right thing is.
14 See, especially, Kearns and Star (2009) and Markovits (2010).
15 A version of this objection can be found in Markovits (2010). Driver (2006) argues, on similar grounds, that the explanation of our negative reaction to moral deference can't be that the agents who defer to moral experts lack autonomy in the responsiveness-to-reasons sense.
References

Anscombe, E. (1981). Authority in morals. In Ethics, religion, and politics. Minneapolis, MN: University of Minnesota Press.
Arpaly, N. (2003). Unprincipled virtue. Oxford: Oxford University Press.
Coady, C. (1992). Testimony: A philosophical study. Oxford: Oxford University Press.
Driver, J. (2006). Autonomy and the asymmetry problem for moral expertise. Philosophical Studies, 128, 619–644.
Foley, R. (1994). Epistemic egoism. In F. Schmitt (Ed.), Knowledge and the social (pp. 53–73).
Hare, R. (1962). Freedom and reason. Oxford: Oxford University Press.
Harman, E. (2011). Does moral ignorance exculpate? Ratio, 24, 443–468.
Hills, A. (2009). Moral testimony and moral epistemology. Ethics, 120(1), 94–127.
Hopkins, R. (2007). What is wrong with moral testimony? Philosophy and Phenomenological Research, 74(3), 611–634.
Howell, R. (2014). Google morals, virtue, and the asymmetry of deference. Noûs, 48(3), 389–415.
Kearns, S., & Star, D. (2009). Reasons as evidence. Oxford Studies in Metaethics, 4, 215–242.
Markovits, J. (2010). Acting for the right reasons. Philosophical Review, 119(2), 201–242.
McGrath, S. (2009). The puzzle of pure moral deference. Philosophical Perspectives, 23, 321–344.
McGrath, S. (2011). Skepticism about moral expertise as a puzzle for moral realism. The Journal of Philosophy, 108(3), 111–137.
McGrath, S. (2019). Moral knowledge. Oxford: Oxford University Press.
The Case of Moral Inheritance 287

Mogensen, A. (2017). Moral testimony pessimism and the uncertain value of authenticity. Philosophy and Phenomenological Research, 45(2), 261–284.
Nickel, P. (2001). Moral testimony and its authority. Ethical Theory and Moral Practice, 4(3), 253–266.
Plantinga, A. (1993). Warrant and proper function. Oxford: Oxford University Press.
Rosen, G. (2004). Skepticism about moral responsibility. Philosophical Perspectives, 18, 295–313.
Strawson, P. (1994). Knowing from words. In B. Matilal & A. Chakrabarti (Eds.), Knowing from Words (pp. 23–28). Dordrecht: Kluwer Academic Publishers.
Webb, M. (1993). Why I know about as much as you: A reply to Hardwig. Journal of Philosophy, 90(5), 260–270.
Wolff, R. P. (1970). In defense of anarchism. New York: Harper and Row.
15 Epistemic Autonomy and the Right to be Confident

Sanford Goldberg
15.1 Introduction

In this chapter I query whether epistemic autonomy, as an ideal of epistemic self-reliance, can be motivated by (or understood in terms of) the conditions on having the right to be confident. In Section 15.2, I characterize the doctrine of epistemic autonomy and suggest why this might seem to capture an ideal of epistemology. In Section 15.3, I introduce the notion of having the right to be confident, and begin to suggest why one might try to motivate epistemic autonomy as an ideal in terms of the right to confidence. In Section 15.4, I present an argumentative strategy that purports to do just this, and I identify a key assumption of this strategy. In Section 15.5, I present an example that, I argue, is a counterexample to the key assumption, thereby pointing to the failure of this strategy. In Section 15.6, I argue further that the reason for this failure strongly suggests that an insistence on epistemic autonomy as an ideal runs counter to the robust practices of epistemic reliance that characterize our mutual epistemic dependence on one another. Section 15.7 concludes.
15.2 Epistemic Autonomy (EA)

As I will understand it in what follows, the doctrine of Epistemic Autonomy amounts to the following claim:

EA Ideally, in all circumstances each epistemic subject ought to rely only on her own cognitive-epistemic competences (reasoning, perception, memory, introspection, reflection, and so forth) in the formation and sustainment of her doxastic states.1

EA states what its proponents take to be an epistemic ideal: the claim is that it is ideal, epistemologically speaking, for each individual always to rely exclusively on her own faculties. The guiding thought is that it is best, epistemically speaking, for the subject to be able to tell for herself
what to believe, and this is realized only when she restricts herself to relying exclusively on her own faculties (taking nothing on board that she cannot confirm for herself with those very faculties).2

As a thesis about the epistemically ideal way to comport one’s doxastic life, EA is related to, but ultimately distinct from, a thesis regarding the epistemic materials that are relevant to epistemic assessment. I call this latter thesis Epistemic Individualism, according to which

EI For all subjects S and doxastic states D (where D is a state of S’s), the only epistemic materials that are to be taken into account in an epistemic assessment of D are those provided by S’s own cognitive-epistemic competences.

There is reason to suppose that all proponents of EA will endorse EI. For it is plausible to suppose that the epistemic materials provided by S’s own cognitive-epistemic competences are exhausted by S’s cognitive-epistemic competences acting on the inputs into S’s cognitive system.3 So insofar as a subject does best, epistemically speaking, when she forms her beliefs using only those epistemic materials provided by her own cognitive-epistemic competences, EI is the epistemic ideology that informs EA.

There is something attractive about the view that EA itself captures an epistemic ideal. For one thing, conformity to EA appears to inoculate one against a certain kind of evidential insensitivity. Suppose that S, an otherwise epistemically competent subject, believes that p through accepting another speaker R’s say-so that p. Suppose further that (unbeknownst to S) R’s evidence for p is E. Since S is unaware that R’s grounds for p are E, S may continue to believe that p even if S herself either has, or comes to acquire, grounds to suspect that E cannot support the proposition that p.
Such a state seems far from ideal, as it is tantamount to a failure on S’s part to be in a position to appreciate the bearing of her evidence on her own belief.4 By contrast, an epistemically competent subject who is epistemically autonomous – one who conforms to the ideal in EA – is in a superior position to S on this score. This is because the autonomous subject can avoid the sort of scenario just described, at least in principle. She relies only on the evidence provided by her own cognitive competences, so the bearing of her evidence on her own belief will not in this way be obscured from her. It can still happen, of course, that she forgets that she has some piece of evidence, or that she fails to keep some of her evidence in mind, but this is another matter. The point is that the bearing of her evidence on her beliefs is always available to her, in principle if not in fact. We can see this in connection with the manner in which the autonomous subject relies on others’ testimony: she will not accept the say-so of another unless she has adequate firsthand evidence to think that the say-so was reliable. What is more, it is this evidence, rather than the mere fact of the say-so (or the evidence
that supports that say-so), that constitutes the autonomous subject’s evidence for believing the proposition in question, when she believes on the basis of testimony.5

In addition, EA captures the thought that each of us ought to think for ourselves, and that no one should be unduly influenced by others. What is more, EA itself suggests a way to manage the influence of others on one’s own belief system so as to ensure against undue influence: others’ influence on one’s beliefs ought to be resisted unless and until one can independently vindicate for oneself the (likely) truth of what one has been told. Such a conception of “thinking for oneself” has been held up as one of the central lessons of the Enlightenment. We can illustrate the present selling-point of EA by noting the role EA itself might be thought to play in diagnosing what goes wrong in cases in which people are unduly influenced by the attitudes of others.6 Consider the following (admittedly somewhat cartoonish) examples.

GURU
Sam has decided to follow the Guru. He believes whatever the Guru tells him, just because the Guru said so. He has learned to stifle what originally were doubts as to some of the things the Guru said. He has perfected this practice and has become extremely agile at “explaining away” whatever would-be counterevidence reaches him.

PERSISTENT PROPAGANDA
Monica lives in a country in which the head of state, a charismatic but vicious and venal megalomaniac, persistently lies. What is more, the head of state has managed to co-opt the country’s most-relied-upon news source. Consequently, the TV stations, websites, and print media owned by this source promote his lies. Unfortunately, Monica’s news consumption is restricted to these sources. She is bombarded by these lies so often that she no longer stops to consider reports from the few alternative sources she happens to encounter.
Rather, she has learned to tune them out and to explain them away, in the manner suggested by the head of state, as “fake news.”

PEER PRESSURE
Frank is a politically active student at a progressive liberal arts college. He is a member of various political groups on campus, he is extremely active in each, and he very much values his reputation in them. It happens occasionally that he has doubts as to the truth or wisdom of some of the things other members say. Still, he values his reputation in these groups very much, and he is terrified of the prospect of losing this standing. Consequently, he has learned to stifle these concerns. Soon he finds himself naturally endorsing and believing whatever the group members themselves endorse.
GASLIGHTING AND CORROSIVE SELF-DOUBT7
Angelica is in an abusive relationship with a man who constantly says things that, taken together, lead Angelica to question her own judgment. Soon, she finds herself full of self-doubt; she learns that the way to avoid conflict and confrontation is simply not to question anything her partner says. Though it is hard at first, she eventually comes to be able to do so.

The natural response to cases like these is that in each case the subject is led to put less and less stock in their own faculties and their own judgment – and that they do so to their own epistemic detriment. The doctrine of epistemic autonomy might seem to provide the right diagnosis in each of the cases. It might be thought that in each case the source of the problem is the pressure on the subject to “cede” the burdens of judgment to another person or persons. In many of these cases it would be cruel and unfair to blame the subject for doing so; but we can diagnose the problem without placing any blame on the subjects themselves. When one cedes the burdens of judgment to another in this way, one renders oneself potentially insensitive to the probative force of one’s own evidence, and one thereby opens oneself up to the prospect of insensitivity to considerations that ought to lead one to give up a belief (or acquire a new belief). Those who are epistemically autonomous, of course, are not open to this sort of outcome, at least in principle. They apportion their beliefs to their evidence. Thus, the bad outcomes above would be avoided if only the subjects themselves were autonomous in their belief-formation. This is the ideal toward which we all ought to strive. Or so say those who embrace EA as an ideal.
15.3 The Right to be Confident

For those who are familiar with the history of epistemology, there is another tradition that might be thought to be very much in the spirit of the tradition emphasizing autonomy as an epistemic ideal. I refer here to the tradition whose approach to epistemic evaluation involves assessing whether a subject’s doxastic attitudes are ones for which she enjoys the right to be confident. While this tradition is perhaps not as familiar as it once was, it has an impressive history. In English-speaking epistemology, the use of the notion of a right to confidence can be found at least as far back as the nineteenth century. Thus, in On Liberty, John Stuart Mill wrote,

In the case of any person whose judgment is really deserving of confidence, how has it become so? … [K]nowing that he has sought for objections and difficulties instead of avoiding them, … – he has a
right to think his judgment better than that of any person, or any multitude, who have not gone through a similar process. (Mill, 2015: italics added)8

And, writing several decades after the original publication of On Liberty, William Clifford describes the well-known case of the infamous shipowner in “The Ethics of Belief” as follows:

It is admitted that he did sincerely believe in the soundness of his ship; but the sincerity of his conviction can in no wise help him, because he had no right to believe on such evidence as was before him. He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts. And although in the end he may have felt so sure about it that he could not think otherwise, yet inasmuch as he had knowingly and willingly worked himself into that frame of mind, he must be held responsible for it. (Clifford, 1877/1901: emphasis added)9

Moving on to the latter half of the twentieth century, A.J. Ayer (1961, 2020) is perhaps the best-known of those who employ the notion of a right to be confident, going so far as to define “knowledge” as “the right to be sure” (Ayer, 2020).10 And more recently still, several others have employed notions similar to that of the right to be confident in their epistemological theorizing.11

A word or two about the notion of a “right” is in order. The term “right” itself designates something like an entitlement or a permission – a status the possession of which confers a certain normative standing on a subject. Obviously, when we speak of having a right to be confident, the sort of standing we have in mind is not legal in nature. Rather, we have in mind a permission of an epistemic kind. Precisely what this comes to is a matter to which I will be returning below. But for now we can think of this right as an epistemic one – it is an epistemic permission, i.e., a permission to be confident, to believe as one does, etc.
It is natural to suppose that the epistemological tradition which theorizes in terms of a right of this sort is very much in the spirit of the epistemological tradition which emphasizes epistemic autonomy as an ideal. After all, it is natural to suppose that having the right to be confident is something one enjoys when one competently forms and sustains belief on the basis of one’s total evidence. Stronger, it is natural to suppose that one’s total evidence is the very source of this right. Insofar as one’s total evidence itself is determined by the operation of one’s cognitive-epistemic competences on the inputs into one’s cognitive system, as I noted in connection with the doctrine of Epistemic Individualism (EI), one’s total evidence is then the evidence one has when one conforms to EA. In this manner we might think that there is an argument to be made, from the conditions on
enjoying the right to be confident, to EA itself. The argumentative strategy in question would aim to show that one enjoys the right to be confident only insofar as one is epistemically autonomous. This would be a way to motivate epistemic autonomy as an ideal of epistemology. But can such an argument succeed? I want to argue that the marriage between EA and the notion of a right to confidence is not a happy one, in that the latter is better seen as a tool in the hands of those who favor a more thoroughgoingly social epistemology.
15.4 From the Right to Be Confident to EA?

Let us begin, however, by considering the sort of argument I aim to target: one that purports to vindicate EA by connecting epistemic autonomy to the conditions on the right to be confident. The connection itself might be developed as follows. A subject is epistemically autonomous to the extent that she forms her beliefs by relying exclusively on her own cognitive-epistemic competences. If she does so competently, her beliefs will be warranted by her own total evidence. One who is not epistemically autonomous, by contrast, is one who forms beliefs in ways that do not exclusively rely on her own cognitive-epistemic competences. This is to say that on at least some occasions she relies on others’ say-so in circumstances in which she does not independently confirm for herself the truth of what she was told or the reliability of the say-so. Such a non-autonomous subject is one who is likely, from time to time, to form beliefs that are not warranted by her own total evidence. Such scenarios, we can say, are a possibility for the non-autonomous subject. Insofar as the right to confidence is coextensive with forming and sustaining one’s doxastic attitude when and because the attitude is warranted by one’s own total evidence,12 we can conclude that only those who are epistemically autonomous (and who are competent in relying on their own cognitive-epistemic faculties) are guaranteed to be in a position to enjoy the right to confidence for each of their beliefs. Since it is ideal for each epistemic subject to enjoy the right to confidence for each of her beliefs, autonomy is thereby vindicated as an epistemic ideal.

It is worth highlighting a key assumption of this line of reasoning. I will call this assumption SUFFICIENCY (or “SUFF” for short), which I formulate as follows:

SUFF Competent employment of one’s cognitive-epistemic faculties is sufficient for the right to be confident in one’s doxastic attitudes.
SUFF is assumed in the line of argument above, as that reasoning assumes that securing the truth of SUFF is a desideratum for epistemological theorizing. It is by assuming this that the proponent of
this argument paves the way for its main contention: we secure this desideratum only if subjects conform to the demands of Epistemic Autonomy (EA). In response, I want to deny that securing the truth of SUFF is a desideratum for epistemological theorizing. And I want to do so by calling into question another claim made in the line of reasoning above: that enjoying the right to confidence is coextensive with forming and sustaining one’s doxastic attitude when and because the attitude is warranted by one’s own total evidence. On the contrary, I submit that there are cases in which a subject S forms the belief that p when and because such a belief is warranted by S’s total evidence, and yet even so it is false that S has the right to be confident that p. This claim, which is the burden of the next section, will pave the way towards a more social account of the conditions on a right to confidence – and with it a re-evaluation of the case for thinking that EA captures an epistemic ideal.
15.5 A Counterexample

In this section, I argue that the guidance provided by the doctrine of epistemic autonomy would sometimes undermine the robust practices of epistemic reliance that characterize our mutual dependence on one another. I will do so by focusing on the assumption that enjoying the right to confidence is coextensive with forming and sustaining one’s doxastic attitude when and because that doxastic attitude is warranted by one’s own total evidence. We can think of this targeted assumption as a BICONDITIONAL, as follows:

BI One enjoys the right to confidence in one’s doxastic p-attitude if and only if one’s doxastic p-attitude is formed and sustained when and because it is warranted by one’s own total evidence.

Though I will be targeting BI, I will focus, not on BI itself, but rather on the special case in which the doxastic p-attitude in question is the belief that p. That is, I will be targeting

BIBEL One enjoys the right to confidence in one’s belief that p if and only if one’s belief that p is formed and sustained when and because it is warranted by one’s own total evidence.

And I will target BIBEL by focusing on the right-to-left conditional that makes up BIBEL. This is the conditional that

RBIBEL If one forms and sustains the belief that p when and because one’s total evidence warrants this belief, then one enjoys the epistemic right to be confident that p.
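For readers who find it helpful, the logical relationship among these theses, and the shape of the refutation to follow, can be set out schematically. The shorthand below is mine, introduced purely for illustration: W(S, p) abbreviates “S forms and sustains the belief that p when and because it is warranted by S’s own total evidence,” and R(S, p) abbreviates “S enjoys the epistemic right to be confident that p.”

```latex
\[
\begin{aligned}
\textsc{bibel:}\quad & R(S,p) \;\leftrightarrow\; W(S,p)\\
\textsc{rbibel:}\quad & W(S,p) \;\rightarrow\; R(S,p)\\
\text{counterexample sought:}\quad & W(S,p) \;\wedge\; \neg R(S,p)
\end{aligned}
\]
```

Since RBIBEL is one direction of BIBEL, and BIBEL is an instance of BI, a single case in which W(S, p) holds while R(S, p) fails would suffice to refute all three.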
I want to argue that (RBIBEL) is false. In bringing this out I will highlight how the guidance provided by epistemic autonomy can lead us astray. To this end I offer the following case. The scenario itself is in the spirit of Clifford’s shipowner, though the lesson to be learned is a slight strengthening of the lesson Clifford himself derived from the shipowner scenario. Here is the case:

LAWYER
Pacheco is a professional lawyer who has been asked by a client about negligence in a case of strict liability. Having recently graduated from law school, Pacheco remembers well what she learned about tort law (her particular favorite area of law). She distinctly recalls having recently read that in strict liability torts one’s negligence extends in such-and-such a way. (Let p be the proposition that Pacheco distinctly recalls having read.) She realizes that the law can change. Even so, she has no reason to believe that tort law has changed in the very limited time that has transpired since she read that p in law school, and she has reason as well to believe that if tort law had relevantly changed she would have heard about it (which she didn’t). So she continues to believe that p. It turns out, though, that she is wrong: relevant aspects of the law have changed in the short interim.

Regarding LAWYER, I want to say the following two things. First, at the moment she is asked by the client about negligence in a case of strict liability, Pacheco’s total evidence warrants the belief that p – and she sustains this belief because of this. But second, even so, at that time Pacheco did not have the epistemic right to be confident that p. If this is so, the verdict in LAWYER is a counterexample to (RBIBEL), and so is a counterexample to BICONDITIONAL. I will defend these two claims in turn.
Let us start with the first claim: at the moment she is asked by the client about negligence in a case of strict liability, Pacheco’s total evidence warranted the belief that p. We can stipulate, as part of the case, that she sustains the belief on this basis; the real question is whether her total evidence warrants her belief so sustained. Pacheco has vivid memories from law school, and she distinctly recalls reading a report in which she learned that p. True, she knows that the law changes. But she also knows that changes in the law are both relatively infrequent and tend to take a very long time. What is more, she has excellent reason to believe that such a period has not elapsed in the interval since she read that p; she knows as well that she has not heard of any change in the interim; and she has reasons to think that if the law had changed in relevant ways, she would have heard about it.13 Taken together, these considerations make clear that her total current evidence warrants the belief that p. It might be thought that the mere fact that she is aware that the law changes, and that she did not bother checking again, undermines the
claim that her total evidence warrants the belief that p. But given what was just said about the state of her evidence, it is hard to see how this can be. Mere awareness that the law sometimes changes is not evidence that it has changed. So, too, with her awareness that she hasn’t bothered checking again: this does not constitute evidence that the law has changed. At best these pieces of her evidence might lead her to temper her confidence that the law hasn’t changed. The real question, then, is whether this evidence ought to temper her confidence to the point that she is no longer warranted by that evidence in believing that it hasn’t changed. But we have already seen that given her total evidence – including her evidence that changes in the law take time, and that she would have heard of a change had there been one – she remains warranted in believing as she does. Denying this would have widespread skeptical ramifications. Compare: do you lose warrant for the belief that your car remains where you parked it merely because you recognize that cars are stolen, and you haven’t checked in a few hours? It would seem that unless you have some reason to think that it was stolen in the interim, or that there is an increased likelihood that it was stolen, your evidence warrants your belief that your car is precisely where you left it, despite your recognition that you haven’t checked recently (and that cars do get stolen).14

It might be thought that the fact that Pacheco is a lawyer, and so had a professional duty to check, undermines the claim that her total evidence warrants the belief that p. But again it is hard to see how this can be. Granted: Pacheco has such a duty, and she is aware of this. But neither of these things constitutes evidence that the law has changed. Having a professional duty to collect evidence, and recognizing that one has such a duty, are not themselves evidence of any change in the world.
So even when one is aware of having that duty – and so, in this sense, even when one has evidence that one has that duty – this does not amount to one’s having evidence that the further evidence one ought to have would undermine one’s current belief. I conclude, then, that the first claim regarding LAWYER is true: at the moment she is asked by the client about negligence in a case of strict liability, Pacheco’s total evidence warrants the belief that p (and she sustained the belief on this basis).

I now move on to the second of the claims I want to defend regarding LAWYER: at the moment she is asked by the client about negligence in a case of strict liability, Pacheco did not have the epistemic right to be confident that p. There are several nearly-equivalent ways to bring this out. The basic idea is that, as a lawyer being relied upon in her capacity as such, Pacheco had a professional responsibility to know the relevant aspects of the law; because she failed in this regard and her failure is implicated in her having a false belief, she did not have the right to be confident that p at the time of the query. As we might put it: her professional responsibilities as a lawyer required her to re-confirm her belief that p, under conditions in which had she done so she would have seen
that this belief is false. An alternative way of making much the same point can be put in terms of evidence: there is further evidence she ought to have had, and, if she had acquired that further evidence, her then-total evidence would not have warranted the belief that p. However one spells it out, it seems that her dereliction of her professional duties as a lawyer, together with the falsity of the belief in question, undermines her right to confidence.15

We can reinforce the point that Pacheco fails to have this right by showing that the contrary assumption leads to false predictions. Suppose (for reductio) that Pacheco did, in fact, have the right to be confident that p. Then we would predict all of the following to be true: (1) Pacheco is entitled to treat p as reasonable in thought and decision-making throughout the exchange with her client; (2) no one is in a position to level an epistemic criticism at (or make a negative epistemic appraisal of) Pacheco’s belief that p; and (3) Pacheco has a solid excuse if some endeavor fails owing to the falsity of her belief that p. But I submit that none of (1)–(3) hold.

Regarding (1), it is not the case that Pacheco is entitled to treat p as reasonable in thought and decision-making in her exchange with her client. Were she to do so, she could be criticized for relying on the assumption that p, on the grounds that, given her professional duties as a lawyer, she should have known what the law was. Here I submit that this is a criticism of her belief, not (or not merely) of any subsequent action she might take: her responsibility as a lawyer is to know the law.16 This suggests that (2) is false as well, since others are entitled to expect her, qua lawyer, to know the law. Finally, similar considerations support the further contention that (3) is false: it is not the case that she has a solid excuse if some endeavor fails owing to the falsity of her belief that p.
On the contrary, her responsibilities as a lawyer undermine the solidity of the excuse she might otherwise try to claim by appeal to the reasonableness of p on her current evidence. Even though p is reasonable on her evidence, her responsibilities as a lawyer require her to stay up on any changes in the law – with the result, in effect, that there is evidence she should have had. To be sure, Pacheco had no reason to think, and indeed some reason to doubt, that any further evidence she could acquire as to the present state of the law would infirm her belief that p. But Pacheco cannot rest here: she had a professional responsibility which she failed to fulfill. This is the crack in her excuse.

I conclude, then, that the second claim I want to make about LAWYER holds as well: Pacheco does not have the right to be confident that p.17 Since she formed her belief that p when and because this belief was warranted by her total evidence (the first claim), we have our counterexample to (RBIBEL). And since (RBIBEL) is (logically equivalent to) the right-to-left conditional in BIBEL, and since BIBEL is a strict implication of BI, we reach the conclusion that BI, too, is false. Finally, since BI is the BICONDITIONAL that is assumed by the line of argument from Section
15.4, we have effectively blocked that line of argument. This is not to say that there are no other ways to establish EA. On the contrary, in Section 15.2 I pointed to several other considerations that might be thought to motivate EA. (I will return to these below.) My conclusion from this section, rather, is more limited in scope: the particular argumentative strategy I bruited in Section 15.4, which aimed to establish EA by appeal to the notion of the right to confidence, turns out, on reflection, not to motivate autonomy as an epistemic ideal after all.
15.6 EA and Our Practices of Epistemic Reliance

In fact, I think that the limited conclusion so far – that one particular line of argument to the conclusion asserting epistemic autonomy as an ideal is blocked – is much more significant than it might appear. This is because the point it illustrates is a general one: what examples like LAWYER show us is that there are cases in which the advice of epistemic autonomy runs counter to what is properly expected of us in the various roles we play, when we are engaged in the practices of epistemic reliance that characterize our mutual epistemic dependence on one another.

A first point to make is this: what is at issue in LAWYER is a special case of a much more general phenomenon. Take any case in which one has professional or institutional duties, where the satisfaction of those duties requires that one be in a certain epistemic state. The epistemic state in question could involve any of a variety of things: knowledge that one should have, evidence one should possess, an awareness of a certain range of alternatives that (one should realize) need to be ruled out, familiarity with a certain methodology that (one should know) is to be used in a given domain (alternatively: certain sources that are to be consulted in that domain), competence in certain forms of inference (and the sensitivity involved in knowing when to employ those forms of inference), and so forth. Given any such case, there can be variations with the following two features: (1) the total evidence in one’s possession – including the total higher-order evidence bearing on any additional evidence one should have in connection with one’s professional or institutional duties – warrants the belief that p; and yet even so (2) one doesn’t have the right to be confident that p.
Such cases will obtain whenever one believes that p despite failing to satisfy one or more of the professional or institutional duties, where one’s having failed to do so results in one’s not having evidence one should have had, and where if one had had this evidence one’s then-total evidence wouldn’t have warranted the belief that p. In other work, I have described such cases as cases of normative defeat (Goldberg, 2017, 2018, forthcoming). In fact, we might think that the sort of cases that involve the potential for this sort of normative defeat go beyond those involving professional or institutional duties. One of the central theses of Goldberg (2018) was
that in any case in which some subject has a legitimate normative expectation of another subject’s epistemic state, we can have a situation in which a version of (1) and (2) hold. Such expectations, I argued, might derive from the norms of various social practices in which we rely on one another for information. Some of these practices involve reliance on subjects in their capacities as professionals or as playing one or another institutional role: one thinks here of the practices by which we certify and rely on experts and other professionals. But in addition there can be social practices that emerge amongst groups of people, where the practice of reliance does not involve anyone acting in any professional or institutional capacity: consider the epistemic reliance involved in cases between business partners, family members, friends, members of informal groups, and so forth.18 I submit, then, that the phenomenon at play in LAWYER is a special case of a much more general (and pervasive) phenomenon. What holds in LAWYER holds in each of the various types of case just described: what all of these cases make plain is that the “right to be confident” is a socio-epistemic standing that is shaped at least in part by what others are (normatively) entitled to expect of one’s epistemic condition. The standing is epistemic in its ground: one enjoys this standing in virtue of one’s possession of (adequate) relevant epistemic good-making materials. But the standing is social in its (normative) effects: the permission or entitlement it affords is best understood in terms of the normative standing one occupies relative to others when one enjoys this standing. Insofar as we insist on epistemic autonomy as an ideal, and insofar as we interpret this as the ideal of epistemic self-reliance, we will be under pressure to distort the social dimension of this standing. 
The pressure here derives from the idea (at the heart of EA) that what is best, epistemically speaking, is for each subject to conform to her own evidence; and the distortion arises in those cases in which the subject fails to have the evidence properly expected of her, even as her total evidence warrants her belief. The foregoing points to a general lesson: the social nature of the standing conferred by having the right to confidence gives us a basis for rejecting epistemic autonomy as an epistemic ideal. To bring this out, I begin with a point with which even friends of Epistemic Autonomy will agree: the autonomous subject is not free to concoct her own epistemic standards. On the contrary, everyone should agree that the standards themselves are objective, and hence imposed on us (including on the epistemically autonomous subject), at least in the minimal sense that an epistemic subject might be mistaken about the epistemic standards to which she is answerable. (As all will agree, a subject who thinks that it is a good epistemic principle to read the tea leaves is a subject who can and should be open to epistemic criticism.) But this means that the scope of autonomy has limits: even as envisaged by EA, the epistemically autonomous subject is not fully autonomous, but rather is
answerable to a standard that is fixed independent of him. What makes him autonomous, rather, is that in fixing belief (which is then answerable to that standard) he relies only on his own faculties, trusting nothing but what he can confirm for himself. He is invariably guided by his own light – a light that is generated by his own total evidence. The grounds for questioning whether epistemic autonomy is an epistemic ideal arise when we consider that the epistemically autonomous subject is an epistemically blinkered subject, in that the only normative pressures to which he takes himself to be subject are those deriving from his total evidence and his set of values. Such a subject, I submit, is not particularly well-suited to the role of being relied upon as a participant in the sorts of social practices that characterize our socio-epistemic lives. To be sure, insofar as the epistemically autonomous subject is aware of these practices and his roles in them, he will be aware of the responsibilities he bears. And insofar as he adequately values his roles in these practices, he will be motivated to act in accord with his responsibilities. But there is nothing about the epistemically autonomous subject per se that guarantees or even makes it likely that he will be well-suited in either or both of these ways. And if he fails in either of these ways, he will not be a good participant in the set of practices in which we rely on one another. Our point is easy to appreciate if we focus on the distinctly epistemic dimension of the autonomous subject. Suppose S is such a subject. As I noted above, S’s awareness of her currently unfulfilled responsibilities is not, by itself, evidence bearing on the truth of her beliefs. 
There will be cases, of course, in which S’s awareness of her responsibilities does involve having reasons to question her current beliefs – as when S has reasons to think that the evidence she has a responsibility to collect will likely show her current beliefs to be false. But there will also be other cases in which S’s awareness of her responsibilities involves no such reasons for questioning her current beliefs; and when this is so, our epistemically autonomous subject who competently assesses her own total evidence will regard her present belief set as epistemically unobjectionable. This is why she is not particularly well-suited to the role of being epistemically relied upon by others. More generally, epistemic autonomy would not appear to be the epistemic ideal of any epistemic community whose members exhibit systematic and irreducible epistemic reliance on one another.19 We started out with the thought that perhaps we can vindicate epistemic autonomy as an ideal by appeal to the claim that only epistemically autonomous subjects are in the position to enjoy the right to be confident with respect to each of their beliefs. But we have seen that this approach fails to appreciate that the right to be confident is a socio-epistemic standing that implicates one’s responsibilities in the various social roles one plays in the various epistemic communities of which one is a member. The social dimension of this standing enables us to see the limitations of
epistemic autonomy as an epistemic ideal. One has the epistemic right to confidence in virtue of having evidence that warrants confidence when that evidence satisfies the expectations others are entitled to have of one in virtue of the public, professional, or institutional roles one is playing in one’s epistemic communities. While it may well be that each one of us is in a position to choose autonomously what roles one opts to play, once one has done so, one doesn’t get to choose the expectations others are entitled to have of one – any more than one gets to choose the epistemic standards to which one will be answerable. If one wants to be a professional of some kind, but one also thinks the expectations associated with that profession are unacceptable, one is not free simply to dismiss those expectations. On the contrary, one inherits the burden of making one’s case to the profession, and one remains answerable to the expectations associated with the profession until such time as the profession itself has sanctioned the changes in question.20 The same holds true more generally of the social practices in which one participates. This highlights another (related) reason for doubt about whether autonomy is an epistemic ideal. The normative (epistemic) expectations associated with the various professions, institutions, and social practices have evolved under a wide variety of pressures. Among these pressures is the pressure to ensure good outcomes for those who participate. Insofar as the normative (epistemic) expectations of a given practice do not conduce to such outcomes, the expectations themselves would likely not survive the onslaught of social criticism they would receive. 
While those individuals who participate in the practice might well have their own evidence regarding the effectiveness of the normative (epistemic) expectations associated with the practice, neither the legitimacy of the practices themselves, nor the normative expectations they sanction, depend on that evidence. And so whenever the normative expectations associated with our socio-epistemic roles are epistemically virtuous, such that fulfilling them does, in fact, put one in a superior epistemic position on the matters at hand, failure to live up to these expectations (e.g. out of an insistence on remaining autonomous) will leave one less well-off, epistemically – whether or not one has evidence of this fact. In this respect the expectations themselves, despite being imposed on the autonomous subject, can be an epistemic good even for the subject herself, and an insistence on autonomy comes at the cost of this sort of goodness. Even after we agree that there are grounds to doubt that epistemic autonomy is an epistemic ideal, however, we might continue to wonder how to handle cases like GURU, PERSISTENT PROPAGANDA, PEER PRESSURE, and GASLIGHTING AND CORROSIVE SELF-DOUBT. Here I grant that the proponents of EA are right about one thing: a failure to conform in one’s beliefs to one’s total evidence is an epistemic flaw – one that undermines a subject’s right to be confident. I acknowledge this, and so I acknowledge that conforming in one’s beliefs to one’s own
total evidence is a necessary condition on having the right to confidence. What I reject is that it is sufficient. This is in keeping with what I would propose is the proper diagnosis of these cases. I submit that it is not the violation of epistemic autonomy per se, but rather the undermining of the subject’s competence at assessing her own evidence, that is the real culprit. The trouble lies in the subject’s losing confidence in (or otherwise coming to distrust) this competence, and from there to her ceding the burdens of judgment to another. Insofar as conforming in one’s beliefs to one’s evidence is a necessary condition on having the right to confidence, any subject who loses the ability to discern what her own evidence supports is in a situation that is deeply epistemically problematic. This is so even if we reject epistemic autonomy as an ideal.21
15.7 Conclusion I started this chapter wondering whether the doctrine asserting autonomy as an epistemic ideal can be vindicated by appeal to the claim that only autonomous subjects have the right to be confident. We now see that this strategy fails for an instructive reason: the right to be confident is a socio-epistemic standing, and the conditions on enjoying this right tell against autonomy as an epistemic ideal. My conclusion, then, is a hypothetical one: insofar as contemporary epistemology has a place for the notion of having the right to be confident, we have reason to repudiate autonomy as an epistemic ideal, in favor of a more thoroughly social epistemology.22
Notes 1 For an excellent overview of historical and contemporary discussions of what faculties ought to be included in those on which the individual ought to rely, see Graham (2006). 2 Such a view need not rule out a subject’s reliance on the say-so of others, but (i) it will place conditions on when such reliance is rational and (ii) even under such conditions it will regard exclusive reliance on one’s own faculties as epistemically better than reliance on another’s say-so. See below. 3 We might add: together with any materials that are already part of (or “in”) S’s cognitive system from the start. 4 This is a theme in E. Fricker (2006). 5 Even better, by the lights of EA, would be for S not to believe merely on another’s say-so, but to come instead to acquire the relevant non-testimonial evidence herself. 6 Other proponents of EA will want to defend it by appeal to the intrinsic goodness of autonomous epistemic agency itself. I will ignore this sort of defense here. I am focusing on those who hope to offer a distinctly epistemic defense of epistemic autonomy, whereas the above defense is a metaphysical defense. (I say this not to disparage that type of defense, but to justify my more restrictive focus.) 7 For excellent discussions, see Kate Abramson (2014), Alessandra Tanesini (2016), Nora Berenstain (2016), Rachel McKinnon (2019), and Lauren Leydon-Hardy (forthcoming).
8 With thanks to David Christensen for reminding me of this passage. 9 With thanks to Clayton Littlejohn for reminding me of this passage. 10 With thanks to Richard Baron, Dan Singer, Quee Nelson, and Frode Alfson Bjørdal for reminding me of this. 11 This sort of view is familiar in the work of Wilfrid Sellars and Robert Brandom, as well as Michael Williams (2008, 2015). Further, it would seem that so-called entitlement accounts in epistemology, familiar in Crispin Wright (2004) and Tyler Burge (1993, 2003), are ideological cousins of views that speak of a right to confidence (or to believe). 12 It will be noted that my formulation of the conditions on the right to confidence builds in something like a basing requirement (“when and because one’s total evidence warrants the attitude”). Some may think this is overkill: they will think of the notion of a right to confidence as reflecting the state of one’s evidence, not how one forms and sustains the belief. If so, this would make my argument easier to make, since in that case I could disregard matters of grounding altogether and focus only on the state of one’s evidence. In building in a basing requirement, I mean to be concessive to those who would prefer to restrict the ascription of a right to confidence to those cases in which the attitude was based on the relevant evidence. (Those who reject such a condition on the right to confidence can ignore the complications that arise when we build that condition in.) 13 For a discussion of the epistemology of these sorts of beliefs based on a failure to have certain evidence, see Goldberg (2011). 14 Compare the Movie Times case in Feldman (2003). 15 Does one need to add the falsity of the belief? Isn’t it sufficient that she neglected her professional responsibilities? I think this is a complicated question; see Goldberg (2018) for an extended discussion. 
16 See Goldberg (2017), where I discuss the basis for this sort of claim, and I characterize its relevance for epistemology. 17 For a very different route to a similar conclusion in cases like that of LAWYER, see Matheson (forthcoming). 18 For one such example that does not make use of my particular framework, see Gibbons (2006). 19 I should add: to say that EA fails to capture an epistemic ideal is not to say (and does not imply) that there is no place for epistemically autonomous subjects in an epistemically ideal community. On the contrary, as Allan Hazlett (2016) has noted, it may be ideal for a community to be such that all subjects should be autonomous some of the time, and perhaps even that some subjects should be autonomous all of the time. My point is only that EA cannot be a general ideal – cannot be an ideal for all subjects in a community at all times. 20 This should not be taken as implying that one can never disagree with the norms or standards of one’s profession; only that if one fails to follow them in a given case, one must have determinate evidence in support of one’s doing so. 21 QUESTION: if believing in accord with one’s total evidence is a necessary condition on having the right to confidence, then what can be said of those cases in which one lacks evidence one should have? If one’s total evidence E supports the belief that p, but the evidence one should have supports the belief that ~p, how are we to adjudicate? RESPONSE: This is a case of normative defeat, where the subject is not in a position to be confident in her belief – alternatively, she is not in a position to form a justified belief on the evidence she has. If she were to believe that ~p, she would fail to believe in conformity with her own total evidence, and so would not have the right to confidence. But if she were to believe that p, which is in conformity with the total evidence currently in her possession, her right would be normatively
defeated by the evidence she should have had. So: either way, she is not in a position to enjoy the right to be confident in her belief. This is the risk one runs when one fails to inquire as one should (given one’s roles etc.). 22 I thank Kirk Lougheed and Jon Matheson for very helpful comments on an earlier draft of this chapter.
References Abramson, K. (2014). Turning up the lights on gaslighting. Philosophical Perspectives, 28, 1–30. Ayer, A. J. (1961). The problem of knowledge. New York: Penguin Books. Ayer, A. J. (2020). The right to be sure. In Arguing about knowledge (pp. 11–13). New York: Routledge. Berenstain, N. (2016). Epistemic exploitation. Ergo, 3(22), 569–590. Burge, T. (2003). Perceptual entitlement. Philosophy and Phenomenological Research, 67(3), 503–548. Burge, T. (1993). Content preservation. The Philosophical Review, 102(4), 457–488. Clifford, W. K. (1877/1901). The ethics of belief. Readings in the Philosophy of Religion, 246. Feldman, R. (2003). Epistemology. NJ: Pearson. Fricker, E. (2006). Testimony and epistemic autonomy. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony (pp. 225–253). Oxford: Oxford University Press. Gibbons, J. (2006). Access externalism. Mind, 115(457), 19–39. Goldberg, S. (2021). On the epistemic significance of practical reasons to inquire. Synthese. Goldberg, S. (2018). To the best of our knowledge: Social expectations and epistemic normativity. Oxford: Oxford University Press. Goldberg, S. (2017). Should have known. Synthese, 194(8), 2863–2894. Goldberg, S. (2014). Interpersonal epistemic entitlements. Philosophical Issues, 24(1), 159–183. Goldberg, S. (2011). “If that were true I would have heard about it by now.” In A. Goldman & D. Whitcomb (Eds.), Social epistemology: Essential readings (pp. 92–108). Oxford: Oxford University Press. Graham, P. (2006). Liberal fundamentalism and its rivals. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony (pp. 93–115). Oxford: Oxford University Press. Hazlett, A. (2016). The social value of non-deferential belief. Australasian Journal of Philosophy, 94(1), 131–151. Leydon-Hardy, L. (Forthcoming). Predatory grooming and epistemic infringement. In J. Lackey (Ed.), Applied epistemology. Oxford: Oxford University Press. Matheson, J. (2021). Robust justification. In K. McCain & S. 
Stapleford (Eds.), Epistemic duties: New arguments, new angles (pp. 146–160). New York: Routledge. McKinnon, R. (2019). Gaslighting as epistemic violence. In B. Sherman & S. Goguen (Eds.), Overcoming epistemic injustice: Social and psychological perspectives (pp. 285–302). New York: Rowman & Littlefield.
Mill, J. S. (2015). On liberty, utilitarianism, and other essays. New York: Oxford University Press. Tanesini, A. (2016). ‘Calm down, dear’: Intellectual arrogance, silencing and ignorance. Aristotelian Society Supplementary Volume, 90(1), 71–92. Williams, M. (2015). What’s so special about human knowledge? Episteme, 12(2), 249–268. Williams, M. (2008). Responsibility and reliability. Philosophical Papers, 37(1), 1–26. Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press. Wright, C. (2004). On epistemic entitlement. Proceedings of the Aristotelian Society, 78, 167–245.
16 We Owe It to Others to Think for Ourselves Finnur Dellsén
16.1 The Puzzle of Epistemic Autonomy There are many questions that I am in no better position to answer than you are. I know that. And yet for many of those questions, I will not defer to your judgment. Doing so would certainly save time. Moreover, in many cases I’d be just as likely to get things right by listening to you rather than trying to figure out the answer for myself, since I know full well that you are at least as reliable regarding those topics as I am – if not more so. Indeed, there are relatively few topics on which there wouldn’t be someone I could listen to whose judgment I take to be at least as reliable as my own. There are some exceptions, of course. For example, I know more than anyone else about most events in my own personal history, and I also arguably know more about some of my own mental states than anyone else. But for most other types of facts – most public facts, as we might call them – there will be someone I could consult whom I consider no less reliable than myself. So why do I bother making up my own mind about public facts at all? Why do any of us bother? The answer cannot be that it is worth the effort in terms of reliably forming true beliefs since ex hypothesi I am no more likely – and often less likely – to form correct beliefs by going it alone than by relying on others. Thus, in so far as I am thinking only of my own beliefs about public facts, and only of the likely truth-value of those beliefs, I apparently have no reason to figure things out for myself. And yet isn’t there something objectionable about relying to such an extent on other people’s say-so? Various Enlightenment thinkers, including Descartes (1985/1628), Locke (1975/1689) and Kant (1991/1784), appear to have thought so – an idea that is encoded in the widely-cited informal “fallacy” known as “appeal to authority.” Let us call the tension between these two thoughts the Puzzle of Epistemic Autonomy. 
The puzzle, in short, is how to explain the value of critically evaluating claims for yourself in a world where there is almost always someone out there whose opinion is at least as likely to be correct
as the opinion you would form on your own. Professional philosophers should take special interest in solving this puzzle, since their role in the educational system is often thought to consist largely in cultivating epistemic autonomy in their students (e.g., Nussbaum 2017). Indeed, an entire genre of philosophical education – critical thinking – is quite explicitly designed to do precisely that. For example, a popular critical thinking textbook announces in the preface that “[c]itizens who think for themselves, rather than uncritically ingesting what their leaders and others with power tell them, are the absolutely necessary ingredient of a society that is to remain free” (Cavender and Kahane 2009, p. xiv). In previous work (Dellsén 2020b), I have begun to develop what I would now characterize as a qualified, altruistic solution to this puzzle: We should be epistemically autonomous because, and to the extent that, it makes the consensus positions of experts more reliable, which in turn benefits the community as a whole by enabling laypeople to rely on such expert consensuses. This solution is “qualified” because it does not imply that epistemic autonomy is always valuable; rather, it is valuable only in cases where one is among those who might be consulted on the relevant issue. And this solution is “altruistic” in the sense that the value of someone being epistemically autonomous is not taken to consist exclusively in the way it affects their own epistemic situation. On the contrary, it is my contention that any non-altruistic – i.e., egoistic – explanation for why we should think for ourselves will be unable to capture what’s distinctively valuable about epistemic autonomy. In this chapter, my main aim is to scrutinize two egoistic solutions to the Puzzle of Epistemic Autonomy. 
These solutions are importantly different in that one of them implies that epistemic autonomy is directly valuable for the autonomous agent, while the other implies that epistemic autonomy is valuable to the agent only indirectly, through positively affecting the community of which the agent is a part. To mark this difference, I distinguish between directly egoistic and indirectly egoistic solutions to the Puzzle of Epistemic Autonomy, and then distinguish those from altruistic solutions, in Section 16.2. I then examine, in Section 16.3, a directly egoistic solution that appeals to the value of understanding, arguing that this fails to solve the Puzzle of Epistemic Autonomy. In Section 16.4, I similarly argue that an indirectly egoistic solution, which appeals to the apparent value of disagreement, fails to solve the problem as well. In Section 16.5, I conclude by outlining my own altruistic solution to the puzzle.
16.2 Direct Egoism, Indirect Egoism, and Altruism What are the possible solutions to the Puzzle of Epistemic Autonomy? Let me start by restating the puzzle somewhat more precisely:
The Puzzle of Epistemic Autonomy: In situations where an agent S could effortlessly access opinions on a particular set of propositions {Pi} that she herself takes to be at least as reliable as her own, what epistemic reasons (if any) are there for S to critically evaluate {Pi} for herself rather than adopting these opinions as her own? One type of response to this puzzle is to reject that it has any solution. On this view, there are never any epistemic reasons to critically evaluate something when one has easy access to equally or more reliable opinions on that issue. Although defending such a response arguably puts a professional philosopher in an uncomfortable position, at least a handful of philosophers – Foley (2001), Huemer (2005), Zagzebski (2007), and Constantin and Grundmann (2018) – have defended positions of this kind. For example, Zagzebski (2007) argues that truth-seeking agents who prefer reaching their own conclusion as opposed to deferring to equally or more trustworthy peers would be incoherent since they must implicitly view themselves as more trustworthy even if they have no reason to do so.1 A somewhat more moderate position is defended by Huemer (2005, pp. 524–525), who argues that critical thinking is epistemically irresponsible because it is simply less reliable than alternative methods for belief-formation, including appealing to experts.2 While these responses are admirably brave in carrying a particular type of argument to its apparent logical conclusion, they leave a huge explanatory gap: If epistemic autonomy really is incoherent or irresponsible, why have we evolved – biologically as well as culturally – to think things through for ourselves, and to view it as admirable in others? Thus, a more satisfying response would attempt to solve the Puzzle of Epistemic Autonomy rather than dismiss it in one of the aforementioned ways. 
In particular, a solution to the Puzzle of Epistemic Autonomy, with regard to a particular set of propositions {Pi}, would describe the epistemic reasons there are for S to critically evaluate {Pi} in situations of this kind. On one way of carving up logical space, there are three types of reasons to which such solutions might appeal: Directly egoistic reasons: S has a directly egoistic reason to critically evaluate {Pi} for themselves if and only if doing so (partially) constitutes S’s being in a superior epistemic state with regard to {Pi}. Indirectly egoistic reasons: S has an indirectly egoistic reason to critically evaluate {Pi} for themselves if and only if doing so (partially) causes S to be in a superior epistemic state with regard to {Pi}. Altruistic reasons: S has an altruistic reason to critically evaluate {Pi} for themselves if and only if doing so (partially) causes other agents to be in a superior epistemic state with regard to {Pi}.3
Here, a “superior epistemic state” is any kind of state that carries more epistemic value – broadly defined – than another. What types of states have epistemic value is to be determined by the solution in question, but plausible candidates include truth, accuracy, reliability, justification, knowledge, and understanding. In fact, one of the two proposals I examine below is that an agent has epistemic reasons to think for themselves because, and in so far as, doing so is partly constitutive of understanding. The other proposal I will consider is that an agent’s thinking for herself partially causes disagreements to arise in her epistemic community, which in turn is conducive to the agent’s search for truth. These two proposals exemplify solutions to the Puzzle of Epistemic Autonomy that appeal, respectively, to directly and indirectly egoistic reasons for being epistemically autonomous. As noted, I will argue that neither solution is successful. However, since these solutions clearly do not exhaust the logical space of (directly or indirectly) egoistic reasons for epistemic autonomy, it may well be possible to develop an entirely egoistic solution to the Puzzle of Epistemic Autonomy for all that I have to say below. What my discussion does suggest, however, is that developing a plausible egoistic solution is more difficult than it may seem at first blush. And that, in turn, lends credibility to altruistic solutions to the Puzzle of Epistemic Autonomy, i.e., that an agent has epistemic reasons to think for themselves because, and in so far as, doing so partially causes other agents to be in superior epistemic states. It should be said that one obvious type of reason for exercising epistemic autonomy with regard to some set of propositions {Pi} is to facilitate the development of reasoning skills that one can then apply in other cases, i.e., with regard to another set of propositions {Qi}. 
For example, a philosophy graduate student narrowly interested in metaphysics might have reason to think critically about normative ethics purely because they hope to develop general philosophical skills that they can later apply in their research in metaphysics. In such cases, the value of being epistemically autonomous regarding {Pi} is to become better at being epistemically autonomous regarding {Qi}. Clearly, however, this type of consideration in favor of epistemic autonomy only pushes the problem one step back, since we must now figure out what is valuable about being epistemically autonomous regarding {Qi}.
16.3 Understanding and Direct Egoism Several epistemologists have floated the idea that understanding, in contrast to more familiar epistemic states like knowledge and true belief, cannot be transmitted via testimony. The thought is, roughly, that you can inherit other people’s knowledge or true belief that P through being told that P, but you cannot in the same way inherit other people’s understanding. According to Zagzebski (2008, p. 146), “understanding cannot
310 Finnur Dellsén be given to another person at all except in the indirect sense that a good teacher can sometimes recreate the conditions that produce understanding in hopes that the student will acquire it also.” Zagzebski’s view is fairly typical among early pioneers of understanding in epistemology (see, e.g., Hills 2009; Pritchard 2010). At least at first blush, it is also plausible, for it does seem that truly understanding any relatively complex phenomenon (e.g., the spread of an infectious disease like C OVID-19), involves a distinct cognitive effort that is not required for simply believing or knowing what someone tells you.4 Supposing that Zagzebski’s no-understanding-through-testimony view is correct, this might seem to undergird a solution to the Puzzle of Epistemic Autonomy. For if understanding cannot be transmitted via testimony, then agents must achieve understanding for themselves if they are to achieve it at all. It seems to follow that understanding requires agents to think for themselves – that they must themselves evaluate the propositions on the basis of which they understand. If understanding is distinctively epistemically valuable in some way – a very common sentiment that is both intuitive (Pritchard 2009; Elgin 2017) and seems to lie at the heart of our conception of scientific progress (Dellsén 2016; Potochnik 2017; see also Bird 2007, p. 84) – it follows that agents have a direct egoistic reason to exercise epistemic autonomy. In slogan form, we should think for ourselves because we can only understand for ourselves. The trouble with this suggestion, however, emerges when we start to specify what exactly it is about understanding that would make it impossible to transmit it directly via testimony. Suppose I am trying to understand the rapid spread of COVID-19 in the spring of 2020. 
My potential understanding is based on several bits of information, including not only particular facts like its basic reproduction number R0 at different times and locations, but also more general claims like the SIR model for the spread of infectious diseases. All these bits of information can clearly be transmitted via testimony; otherwise, you and I would not know about them. However, once I have obtained this information, I do not thereby understand the spread of COVID-19. Roughly speaking, this is because I may not “see” how these bits of information fit together, e.g., how R0 depends on assumptions about the number of susceptibles (S), infected (I), and recovered (R) in the relevant population. What would be missing in such a case is what philosophers have come to call “grasping” (e.g., Kvanvig, 2003; Grimm, 2006; Khalifa, 2013; Strevens, 2013). Grasping is thus the psychological component of understanding that goes beyond merely representing various bits of information and involves somehow seeing these as a coherent whole. My own preferred view of grasping is that it consists in using these bits of information to construct a particular kind of model of the understood phenomenon – a model that represents whether and how each aspect of the phenomenon depends (e.g., causally or constitutively) on other
We Owe it to Others 311 aspects of the phenomenon (Dellsén 2020a). Others construe grasping as involving a particular kind of ability (Wilkenfeld, 2013), intelligibility (de Regt, 2017), cognitive control (Hills, 2016), or distinctive phenomenology (Bourget, 2017). On any of these accounts of grasping, it involves something that one arguably needs to achieve for oneself once one has obtained the various bits of relevant information.5 The key question, then, is whether epistemic autonomy is essential for grasping. This might seem to be so at first blush since grasping is something one needs to do for oneself and epistemic autonomy is naturally described as “thinking for oneself.” This argument would be too quick, however. Epistemic autonomy concerns not some vague and general idea that one must use one’s own mental faculties; after all, consulting other people also requires one to use mental faculties. Rather, epistemic autonomy consists in critically evaluating particular propositions as (probably) true or false, and making up one’s mind on that basis. (Hence the “epistemic” in “epistemic autonomy.”) So if grasping constitutively involved being epistemically autonomous, grasping would have to be partly constituted by this process of making up one’s mind on the basis of a critical evaluation of the relevant propositions. However, none of the extant accounts of grasping takes it to involve critical evaluation of this kind. For example, having cognitive control of a representation (Hills), or being in a certain phenomenological state (Bourget), does not require agents to critically evaluate the relevant propositions. Nor is there any pre-theoretical reason to think that grasping requires critical evaluation. To see this clearly, consider again my understanding of the spread of COVID-19 in the spring of 2020.
I indicated earlier that a (high degree of) understanding would require “seeing” (i.e., grasping) how various bits of information hang together – e.g., how R0 depends on assumptions (in an SIR-model) about the number of susceptibles (S), infected (I), and recovered (R) in the relevant population. This does not require epistemic autonomy in the relevant sense, since one can grasp such connections without critically evaluating anything. For example, I could come to grasp this connection by being taught how to derive R0 from S, I, and R (and other assumptions), without in any way critically evaluating my teacher’s instructions. Indeed, I could be so dependent on my teacher that there is absolutely no chance whatsoever that my grasp differs in any way from my teacher’s. Hence, I could come to understand the spread of COVID-19 without in any way making up my mind on the basis of a critical evaluation, i.e., without exercising epistemic autonomy. I conclude that, since grasping does not require critical evaluation, agents do not need to exercise epistemic autonomy in order to achieve understanding. To be sure, grasping, and thus understanding, does require agents to engage in “thinking” in the broadest sense of the term, but it does not require agents to engage in critical epistemic evaluations. Hence, it does not require epistemic autonomy in the sense that the
Puzzle of Epistemic Autonomy is concerned with. So the puzzle remains: What exactly is the value of critically evaluating various propositions for oneself (as opposed to, e.g., merely organizing these propositions in an understanding-constituting way), when there are other people available who are in at least as good a position to perform that evaluation as you are yourself?
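The kind of connection said to be grasped in the COVID-19 example above – how a disease’s spread hinges on the stock of susceptibles in the population – can be made concrete with a toy calculation. The sketch below uses the standard relation between the basic and effective reproduction numbers in the simple SIR model (R_eff = R0 · S/N); the parameter values are purely illustrative assumptions, not estimates for COVID-19.

```python
# Illustrative sketch only: the dependence of disease spread on the
# susceptible pool in the basic SIR model. Parameter values are made up.

def effective_r(r0: float, susceptible: float, population: float) -> float:
    """R_eff = R0 * S/N: each infection yields fewer secondary cases
    once part of the population is no longer susceptible."""
    return r0 * susceptible / population

N = 1_000_000
r0 = 2.5  # illustrative basic reproduction number, not a COVID-19 estimate

# Early in an outbreak nearly everyone is susceptible...
print(effective_r(r0, susceptible=990_000, population=N))  # 2.475
# ...but once 70% have been infected or recovered, spread slows below 1.
print(effective_r(r0, susceptible=300_000, population=N))  # 0.75
```

Grasping, in the sense discussed above, is a matter of seeing why the second number must be lower than the first – something one can be taught, as the derivation shows, without critically evaluating anything.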
16.4 Disagreement and Indirect Egoism

The other “egoistic” solution to the Puzzle of Epistemic Autonomy that I will consider is inspired by John Stuart Mill’s defense of free speech. In On Liberty (1859), Mill famously argued that it is an essential part of rational inquiry to subject one’s positions to counterarguments from those with whom one disagrees (Mill, 1956/1859, p. 45). Richard Foley interprets Mill as arguing that “disagreements encourage further evidence gathering and, thus, are in the long-term conducive to the search for truth” (Foley, 2001, p. 124).6 A similar thesis is advanced by Helen De Cruz and Johan De Smedt (2013), albeit without reference to Mill. Based on a case study concerning the taxonomic status of a recently discovered early human species called Homo floresiensis, De Cruz and De Smedt argue that scientific disagreement “is valuable because it brings about an increase in relevant evidence, a re-evaluation of existing evidence and assumptions, and a decrease in confirmation bias” (De Cruz and De Smedt, 2013, p. 176). Presumably, these effects are themselves valuable because, and in so far as, they are conducive to the search for truth (as per Foley’s suggestion). The Millian point, then, is that disagreements tend to increase the reliability of individual experts in the long term (e.g., in virtue of encouraging them to gather more evidence than they otherwise would). If correct, this point would seem to show that epistemic autonomy is valuable, other things being equal, since disagreements would seemingly only arise amongst epistemically autonomous agents. Indeed, Foley immediately adds to the passage quoted above: “A corollary of this thesis [that diversity of opinion is in general preferable to unanimity] is that anything that discourages disagreements is potentially dangerous” (Foley, 2001, p. 124).
The idea, then, is that an agent S has an epistemic reason to critically evaluate {Pi} because, and in so far as, doing so increases disagreement or dissent within the community, which in turn would increase S’s chances of getting at the truth regarding {Pi} in the long run. This would be an indirectly egoistic reason to be epistemically autonomous since critically evaluating {Pi} would partially cause, rather than constitute, S’s being in a superior epistemic state with regard to {Pi}, where the “superior epistemic state” in this case consists in increased reliability regarding {Pi}. Note that this “Millian” solution to the Puzzle of Epistemic Autonomy consists of two distinct claims. The first claim is that epistemic autonomy
increases disagreement within a community. I take this to be a causal claim to the effect that epistemic autonomy either causes disagreement to arise, or causes there to be more disagreement than there would otherwise be. The second claim is that increasing disagreement within a community is conducive to each individual’s search for truth. Again, I take this to be a causal claim to the effect that increasing disagreement causes the disagreeing agents to form true beliefs more reliably. So there are two distinct causal claims contained within this solution to the Puzzle of Epistemic Autonomy. Let us consider these in turn. The first claim holds that epistemic autonomy regarding some set of propositions {Pi} increases the extent to which the community disagrees about {Pi}. There is a grain of truth in this. If a community is such that none of its members exhibit any epistemic autonomy at all, then there can be no variation in what the community takes to be true, and thus no disagreement. After all, a completely non-autonomous agent’s views will be fully determined by the views of other members of their community, so that any two (or more) members of the same community will necessarily accept exactly the same things. Hence it is true that exhibiting some degree of epistemic autonomy is a precondition for disagreements to arise in the first place. However, in order to fully solve the Puzzle of Epistemic Autonomy we would need to account for the value not just of some agents exhibiting some autonomy but also of greater degrees of autonomy exhibited by a greater number of agents. Since the current solution connects the value of epistemic autonomy with the putative value of community disagreement, there would thus have to be some more general connection between the extent to which agents exhibit epistemic autonomy and the extent to which they disagree.
However, it is not hard to see that there is no such general connection between autonomy and disagreement. Suppose that a community of n individuals contains just two agents, X1 and X2, that are fully epistemically autonomous with regard to some particular proposition Pi in {Pi}, and that all other agents in the group, X3, …, Xn, are fully non-autonomous with regard to Pi.7 Specifically, suppose that half of the non-autonomous agents – X3, X5, … – automatically accept what X1 accepts regarding Pi, and that the remaining agents – X4, X6, … – automatically accept what X2 accepts regarding Pi. In this sense, X1 and X2 can be said to be authorities on Pi for X3, X5, … and X4, X6, … respectively. Now suppose that X1 and X2 disagree about Pi, i.e., that X1 accepts Pi while X2 rejects Pi. (This is possible since X1 and X2 are epistemically autonomous with respect to each other.) In that case, it follows from our assumptions that exactly half of the community will accept Pi while the other half will reject Pi. This, I take it, amounts to a maximal amount of disagreement on Pi in the community. And yet, since only two of the community’s n members are epistemically autonomous,
the epistemic autonomy among members of the group is clearly not maximal (especially when n >> 3). The point of this admittedly contrived example is that, since maximal disagreement does not require maximal autonomy, there is no general connection between autonomy and disagreement of the sort that would have to hold for the “Millian” solution to explain the value of expert autonomy. There are other, perhaps more realistic, cases that illustrate the same point as well: Suppose, for example, that a group of n agents is divided between two groups of unequal size: a larger subgroup of those who accept Pi, X1, …, Xk, and a smaller subgroup of those who reject Pi, Xk+1, …, Xn.8 Suppose further that all of the Pi-accepting agents are fully autonomous while some of the Pi-rejecting agents are fully non-autonomous. Now let’s consider what happens if one of the fully non-autonomous Pi-rejecting agents becomes fully autonomous. Whether such an agent switches from rejecting to accepting Pi will, of course, depend on how she herself evaluates Pi. Thus, it is entirely possible that she will switch – indeed, she is presumably rather likely to do so in our case, given that a majority of those of her peers who evaluated Pi themselves accept Pi. Now, if this agent does indeed switch to accepting Pi, the majority of Pi-accepters will become larger than it was before, thus decreasing the disagreement in the group. Furthermore, we may suppose that the same process occurs for all the other non-autonomous agents who reject Pi as well. By the same token as before, it is presumably very likely that at least a majority of them switches to accepting Pi, such that the disagreement decreases even further. The upshot, then, is that it is not only possible, but indeed quite likely, that making non-autonomous agents autonomous would significantly decrease the community disagreement about Pi.
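The arithmetic of both examples can be made explicit in a short sketch. The odd/even indexing of followers, the group sizes, and the stipulation that a newly autonomous agent simply sides with the prior majority of autonomous evaluators are all my illustrative assumptions, matching the stipulations above rather than adding to them.

```python
# Example 1: a community of 100 where only X1 and X2 evaluate Pi themselves;
# odd-indexed followers copy X1 (who accepts Pi), even-indexed copy X2
# (who rejects Pi). Disagreement is maximal while autonomy is minimal.
def follower_views(n, x1_accepts, x2_accepts):
    views = [x1_accepts, x2_accepts]  # the two autonomous agents
    views += [x1_accepts if i % 2 == 1 else x2_accepts for i in range(3, n + 1)]
    return views

views = follower_views(100, True, False)
print(sum(views), len(views) - sum(views))  # 50 50: a 50/50 split

# Example 2: 70 autonomous accepters, 30 non-autonomous rejecters. If each
# rejecter, upon becoming autonomous, sides with the majority of prior
# autonomous evaluators (an illustrative stipulation), disagreement shrinks.
accepters, rejecters = 70, 30
for _ in range(30):  # each non-autonomous rejecter becomes autonomous
    if accepters > rejecters:  # sides with the majority view
        accepters, rejecters = accepters + 1, rejecters - 1
print(accepters, rejecters)  # 100 0: disagreement has vanished
```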
This illustrates once again that there is no general connection between epistemic autonomy and disagreement such that increasing autonomy translates into increased disagreement (or even increased likelihood of increased disagreement). Thus, while it is true that epistemic disagreement cannot arise in communities that exhibit no epistemic autonomy at all, the “Millian” solution to the Puzzle of Epistemic Autonomy fails in so far as it requires there to be a considerably stronger connection between epistemic autonomy and disagreement. Moving on to the other part of the “Millian” solution to the Puzzle of Epistemic Autonomy: the second claim holds that community disagreement is conducive to reliably obtaining true beliefs. There is a grain of truth in this claim as well. It is certainly possible for disagreement to cause or enable a more efficient or more complete discovery of the truth, e.g., through causing agents to gather more evidence than they otherwise would. That much is demonstrated by De Cruz and De Smedt’s (2013) case study on Homo floresiensis, since “actually” entails “possibly.” But in order for the “Millian” solution to fully account for the value of epistemic autonomy,
it is not enough that it is merely possible for disagreement to have this effect, since the puzzle is not to account for the fact that epistemic autonomy can be epistemically valuable. Rather, if the “Millian” solution is to really solve the Puzzle of Epistemic Autonomy, it must hold quite generally that disagreements are conducive to the search for truth in the long term. It is this general claim that I dispute.9 As I illustrate below, disagreements can stifle the search for truth by focusing intellectual and material resources on attempting to settle disputes that are irresolvable or infertile. In these cases, the search for truth would be better served by exploring an entirely different theoretical alternative that sidesteps or synthesizes the previous alternatives on offer. Specifically, the disagreement prevents such theoretical alternatives from emerging or being seriously considered, due to the relevant community’s intellectual resources being too focused on settling the dispute between extant alternatives. So while disagreement on a given issue tends to focus cognitive resources on that issue, it is implausible that doing so will generally be conducive to the search for truth. Let me briefly illustrate with an historical case of disagreement among scientific experts.10 In the late nineteenth century, vision studies in Germany were dominated by a wide-ranging dispute between two opposing schools of thought, led by Hermann von Helmholtz and Ewald Hering, respectively. The different theories of human vision endorsed by Helmholtz and Hering have been described as “empiricist” and “nativist,” respectively, since they differed, inter alia, with respect to the extent to which visual perception of space was taken to be acquired (Helmholtz) or inborn (Hering). The two schools disagreed on other less fundamental issues as well – e.g., on whether there are three (Helmholtz) or four (Hering) distinct kinds of color receptors in the human eye.
Importantly for our purposes, the disagreement between Helmholtz and Hering (and their respective followers) had profound effects on the entire field of vision studies in Germany until well into the 1920s. Both sides spent considerable resources on trying to convince those on the other side, to little effect. As historian Steven Turner documents (1993, pp. 90–93), many scientists who were outsiders to the debate felt from the very beginning that the issue had been unhelpfully polarized. Some of these scientists proposed compromises between Helmholtz’s and Hering’s theories, but these hybrid theories were not seriously considered by proponents of either school of thought, and so did not gain widespread acceptance. In the end, the dispute between Helmholtz’s and Hering’s theories was not so much resolved as it was dissolved when Hering himself and his most enthusiastic followers came to the end of their careers in and around the 1920s. Looking back, Turner (1994, pp. 276–280) suggests that the dispute can be seen as a cautionary tale of the dangers of intense polarization in
scientific research. Although it is certainly hard to know what sort of theoretical progress could have been made in this period had the efforts of German vision scientists not been so focused on this particular disagreement, it seems plausible that the Helmholtz–Hering dispute hindered rather than helped the search for truth in the long term. Of course, this example does not prove any general thesis to the effect that disagreement necessarily stifles the search for truth. What it does is illustrate the point that there does not appear to be any general connection in virtue of which disagreement is conducive to the search for truth. Spending time and resources on finding more evidence to settle some dispute is not always truth-conducive, since the search for truth is sometimes best served by moving on to explore other alternatives (including gathering more evidence relevant to these other alternatives). As I have noted, however, only a general connection between disagreement and truth-conduciveness would support the “Millian” solution to the Puzzle of Epistemic Autonomy. After all, the solution is meant to apply generally whenever epistemic autonomy is valuable, not just in cases when disagreement contingently happens to be conducive to the search for truth. I conclude, therefore, that the second causal claim of the “Millian” solution is as problematic as the first. Hence we must look elsewhere for a solution to the Puzzle of Epistemic Autonomy.
16.5 Consensus and Altruism

I favor a solution to the Puzzle of Epistemic Autonomy that, in contrast to the two aforementioned solutions, appeals to altruistic rather than egoistic reasons for exercising epistemic autonomy. The idea, in short, is that critically evaluating a claim will benefit other members of one’s epistemic community in virtue of increasing the reliability of consensus positions on that claim. This is epistemically valuable because, and in so far as, these other members of one’s community will inevitably have to rely on such consensus positions in their (non-autonomous) assessment of the relevant claim. Thus the idea here is not that you should be epistemically autonomous because and in so far as it (directly or indirectly) brings you some epistemic benefit; rather, you should be epistemically autonomous because and in so far as it benefits other members of your epistemic community. A key assumption behind this solution is that an epistemic community inevitably exhibits a “division of cognitive labor” (Kitcher, 1990). Some of the community’s members will be considered to be more reliable with regard to certain topics, while others will be considered more reliable with regard to other topics. These apparently more reliable individuals are thus considered to be “experts” with respect to each particular topic. Happily, more often than not, these “experts” are indeed more reliable than the average person within their domain of alleged expertise. Of course, sometimes
it is quite difficult to figure out who are the most reliable experts on a particular topic, especially when experts disagree with one another. There are many interesting and difficult questions about what agents should do, epistemically speaking, in situations of that sort.11 However, it is rarely particularly hard to carve out a broad class of individuals who are almost certainly more reliable than the average person on a particular topic. For example, in the case of academic topics, those who are employed to do research on a particular topic at universities will generally be much more reliable than the average person on that topic. Mutatis mutandis for topics that fall under the expertise of other professions. So with respect to a particular topic – conceived of here as a set of propositions {Pi} – an epistemic community can without too much idealization be divided into (a) those that are, and are considered to be, experts on {Pi}, and (b) those who are not, and aren’t considered to be, experts on {Pi} – i.e., those who are “laypeople” with respect to {Pi}. In an epistemically well-ordered community, the {Pi}-laypeople will form their beliefs based on the testimony of the {Pi}-experts. Note that this is not an elitist or hierarchical system where some people are elevated to the status of epistemic overlords and other people are demoted into an epistemic underclass, since the {Pi}-experts will inevitably be laypeople with respect to other topics {Qi}, on which at least some of the {Pi}-laypeople will be experts.
Indeed, we are all experts with respect to some topics and laypeople with respect to others; no one is simply an expert or a layperson tout court.12 Now, if you are a layperson with respect to a particular topic {Pi} seeking to appeal to the opinions of {Pi}-experts on that topic, there are roughly three types of situations you might be faced with: (i) Ideally, the {Pi}-experts will all or overwhelmingly agree on the truth-value of the relevant proposition Pi. Consider, for example, the 97% agreement on anthropogenic global warming (AGW), the idea that the recent rise of average surface temperatures on Earth is partially caused by human activity. Now, if the {Pi}-experts don’t overwhelmingly agree on the truth-value of Pi, it must be because they either (ii) substantially disagree on Pi, or because (iii) they have yet to come to an opinion on Pi. In either of these latter cases, the opinions of {Pi}-experts will not be of much use to the layperson, who (unlike the {Pi}-experts themselves) normally won’t be in a position to make fine-grained distinctions in the reliability of different experts. So, again without too much idealization, we can focus on the happier case where the {Pi}-experts overwhelmingly agree about Pi, i.e., situations of type (i). Why and when should a {Pi}-layperson believe the consensus position among {Pi}-experts in situations of this sort? What makes such a consensus position a reliable guide to the truth? In previous work (Dellsén, 2020b), I gave a partial answer to this question based on a mathematical fact regarding probabilistic dependence between propositions. The basic
idea is this: From the layperson’s point of view, experts that are more epistemically autonomous exhibit a greater degree of probabilistic independence in their assessments of a proposition Pi. Given plausible ceteris paribus conditions (for details, see Dellsén, 2020b, p. 354), this implies that, all other things being equal, the consensus position of more epistemically autonomous experts will be more likely to be correct than an otherwise identical consensus position held by less autonomous experts. In this sense, epistemic autonomy among experts enhances the extent to which they can be relied upon to deliver correct consensus verdicts to laypeople. Let me illustrate how this works more concretely by returning to the case of the 97% consensus on AGW. Why is this consensus trustworthy, even from the point of view of someone who has reviewed none of the scientific evidence for AGW and is in no position to evaluate the theory on the basis of that evidence? Well, suppose – counterfactually! – that AGW was initially proposed by a small group of powerful scientists whose influence on the various subfields of climate science was so immense that all other climate scientists accepted AGW entirely on the basis of their say-so. In this counterfactual scenario, the 97% consensus on AGW would provide laypeople with no more reason to believe it than if only the original group had announced its acceptance of the theory. After all, in such a scenario, the other thousands of climate scientists would have accepted AGW even if the original group had made a huge mistake in their evaluation of the theory (or was part of some nefarious conspiracy) and the actual scientific evidence for it was weak or nonexistent. So in this type of case, the additional thousands of scientists who adopt the same position on AGW as the original group would provide no additional reason to believe AGW whatsoever. Fortunately, this is not the actual situation with regard to AGW.
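The probabilistic contrast between the two scenarios can be illustrated with a Condorcet-style toy calculation. The 70% reliability figure and the simple majority-vote aggregation below are my illustrative assumptions, not the formal details of the argument in Dellsén, 2020b.

```python
# Illustrative sketch: independent vs. fully dependent expert "consensus".
# Each autonomous expert is assumed to be right with probability 0.7,
# independently of the others; dependent experts simply copy one leader
# of the same reliability.
from math import comb

def p_majority_correct(n_experts: int, p: float) -> float:
    """Probability that a strict majority of n independent experts,
    each correct with probability p, gets the right answer."""
    k_min = n_experts // 2 + 1
    return sum(comb(n_experts, k) * p**k * (1 - p)**(n_experts - k)
               for k in range(k_min, n_experts + 1))

# 99 independent experts, each only 70% reliable: the majority verdict
# is right with probability very close to 1.
print(p_majority_correct(99, 0.7))
# 99 experts who all defer to one leader: the "consensus" is right exactly
# as often as the leader is, i.e. with probability 0.7.
print(p_majority_correct(1, 0.7))
```

The point is not the specific numbers but the structure: probabilistic independence among the experts, which epistemic autonomy underwrites, is what makes the size of a consensus evidentially significant.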
There is no group of purportedly infallible climate scientists whose say-so is automatically and uncritically channeled by other climate scientists. Rather, the climate science community is much closer to one in which each scientist critically evaluates AGW for themselves and reaches a conclusion (sometimes on the basis of the same evidence as their peers, but more often on the basis of different evidence and even different types of evidence). Of course, no actual epistemic community is such that every single one of its members is completely autonomous with respect to any proposition. But a large part of the reason why we ought to trust the climate science community’s evaluation of AGW is that its thousands of members are sufficiently epistemically autonomous for it to be extremely unlikely that the community would reach a consensus position on AGW unless it was true. So, to summarize, epistemic autonomy is valuable because, and in so far as, it leads experts on a particular topic to be reliable in their consensus positions, which is in turn valuable because, and in so far as, laypeople
We Owe it to Others 319 with respect to the topic rely on such expert consensus positions to reliably form true beliefs. Now, this is by no means an unqualified defense of epistemic autonomy for all agents in all circumstances, since it does not apply to agents in so far as they consider themselves to be laypeople rather than experts with regard to a particular set of propositions {Pi}. (Indeed, this solution presupposes that at least some of these {Pi}-laypeople don’t exercise epistemic autonomy with regard to {Pi} – since if they would, there would be no point to having a reliable expert consensus on {Pi}.) Rather, the solution to the Puzzle of Epistemic Autonomy that I have sketched here applies specifically to those who serve as {Pi}-“experts” in our community: If you are someone who is consulted as an expert on {Pi}, you should exercise epistemic autonomy with respect to {Pi} since that makes other people better able to rely (non-autonomously) on the consensus position reached by you and your fellow {Pi}-experts. In my view, this is a plausible and desirable restriction to epistemic autonomy. A solution to the Puzzle of Epistemic Autonomy ought not entail that all agents should be epistemically autonomous with respect to all topics. Such a solution would “prove too much.” Instead, a sensible solution should distinguish between the epistemically desirable and undesirable instances of epistemic autonomy in a plausible way. The current solution to the Puzzle of Epistemic Autonomy does precisely that. For instance, it implies that those of us who are laypeople with respect to AGW should not base our beliefs on our own autonomous assessment of the theory; rather, we should defer to the experts (i.e., climate scientists). 
On the other hand, to the extent that we are considered experts on a given claim or topic – e.g., as the contributors to this volume are taken to be experts on epistemic autonomy itself – the current solution implies that one should resist deferring to other experts on that topic and instead make up one’s own mind. And the reason one should do this is the altruistic one that making up one’s own mind benefits other epistemic agents. Believe me, I’m an expert – or don’t, if you are one as well.13
Notes

1 I discuss this argument in more detail in Dellsén, 2020b: 347–350.
2 Huemer also appears to endorse the claim that epistemic autonomy is incoherent (see Huemer, 2005: 525–526).
3 There is no category here corresponding to directly altruistic reasons, since such reasons would entail that S’s critical evaluation of {Pi} could somehow constitute the other agent’s being in a superior epistemic state with regard to {Pi}. I suppose this could be palatable to certain hardline externalists, but I won’t consider this option further here.
4 As Boyd (2017) points out, this might not be true of absolutely all cases of understanding, since achieving understanding based on various bits of information sometimes involves no real effort at all. For example, if you know that your friend has an early class to teach, and that your friend knows that there is construction on the road to the university, then it involves little effort to understand why your friend left their house especially early this morning (Boyd, 2017: 14). Malfatti (2020; see also Malfatti, 2019) argues that understanding can be transmitted through testimony without significantly more effort on the receiver’s end than in cases of knowledge through testimony. However, Malfatti’s argument would not undermine the current solution to the Puzzle of Epistemic Autonomy since it would, in conjunction with the argument below, simply show that there is a wider range of epistemic states (viz. understanding and knowledge) that epistemic autonomy would essentially contribute to producing.
5 In the course of arguing against ‘lucky’ understanding, Khalifa (2013; see also Khalifa, 2017, ch. 7) proposes a partial account of grasping according to which reliable evaluation of alternative explanations is necessary for grasping. Khalifa’s argument for this claim rests on the assumption that reliable evaluation of alternative explanations is necessary for understanding – an assumption that I think should be rejected (roughly on the same grounds that I reject justification requirements on understanding – see Dellsén, 2017, 2018a). Furthermore, even if reliable evaluation of alternative explanations were necessary for grasping, this would be irrelevant for our current purposes since one way – indeed, the most reliable way in most cases – to evaluate explanations would be to consult (other) experts rather than to attempt to evaluate them for oneself.
6 See also Matheson, 2015 and Lougheed, 2020, which is inspired by Elgin, 2010.
7 We assume here, for convenience, that n is even.
8 So we are stipulating that n – (k + 1) < k.
9 Note that this is not to deny that there are plenty of examples in which disagreement has brought epistemic benefits (see, e.g., Lougheed, 2020: 65–69). Nor is it to deny that disagreement and successful inquiry are statistically correlated (see, e.g., Schulz-Hardt et al. 2002, cited in Matheson, 2015).
10 My description of this episode follows Turner, 1993, 1994.
11 See, e.g., Goldman, 2001, Anderson, 2011, Guerrero, 2017, Dellsén, 2018b, and Nguyen, 2020.
12 At the very minimum, everyone is an expert with respect to their own lives, personal history, and preferences.
13 Many thanks to Zach Barnett, Patrick Connolly, Luke Elson, John Lawless, Dan Layman, Kirk Lougheed, Federica Malfatti, Jon Matheson, and Nate Sharadin for very helpful feedback on drafts.
References

Anderson, E. (2011). Democracy, public policy, and lay assessments of scientific testimony. Episteme, 8, 144–164.
Bird, A. (2007). What is scientific progress? Noûs, 41, 64–89.
Bourget, D. (2017). The role of consciousness in grasping and understanding. Philosophy and Phenomenological Research, 95, 285–318.
Boyd, K. (2017). Testifying understanding. Episteme, 14, 103–127.
Cavender, N., & Kahane, H. (2009). Logic and contemporary rhetoric: The use of reason in everyday life (11th ed.). Belmont: Wadsworth.
Constantin, J., & Grundmann, T. (2018). Epistemic authority: Preemption through source sensitive defeat. Synthese. doi:10.1007/s11229-018-01923-x.

We Owe it to Others 321

De Cruz, H., & De Smedt, J. (2013). The value of epistemic disagreement in scientific practice: The case of Homo floresiensis. Studies in History and Philosophy of Science, 44, 169–177.
de Regt, H. W. (2017). Understanding scientific understanding. Oxford: Oxford University Press.
Dellsén, F. (2016). Scientific progress: Knowledge versus understanding. Studies in History and Philosophy of Science, 56, 72–83.
Dellsén, F. (2017). Understanding without justification or belief. Ratio, 30, 239–254.
Dellsén, F. (2018a). Deductive cogency, understanding, and acceptance. Synthese, 195, 3121–3141.
Dellsén, F. (2018b). When expert disagreement supports the consensus. Australasian Journal of Philosophy, 96, 142–156.
Dellsén, F. (2020a). Beyond explanation: Understanding as dependency modeling. The British Journal for the Philosophy of Science, 71, 1261–1286.
Dellsén, F. (2020b). The epistemic value of expert autonomy. Philosophy and Phenomenological Research, 100, 344–361.
Descartes, R. (1985/1628). Rules for the direction of the mind. In J. Cottingham, R. Stoothoff, & D. Murdoch (Eds.), The philosophical writings of Descartes, volume I. Cambridge: Cambridge University Press.
Elgin, C. Z. (2010). Persistent disagreement. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 53–68). Oxford: Oxford University Press.
Elgin, C. Z. (2017). True enough. Cambridge, MA: MIT Press.
Foley, R. (2001). Intellectual trust in oneself and others. Cambridge: Cambridge University Press.
Goldman, A. I. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63, 85–110.
Grimm, S. (2006). Is understanding a species of knowledge? British Journal for the Philosophy of Science, 57, 515–535.
Guerrero, A. (2017). Living with ignorance in a world of experts. In R. Peels (Ed.), Perspectives on ignorance from moral and social philosophy (pp. 168–197). New York: Routledge.
Hills, A. (2009). Moral testimony and moral epistemology. Ethics, 120, 94–127.
Hills, A. (2016). Understanding why. Noûs, 50, 661–688.
Huemer, M. (2005). Is critical thinking epistemically responsible? Metaphilosophy, 36, 522–531.
Kant, I. (1991/1784). An answer to the question: What is enlightenment? In H. Reiss (Ed.), Political writings (2nd ed., pp. 54–60). Cambridge: Cambridge University Press.
Khalifa, K. (2013). Understanding, grasping and luck. Episteme, 10, 1–17.
Khalifa, K. (2017). Understanding, explanation, and scientific knowledge. Cambridge: Cambridge University Press.
Kitcher, P. (1990). The division of cognitive labor. Journal of Philosophy, 87, 5–21.
Kvanvig, J. (2003). The value of knowledge and the pursuit of understanding. Cambridge: Cambridge University Press.
Locke, J. (1975/1689). An essay concerning human understanding. Oxford: Clarendon Press.
Lougheed, K. (2020). The epistemic benefits of disagreement. Cham: Springer.
Malfatti, F. I. (2019). On understanding and testimony. Erkenntnis. doi:10.1007/s10670-019-00157-8.
Malfatti, F. I. (2020). Can testimony transmit understanding? Theoria. doi:10.1111/theo.12220.
Matheson, J. (2015). Disagreement and the ethics of belief. In J. Collier (Ed.), The future of social epistemology: A collective vision (pp. 139–148). Lanham: Rowman and Littlefield.
Mill, J. S. (1956/1859). On liberty. Indianapolis: Bobbs-Merrill.
Nguyen, T. C. (2020). Cognitive islands and runaway echo chambers: Problems for epistemic dependence on experts. Synthese, 197, 2803–2821.
Nussbaum, M. (2017). Not for profit: Why democracy needs the humanities (Updated ed.). Princeton: Princeton University Press.
Potochnik, A. (2017). Idealization and the aims of science. Chicago, IL: University of Chicago Press.
Pritchard, D. (2009). Knowledge, understanding, and epistemic value. In A. O'Hear (Ed.), Epistemology (Royal Institute of Philosophy Lectures) (pp. 19–43). Cambridge: Cambridge University Press.
Pritchard, D. (2010). Knowledge and understanding. In The nature and value of knowledge: Three investigations (pp. 3–90). Oxford: Oxford University Press.
Schulz-Hardt, S., Jochims, M., & Frey, D. (2002). Productive conflict in group decision making: Genuine and contrived dissent as strategies to counteract biased information seeking. Organizational Behavior and Human Decision Processes, 88, 175–195.
Strevens, M. (2013). No understanding without explanation. Studies in History and Philosophy of Science, 44, 510–515.
Turner, R. S. (1993). Vision studies in Germany: Helmholtz versus Hering. Osiris, 8, 80–103.
Turner, R. S. (1994). In the eye's mind: Vision and the Helmholtz-Hering controversy. Princeton: Princeton University Press.
Wilkenfeld, D. A. (2013). Understanding as representation manipulability. Synthese, 190, 997–1016.
Zagzebski, L. (2007). Ethical and epistemic egoism and the ideal of autonomy. Episteme, 4, 252–263.
Zagzebski, L. (2008). On epistemology. Belmont, CA: Wadsworth.
17 Epistemic Self-Governance and Trusting the Word of Others: Is There a Conflict?

Elizabeth Fricker
Having autonomy means being in control of one’s life. Practical autonomy means being in control of what one does; epistemic autonomy, being in control of what one believes. This seems like a good thing to have. Trusting others means allowing oneself to be dependent on others: to rely on them in various ways. All kinds of interpersonal goods and other benefits can arise from this. So trusting, and being capable of trust, seems like a good thing. But there is a prima facie conflict between these goods. If I am dependent on others, then ipso facto I am not fully in control of my life. If I am dependent on them to do things on my behalf that I lack the skill to do myself, I lack practical autonomy; if I depend on them for knowledge, or to exercise epistemic skills that I myself lack, then that reduces my epistemic autonomy. In this chapter I examine further the tensions between these values. If trust and autonomy are both virtues, then it seems there is not a unity in the virtues: possessing one to the highest degree precludes possessing another. To be completely autonomous, one must not trust; and to trust entails forgoing autonomy.
17.1 Practical and Epistemic Self-Governance and Trust

Self-governance, being oneself in control of one's self and one's life, seems like a good thing, even an ideal to be aspired to. But the ability to enter into trusting relationships is also an important good of human life. We are social and emotional creatures, and a good human life involves cooperative and caring relationships with others. An indispensable part of these is allowing oneself to be dependent on others for various needs, and vulnerable to being harmed if others betray one's trust. So in the practical domain there is an opposition between self-governance and trust. A good human life will strike a suitable balance between retaining control oneself of how things that matter to one progress, and allowing delegation of this to trusted others; and it will find a suitable balance between emotional neediness and self-reliance. In the practical domain of action,
and in one's personal and social life, self-governance is not an absolute unqualified good, but one to be traded off against the goods that come with dependence on others. But what about one's epistemic life? Is there a similar incompatibility between epistemic self-governance, maintaining responsibility oneself for one's beliefs, and trust in others? The trust here is about matters relating to acquiring knowledge, what we will call "epistemic trust"1: most obviously, and the topic of this chapter, trust in others for what they tell us.2 As I have argued in a previous paper, the supposed ideal of the autonomous knower – someone who never takes another's word on any topic, and only believes what she can find out through her own cognitive resources – is no such thing (see Fricker, 2006). Each one of us (cognitively normal adult humans) is able to understand the limitations on what one can find out for oneself imposed by one's finite cognitive powers and restricted place in the world, and to appreciate the contrasted capacities and placing of others. And this understanding shows one that, on many topics, others are in a position to know about them, and are better placed than oneself to know. So it is irrational not to accept another's word on a topic when one knows she is in such a superior epistemic position to oneself. In that previous paper I argued that believing on the basis of accepting another's testimony is consistent with maintaining responsibility for one's beliefs, and thus with epistemic self-governance, if one is discriminating in whom one trusts – if one believes what someone tells one only when one has good evidence of her honesty and competence on her topic. In this chapter, I consolidate the thesis of that earlier paper. I first briefly discuss self-governance in relation to one's actions and desires (Section 17.2).
I then propose that epistemic self-governance requires forming and holding one’s beliefs rationally, and that this requires forming and holding one’s beliefs in suitable accordance with one’s evidence (Section 17.3). In Sections 17.4 and 17.5, I develop an account of trust. On this account, forming belief on a speaker’s say-so is a case of trust: one trusts the speaker with respect to her utterance (Section 17.6). My account shows there is no reason why trust cannot be based in evidence of trustworthiness – in our case of interest, trust in the speaker on her topic. When a hearer believes what she is told on such a basis, this is a case of forming belief in accordance with one’s evidence. So, we have shown that there is no incompatibility between epistemic self-governance and believing things on the basis of others’ testimony. As was argued in my earlier paper, this simply needs to be done with apt discrimination, in accordance with evidence of speaker trustworthiness. It is true that when I take your word on some matter, I do not myself have access to the evidence on which your knowledge is based. But I have higher-order evidence: evidence that you have evidence; and this is enough for my own belief to be evidentially grounded.
17.2 Self-Governance of One's Actions and Desires

Autonomy is self-governance, being oneself in control of one's life. What does this involve? An explication of self-governance may involve conditions on what actions are available to the subject, given her circumstances – these must not be too limited. But the focus in philosophical discussion has been mainly on the psychological conditions for self-governance: conditions on how one's actions are caused, plus conditions on one's desires, and also, as will be the focus of my interest, on one's beliefs. These conditions are both synchronic – regarding coherence – and diachronic, regarding how these states are caused (see Buss and Westlund, 2018). Self-governance of one's actions and of one's beliefs and other attitudes means governance by the dictates of reason: it requires acting, and forming one's attitudes, in accordance with rational requirements. Why so? Because, as famously articulated by Kant, only when reason is the source of the authority on which one acts, is this source within one's own will, not external to it. Self-governance involves both synchronic constraints on the relations between one's attitudes, and diachronic constraints on how one's actions, and one's attitudes, are caused. A component of self-governance is that one acts on a first-level desire only when it is, or if considered would be, reflectively endorsed at the meta-level: one wants, or is content, to be moved by such desires. Call this condition on self-governance reflective harmony. A reluctant addict, or someone under any form of compulsion they would like to be free of, fails this condition. Self-governance entails a capacity for meta-level reflection on one's first-level desires, and the ability to "control oneself" – to refrain from action on a strong desire, if this is judged not to be all things considered best. Reflective harmony is a synchronic condition required for self-governance. There are also diachronic constraints.
Even if someone’s desires at a given point in time are in reflective harmony, she lacks self-governance if these have been produced in her over time in a wrong kind of way – for instance by some process of manipulative brain-washing of which the subject is unaware, but would repudiate if she were made aware. It is inevitable, for humans, that their psychological states are produced in them by causal processes which, if traced back, go outside the subject’s own psychology and so outside her control. How to distinguish, amongst such causal processes, between those which do, versus do not, undermine self-governance? This question cannot be discussed adequately here, but I will propose a necessary criterion: the process must be one that survives reflective scrutiny. That is to say, if one finds out that this is how one’s attitude was caused, this does not give one reason to revise the attitude.
17.3 Self-Governance in Belief Formation and Sustaining

Discussions of self-governance have mainly focused on this matter of how one's desires are formed, how they are inter-related, and how they produce one's actions. But even if one's desires satisfy all the required conditions for self-governance, a person will not be in control of her life if her beliefs have not also been formed in a suitably self-authored and independent fashion. Being in control of one's life requires being in control of one's beliefs – of how one forms and sustains them. What is required for self-governance in belief-formation? The proposal here is that, first, self-governance in belief formation and sustaining requires doing so in accordance with the dictates of rationality; second, that this requires forming one's beliefs in accordance with one's evidence. Defense of the idea that self-governance is governance by the dictates of rationality is a topic that deserves and has had books devoted to it (see Kant, 1996; Korsgaard, 2009). No more will be said about it here than was already said above: that only this gives autonomy, not heteronomy of the will, since the laws of rationality are precisely those with which the rational will itself apprehends the necessity of its conformity. Why does forming and sustaining belief rationally require doing so in accordance with one's evidence? The connection sounds platitudinous. As David Hume wrote, "a wise man proportions his belief to the evidence" (Hume, 1975). But it is worth spelling out the facts that underwrite the connection between rationality of belief and evidence. Below I will offer a defense for the following thesis:

R-Evidentialism3: Evidence, and only evidence, provides rational grounds for belief.
R-Evidentialism is tendentious since it rules out that "pragmatic," non-truth-related considerations can ever provide rational grounds for belief.4 (Thus, for instance, R-Evidentialism rules out that loyalty to one's friend can provide reason to continue to believe in her innocence of a crime, despite mounting evidence of her guilt.) Some clarifications about terminology are needed: reasons for belief are items that give one rational grounds for belief, so belief based on reasons is rational belief. But rational belief is not necessarily belief based on reasons, since there is conceptual space for the idea of belief that is rational, though based on no grounds. Grounds for belief are items that are reflectively accessible to the believing subject – other beliefs are the salient candidates here, but on some views, experiences and other introspective seemings also provide grounds for belief (see Conee and Feldman, 2008). Why is only belief that is formed in suitable accordance with one's evidence rational? Well, note what some of the alternatives might be:
forming a belief via wishful thinking or self-deception, or via implantation of the belief in one by another agent, by a process of which one is unaware; or by somehow getting into a situation where another person has gained sway over one's psychology, so that one uncritically forms belief in anything she tells one (without this being mediated by an evidenced belief in her competence on her topic). These ways in which a belief is caused in one fail the criterion suggested above in connection with desires: realizing that this is how the belief was caused gives one reason to reject it.5 This fact is indicative of the irrationality of these ways of coming to believe – they cannot withstand rational scrutiny. Why are these methods unable to withstand reflective scrutiny, while forming one's belief in apt response to evidence is a method that is stable under scrutiny? Wishful thinking and unconscious brain-washing are not ways of acquiring beliefs that withstand rational reflective scrutiny, because such a way in which a belief was acquired does nothing to indicate its likely truth. Belief by its nature aims at truth: to believe a proposition that represents a state of affairs P is to believe that P is how things are in the world external to and independent of one's believing. When one ascribes belief to another person, one ascribes a representational mental state; but belief, for the believer, is not focused on her own mind, but on the world. This is why "P, and I don't believe that P," and "I believe that P, and not-P" are both statements that no one could ever cogently make. To realize that what one believed is not true is ipso facto to cease to believe it; and to discover that a proposition is true is ipso facto to come to believe it.
Forming one’s beliefs via suitable response to one’s evidence is a method that survives reflective scrutiny, because E is evidence for a proposition P just if E’s obtaining indicates that P is very likely true. This is pretty much an analytic truth about the notion of evidence. What kinds of items constitute evidence is itself a philosophical issue. The notion of evidence plays different roles in different domains – in law courts, in science, and in epistemology. It is plausible that there is not a single view of what items constitute evidence that is apt for all these diverse roles.6 But common to all of these roles is the core idea that an item E is evidence for a proposition P just if E’s existing/obtaining/being so, indicates that P is very likely true. And it is because evidence for P indicates P to be true, that it provides reason to believe P. E indicates P’s truth for one just if, relative to one’s background knowledge, given that E obtains, it is very likely that P is true. Thus, for instance, for anyone with basic worldly knowledge, smoke is evidence of fire, the doorbell ringing is evidence that someone is at the door, the ground being wet is evidence that it has rained recently. The evidential connection that holds for one between E and P may be due to an observed natural correlation between them, or via inference to the best explanation.
Some accounts equate one's ultimate evidence for one's empirical beliefs with one's seemings – perceptual and intellectual (see Conee and Feldman, 2008). But we will take evidence for one's beliefs to be propositional: the proposition that the doorbell rang makes very likely true the proposition that there is someone at the door (the underlying relation here being inference to the best explanation). Such evidential relations, and so the notion of evidence, are relative to a background state of knowledge or belief; and so we take evidence to be evidence for a person who has that background knowledge. If evidence is to feature in an account of what makes beliefs rational, it must be evidence which the person possesses. So, for our current project, we can characterize evidence thus:

(E): A proposition E is evidence for S that P relative to her current belief set B1 … Bn just if adding E to B1 … Bn increases S's subjective probability that P: P is more likely to be true, in her epistemic position post receiving E, than in her epistemic position prior to receiving E.

Subjective probability here must be understood as an epistemic, not a brutely psychological notion: that her subjective probability for P is raised by acquiring E is a fact about S's epistemic standing before and after receiving E. Our definition of evidence (E), together with the fact that belief by its nature aims at truth, explains why the only rational ground there can be for belief in a proposition P is (sufficiently strong) evidence for P. Evidence for P is simply any truth-indicating basis for believing P. And non-truth-indicating factors are not apt to rationalize belief since belief aims at truth. R-Evidentialism is shown to be a necessary truth. (The substance of R-Evidentialism is to rule out non-evidential grounds for belief. R-Evidentialism is consistent with the thesis that there are some beliefs that are held rationally, but on no grounds.
Our associated slogan that rationality requires forming one’s beliefs in accordance with one’s evidence is to be understood as allowing this.) A belief P that is based on grounds, we have here shown, is rational just if it is suitably based on one’s evidence. It is rational to believe P just if one’s evidence renders P sufficiently likely. Sufficiency will be a high degree of probability, but less than absolute certainty – empirical evidence does not entail the truth of what it is evidence for. Taking evidence to be propositional, plus taking the evidence that bears on the rationality of one’s beliefs to be evidence that one possesses, means that evidencing is a relation on one’s beliefs: some of one’s beliefs are evidence for other of one’s beliefs. For instance, one’s belief that the doorbell rang is evidence (given one’s other beliefs) for one’s belief that there is someone at the door. So the conception of all evidence as propositional serves an account of how inferential beliefs – beliefs that are based on other beliefs – are rational. It fits with a conception of beliefs as divided
into inferentially versus non-inferentially rational, and is silent on the question of how the latter are acquired in a way that is normatively apt. My topic in this chapter is to explore whether accepting what one is told can be done in a way that respects the rational requirement to form belief only in accordance with one's evidence. We can dismiss the idea that such testimonial beliefs are non-inferentially rationalized by experience in the way that perceptual belief arguably is.7 Belief from accepting the word of others is normatively inferential belief, based on other beliefs.8 This being so, a conception of all evidence as propositional is adequate to address our issue; we need not venture into the controversial territory of whether non-doxastic conscious states such as perceptual experiences can evidence belief.9 Our argument above has established that a necessary part of epistemic self-governance is forming one's beliefs in accordance with one's evidence. This is a rational requirement, and conformity with the dictates of rationality is a condition of self-governance. This requirement is by no means all there is to being in control of one's epistemic life. There are diachronic conditions on epistemic self-governance: conditions on what determines how one focusses one's attention, what lines of enquiry one pursues and which one neglects, and so forth. As with desires, a capacity for second-level critique of one's first-level states is a requirement for self-governance: the ability to scrutinize how one's beliefs were acquired, and whether they cohere with each other; and to then sustain or revise them accordingly. In this chapter, I pursue the question: is forming belief from trusting others' testimony consistent with epistemic self-governance? I have made the first step by showing that epistemic self-governance requires forming belief in accordance with one's evidence.
Next, I develop an account of trust that fits what it is to trust a speaker regarding her utterance. This reveals that there is no incompatibility between trusting a speaker, forming belief from accepting her word on her topic, and basing one’s belief on suitable evidence. It is so based when one trusts the speaker on the basis of one’s evidence that she is trustworthy with respect to her utterance. This is higher-order evidence of the trustworthiness of the epistemic source in question.
17.4 A Definition of Reliance

The notion of trust has many uses in our lives, and it is doubtful whether there is a single explication that fits all of them (for a survey see Dormandy, 2020). Here I define a notion of trust-based reliance on an occasion. This is a three-place relation between a trusting person, a trusted item, and the thing for which it is trusted: its performance in some respect on an occasion. For instance, I may trust my car to get me home after the concert without breaking down. When the item is another person,
the performance is an action (or a refraining)10 by her. So we can say: A trusts T to φ on occasion O, where φ-ing is an action-type performed by T on O.11 Since someone's trust in other persons is my concern, I will talk in these terms henceforth. However, the notion of trust-based reliance I develop admits of extension to cover trust in inanimate items that have an excellence of their kind, and so proprietary virtues. Most accounts of trust identify reliance as a core component. I first define reliance; one gets progressively richer notions of trust by restricting the basis for the reliance. The basic idea of reliance is simple, but the definition is a little complicated. I'll give it, and then explain its components.

Simple Occasion-Reliance: A relies on T to φ on O if and only if:
(i) T's φ-ing on O is necessary in the circumstances, where these include A's own past and planned future actions, to ensure an outcome that is required for things to go well for A in some respect, and for her plans in this respect to be fulfilled; and
(ii) A knows this (knows that the condition specified in (i) holds); and
(iii) A has no "Plan B"; and
(iv) A either believes, or has an optimistic attitude to, both the proposition that not easily would T fail to φ on O, and the proposition that T will φ on O.

A "Plan B" here is another mechanism put in place by A, to ensure that the desired outcome will be brought about, even if T fails to φ on O. (Strictly, (i) would not hold if A did have a Plan B; but the absence of failsafe plans on A's part is a key element in reliance, and so I make it explicit in the definition).
Here’s an example of reliance: Garden Services: I rely on you to water my garden while I am away on holiday, just if your doing so is necessary in the circumstances for my plants not to die, which would upset me and be contrary to my plans; and I know this; and, knowing this, I have not put in place a “Plan B” – some other mechanism to ensure my garden gets watered, even if you fail to water it; and I either believe, or have an optimistic attitude to, the propositions that not easily would you fail to water my garden, and that you will water my garden. Reliance, as defined, involves a cognitive relation of the relier to the person or item relied on for her performance. A knows that she is dependent on T’s φ-ing, to ensure that things go well for her in the respect in question. Thus, I do not rely on the air around me to contain sufficient oxygen to support my life, if I am scientifically ignorant, and know nothing of the various gases in the atmosphere, and the role of oxygen in respiration. (We might coin a thinner notion of being reliant on, to capture just the
dependence element of reliance, without the cognitive aspect of awareness of this dependence.) A key component of reliance is absence of a Plan B on A's part. If I ask you to water my garden, but I also ask another neighbor to check if you've done it, and if not do so herself, then I am not relying on you for my plants not dying. Given that absence of a Plan B is the bottom line of reliance, it seems I could lack a Plan B, so be relying on you to φ, although I do not have outright belief that you will φ. I cannot be relying on you to φ if I know you will fail to φ12; but can I if I merely hope, with a low degree of credence, that you will do so? If absence of a Plan B is the hallmark of reliance, this suggests this is possible. But not so quick. If I only hope, but am not confident, that you will indeed water my garden, then I am not truly relying on you to do so. Though I have no Plan B, I am not relying on you to water my garden, since in my plans I am admitting as a live possibility that my garden may not get watered, and my plants die – I entertain and, as it were, shrug my shoulders at this possible letting down of me on your part. I am not relying on you to φ unless I am counting on the outcome of your φ-ing to come about, not entertaining as epistemically possible any other outcome; and this means that I am counting on you to φ, not entertaining failure to φ by you. This is inconsistent with merely hoping, without much optimism, that you will φ, or having a low credence in your φ-ing. But it does not require belief that you will φ. It requires either belief, or an optimistic attitude to the proposition that you will φ. This is a notion I coin, and which I now explain. To maintain an optimistic attitude to a proposition P is similar to accepting P as true while knowing that one does not have evidence for P sufficient for knowledge. But it involves more than acceptance.
To accept P as true is to take P as true for the purposes of planning, action etc. But maintaining an optimistic attitude to P also requires a psychological condition: avoidance of entertaining as a live epistemic possibility not-P. I think this is an attitude epistemologists and philosophers of mind should admit into their ontology. It is an attitude we perforce take up, or try to take up, to many propositions whose truth is a matter of concern to us, but that are uncertain for us. For instance, for me a year or so back, that my daughter will come back safely from her three months of travel in South America (she did!). An optimistic attitude to P is more than acceptance that P, but less than belief. If one has outright belief that P one does not entertain as an epistemically live possibility not-P; but one can also succeed in the attempt to set oneself not to think about the possibility that not-P, even though one knows in one’s heart of hearts that one cannot strictly rule it out. “I’m not worrying, I’m sure she’ll be fine” – one tells others, and oneself; setting one’s attitude to optimism. (Or, in some cases, to pessimism – “I have a lottery ticket, but there’s no way I’m going to win” – it may be apt to set
oneself against being distracted by thoughts of a minute probability of financial gain.)13 If one has outright belief that T will φ, or if one has an optimistic attitude to the proposition that T will φ, then one is counting on T φ-ing; one does not entertain as an epistemically live possibility that T may fail to φ. This is required for truly relying on T to φ: if one entertains as epistemically possible that she may fail to φ, but does not put a Plan B in place, then one is tolerating the possibility that the desired outcome E ensured by T's φ-ing may not come about, and shrugging one's shoulders, accepting this may happen; hence not relying on E's coming about.14 Relying on T to φ on O is entirely consistent with knowing that T has the property that not easily would she fail to φ on O, and with knowing that T will φ on O; and one's reliance is most fully justified when one knows this.
17.5 Trust-Based Reliance

Trust-based reliance is a type of reliance. Reliance specifies that A either believes or has an optimistic attitude to the proposition that not easily would T fail to φ on O. But it says nothing of what grounds A has for this belief: why A thinks T would not easily fail to φ on O. Reliance is based in trust of the person when the ground for A’s belief that not easily would T fail to φ on O is her belief that T instantiates relevant epistemic and/or character virtues that will both motivate T and ensure her success in φ-ing on O. In contrast, A’s reliance on T to φ on O is not trust-based when A’s ground for her belief that not easily would T fail to φ on O is that she thinks some non-admirable selfish motivation will lead T to φ on O. For instance, my belief that someone to whom I have lent money will pay it back by the due date is not trust-based if I expect this only because she has signed a legally binding contract with me, and I could sue her if she failed to repay.

So, A’s reliance on T to φ on O is trust-based when her basis for belief or optimism that T will φ on O is that she believes or is optimistic that T has the relevant instance of this property:

TrustT,φ,O: Not easily would T fail to φ on O, due to her relevant virtues.15

I will call this T’s trustworthiness with respect to φ-ing on O.

I now define Trust-Based Occasion-Reliance. This definition differs from Simple Occasion-Reliance only in a restriction on A’s belief regarding T’s motivation for φ-ing, specified in its final clause:

A has trust-based reliance on T to φ on O if and only if:

(i) T’s φ-ing on O is necessary in the circumstances, where these include A’s own past and planned future actions, to ensure an outcome that is
required for things to go well for A in some respect, and for her plans in this respect to be fulfilled; and

(ii) A knows this (knows that the condition specified in (i) holds); and

(iii) A does not have a “Plan B”; and

(iv)* A either believes, or has an optimistic attitude to, the relevant instances of the propositions that “not easily would T fail to φ on O, due to her relevant virtues,” and that “T will φ on O due to her relevant virtues.”

We can also define mere reliance as reliance that is not trust-based. As with simple reliance, A’s having trust-based reliance on T to φ on O is consistent with lack of belief by A that T is trustworthy with respect to φ-ing on O, and lack of belief by A that T will φ on O; but A must have at least an optimistic attitude (as previously defined) to these propositions. And, as with simple reliance, A’s having trust-based reliance on T to φ on O is entirely consistent with her knowing that T is trustworthy with respect to φ-ing on O, and that T will φ on O. As with other stable character traits, a person can be known to be trustworthy with respect to an action one trusts her for via induction from past experience, and/or inference from one’s broader empirically based knowledge of her character and capabilities.

The essence of trusting T to φ on O is counting on T to φ on O (not entertaining as epistemically possible that she may fail to φ), and not having a Plan B in place. This in no way conflicts with knowing that T is trustworthy as regards the thing one trusts her to do. Indeed, this is the case when trust in her to do so is best justified. Trust can, and in many cases should, be empirically grounded in evidence of trustworthiness. This consequence falls out of my account of trust-based reliance. Reflection on cases reveals it as intuitively correct.
For instance, the management of an adventure-holiday company could be sued for trustingly relying on equipment used by its clients that had not been properly checked. And a company arranging after-school care for children could be prosecuted for employing people to look after the children – placed by the company in a position of trust with respect to their care – without a thorough vetting process before they were allowed to take up positions of responsibility as carers.

It is important to distinguish the idea of trust from that of epistemic faith. Trust can be based in nothing more than empirically ungrounded epistemic faith that the trustee is trustworthy. When so, the truster makes an epistemic leap to reliance on the trustee across an evidential gap. But trust can also be empirically grounded in evidence of the trustee’s trustworthiness. It is a conceptual mistake to think that one’s reliance is not based on trust unless this is given without seeking evidence of the trustee’s being worthy of it. Of course, it is possible to trust without adequate evidence to justify belief in trustworthiness, though (as argued above) one is
not really trusting unless one adopts an attitude of optimism to the proposition that the trustee is trustworthy. However, it is one thing for trust to be possible, and another for it to be justified. Trust is surely best justified when it is epistemically justified, by evidenced belief in trustworthiness.

Conceptual confusion may arise because there is, nonetheless, a grain of truth lurking in the idea that trust involves making an epistemic leap of faith beyond what the evidence warrants. It is one thing to know that T is trustworthy with respect to φ-ing on O; it is another to know that she will φ on O. The former does not and cannot entail the latter. No empirically verifiable character property of a person could do so.16 However conscientious and capable my garden-watering friend is, sufficient bad luck could render her incapable of executing her agreed duty in this regard; the most prudent and honest person could succumb to unpredictable financial bad luck, and be unable to repay her debt on time. Hence the inference from T’s trustworthiness with respect to φ-ing on O, to the proposition that she will indeed φ on O, is a non-deductive empirical inference. I think there are many cases where the inference is sufficiently well supported to provide empirical knowledge that T will φ on O – will fulfil the trust placed in her – antecedent to her doing so. But this is never entailed.

This being so, trusting, even where best justified because based in knowledge that the trustee is trustworthy, always involves an element of epistemic risk. In trusting you to φ on O, I trust that when the occasion comes, your relevant character virtues and competences, which I know you to possess, will carry the day. But it is entirely consistent with the laws of physics that they should fail to do so.
What matters for our present concern is that there is nothing to prevent one from placing one’s trust in another on some matter only when one has evidence of her trustworthiness; and this includes trusting a speaker as regards what she tells one.
17.6 A Key Role for Trust-Based Reliance

The notion of trust as trust-based occasion-reliance developed here is relatively thin. First, it is not intrinsically reciprocal (though it is consistent with reciprocal awareness between A and T of A’s dependence on T). A may have trust-based reliance on T to φ on O, while T is unaware of A’s reliance. I could trust my partner to water the garden while I am away, out of relevant character virtues – in this case, effective conscientiousness and love of the plants – although she does not know that I care about the garden and would be upset if I returned to find the grass and other plants yellowed and even dead.

Second, and relatedly, while the motivation and competences ensuring T’s successful action must be relevant virtues, in contrast with non-admirable motivations – this is what makes the notion of trust applicable – nonetheless, T need not be motivated even in part by trust-responsiveness.
That is, she need not be motivated by a disposition to act as I trust her to, through recognition of the fact that I trust her: to be trustworthy because she recognizes that I am trusting. For instance, suppose M and N are mid-way through an acrimonious divorce. It is M’s turn to have the children for the week. N may trust M to care effectively for the children out of conscientious duty and love for them; N is aware that, as regards his attitude to N, M would prefer to hurt N if possible; but he would not use the children as tools, sacrificing their welfare, to gain revenge on N. This is a case where N trusts M to look after the children, on our definition of trust-based reliance. It is not a case of trust based in trust-responsiveness: this mechanism is neither why M looks after the children well, nor why N expects him to do so.

This relatively thin notion of trust-based reliance fits one set of ordinary language uses of “trust.” In our N/M divorce example we can imagine N saying, “I know M would love to hurt me, but I trust him to care for the children; he would not sacrifice them to get at me.” The idea of trust-based reliance extends naturally to types of inanimate object that have a function or purpose, allowing a notion of good performance or excellence of their kind. It is natural to say, “I trust my car to get me back from the concert without breaking down.”

Our thin notion of trust-based reliance has a key theoretical role: it characterizes the mechanism through which both recipients and other observers of a speaker’s testimony come to know the fact told to the recipient by the speaker, when they take her word for what she tells. When one takes another’s word on some topic one believes the speaker, and forms belief in what she states on her say-so.
In telling one that P, the speaker presents herself as giving her word regarding P, offering one the right to believe P on her say-so; and she presents herself as suitably authoritative to give her word – as knowing what she states, P.17 In believing the speaker, one trusts her to be as she presents herself – suitably authoritative to give her word on her topic. It is natural and, as our account shows, correct to use the notion of trust here.

We can see how each clause of the definition of trust-based reliance is fulfilled: the desired outcome for me, the recipient, is that I acquire knowledge from you, the speaker, which requires that your action is one of speaking from knowledge18; I depend knowingly on you speaking from knowledge; if I believe on your say-so, without requiring other corroborating evidence for what you tell me before I believe it, then I have no Plan B19; and if indeed I trust you to speak from knowledge, forming belief in what you tell me on the basis of taking your offered word, then I must either believe, or have an optimistic attitude to, both the proposition that not easily would you tell me20 that P unless you knew that P,21 due to your honesty and your competence regarding P; and the proposition that you do indeed speak from knowledge in stating that P, due to your honesty and competence. More precisely, antecedent to accepting
your word, I must at least have an optimistic attitude to these two propositions; once I have formed belief on your say-so, I must then believe that you spoke from knowledge, since that you did so is a rational commitment of my believing on your say-so (see Fricker, 2015).22 This is what it is to take a speaker’s word for what she states; the epistemic mechanism of trusting her as regards her utterance that this involves. The recipient trusts the speaker to speak from knowledge of what she states, and forms belief on that basis. So she is rationally committed to the proposition that the speaker spoke from knowledge; finding out she did not do so is a defeater for her belief. This fact identifies this basis for belief (see Fricker, 2015).

Given this is the mechanism, there is a further question as to under what conditions it yields knowledge: is it enough to have an optimistic attitude to the proposition that the speaker is trustworthy, or must one have evidenced belief or even knowledge of this? My own view is that, for testimonial belief to be fully justified, one must have evidence of the speaker’s trustworthiness. When a recipient R (or an onlooker) justifiedly forms belief in a proposition P told to her by a speaker S through taking S’s word for what she states, her belief is justified by this available backing-up thought, for whose premises she has evidence: S told me/R that P; not easily would she do so unless she knew that P; so, very probably, P. Call this the inference from trustworthiness.

This local reductionist stance on the epistemology of testimony is part of my broader commitment to R-Evidentialism. But to argue for it is not the topic of this chapter (for my account see Fricker, 2017).
The result of concern for our present question is simply that it is entirely possible to trust in what others tell one only when one has evidence of their trustworthiness.23 One can be suitably discriminating in whom one trusts, wisely proportioning one’s belief in what one is told, like the rest of one’s beliefs, to one’s evidence – in this case, evidence of speakers’ trustworthiness. The testimony-based beliefs of one who does this are rationally held. Hence there is no incompatibility, not even a tension, between forming belief from trusting the word of others, and maintaining self-governance with regard to one’s beliefs. Our further exploration of the matter has confirmed what I first argued in Fricker (2006).
17.7 Alternative Approaches; Conclusion

My local reductionism is not the only game in town as an account of what justifies recipients of testimony in believing what they are told. There is no difference in principle between the evidence available to the addressee of a telling and to onlookers, and so in my view there is no special reason for belief in what one is told available to the addressee alone.24 “Assurance theorists” of testimonial justification take a different view. They invoke a richer notion of reciprocal trust, an interpersonal
relation between speaker and addressee. And they argue that norms of trust that govern this provide a non-evidential reason to believe what she is told that applies to the addressee alone25 (see Hinchman, 2005; Faulkner, 2011; McMyler, 2011). On such an account of the reasons that ground belief in what one is told, this does conflict with our requirement for epistemic self-governance of believing only in accordance with one’s evidence. (Assurance theorists may bite this bullet, and conclude so much the worse for self-governance: as in other areas of life, so with testimony there is a conflict, and a trade-off must be found.)

I say the addressee has no special reason to believe what she is told only available to her. Nonetheless, there are important normative differences between the position of the addressee and onlookers. The addressee can challenge someone who tells her that P to explain how she knows that P, and has a right of complaint to her if she lacked the epistemic authority to give her word that P. And the addressee, in virtue of being addressed, invited by the speaker to trust her, incurs a duty: to take the speaker seriously – to listen to her and hear her out, to consider carefully her claim and perhaps seek further evidence on it. What she does not have is a special permission or mandate simply to believe the speaker in virtue of being invited by her to do so. There are no second-personal epistemic reasons, reasons to believe; only evidence provides these. (On second-personal reasons see Darwall, 2006.)

Our analytic account of a thin notion of trust, trust-based reliance, has shown there is no reason why one cannot form beliefs through trusting the word of others in accordance with the requirement to form and sustain one’s beliefs in accordance with one’s evidence, and so retain epistemic self-governance. It is worth spelling out further why there is no problem here.
First, there is no conceptual difficulty with the idea of trusting only when one has evidence of the trustee’s trustworthiness. This applies no less in the case of epistemic trust: trust in the epistemic and character virtues of the teller to ensure she gives her word only when suitably authoritative to do so, through which trust one acquires a new belief. Our account of trust has shown this. Trustworthiness is an empirically confirmable property of a person, and there is no conceptual incoherence in the idea of placing trusting reliance only in persons whom one’s evidence shows to be trustworthy. As was shown earlier, to think trust must be bestowed without discrimination is to confuse trust with epistemic faith. It is possible to trust someone without evidence of their trustworthiness, but this is usually unwise. This is as true of trusting others for what they tell one, as for trusting them for other matters – such as to pay back money one has lent to them out of conscientiousness and good financial planning, and not only because they would risk going to jail if they failed to.26

Second, it may seem puzzling that belief from taking the word of another can be suitably evidence based. Surely the essence of taking
another’s word on some matter P is that one has to trust them that P is so, since one lacks evidence regarding P oneself? Absolutely. But one’s belief that P on the strength of another’s word for it can nonetheless be evidence based, because one has higher-order evidence for P: when one has evidence that a certain speaker is trustworthy on the topic of P, one has evidence that a certain source of putative information that P is reliable, and this itself, in combination with the output of that source, is evidence for P.

This point deserves further expansion. Some writers, most notably Richard Moran (Moran, 2018), make much of a contrast between taking the speaker’s word, “believing the speaker,” versus taking her testimony as evidence. Believing the speaker, Moran writes, “provide[s] a kind of reason for belief that is categorically different from that provided by evidence” (Moran, 2018, p. 40). So, he suggests, though it is possible to take testimony that P as evidence for P, this is not what one does when one trusts the speaker, taking her word for what she states. Moran’s thesis, it emerges, hinges on an idiosyncratically narrow conception of “evidence” (see Fricker, 2019; Moran, 2019).

But it is worth dispelling some confusion about the idea that testimony that P is evidence for P. “The evidential view of testimony” is frequently referred to in the literature, usually in disparaging tones. It is suggested that it fails to take into account the special nature of testimony as an interpersonal encounter – intentional communicative action to an audience by an intelligent agent. And it is suggested that the idea that testimony that P warrants belief that P through providing evidence that P – that is, a truth-related basis for belief that P – is in conflict with the idea of taking the speaker’s word, and trusting her – “believing the speaker.” But there is no such thing as “the” evidential view.
We must distinguish at least two very different views. One view – elsewhere I have called it “brutely Humean reductionism” (see Fricker, 2017) – assimilates the basis for believing in what one is told to something like the basis one has for believing the readings of a thermometer, or a fuel gauge: one has evidence that the level of the mercury, or the placing of the needle, is correlated with a certain temperature, or amount of fuel left in the tank. I have argued that this correlationist view of how human testimony provides a basis to believe what one is told is hopeless (see Fricker, 2017).

But the link between evidence and what it is evidence for may be by inference to the best explanation. This is the kind of evidential link that is involved when one trusts a speaker because one has grounds to believe her trustworthy on her topic. The inference from trustworthiness goes thus: S told me that P; not easily would she do so unless she knew that P; so, very probably, P. This is the backing-up thought that one has available, when one accepts what the speaker tells one in an evidentially based way (see Fricker, 2020). When one trusts the speaker, accepting her word for what she states, on such a basis, one’s belief is based in evidence – the evidence that she gave her word that P,
and her word is trustworthy. This kind of evidential basis does not ignore the fact that this is an intentional action aimed at communication, but makes its inferential step through appreciation of this fact. It uses higher-order evidence of the trustworthiness of the source, the speaker’s word, and this provides grounds to take her at her word, to trust her (for details see Fricker, 2017).

In many cases where I accept what others tell me they have an epistemic expertise I lack, and I am not capable of finding out what they tell me for myself; I could not evaluate the evidence, or recognize counter-evidence. So I am epistemically dependent on these experts for the knowledge I gain from them. But we have seen that this dependence is consistent with maintaining responsibility for my beliefs.

In this chapter I have sought to refute the thesis that there is an opposition between the very idea of trusting a speaker, taking her word for what she states, and that of forming belief only on the basis of one’s evidence. These things are consistent, and hence there is no incompatibility between forming belief on the basis of accepting what others tell us, and maintaining epistemic self-governance.
Notes

1 On the account developed below epistemic trust has the same analysis as trust generally; it is distinguished by what it is trust for. Trust in a teller with respect to her utterance is epistemic first, in that one trusts the speaker to give one new knowledge; second, what one trusts in is the speaker’s epistemic and character virtues as knower and communicator – one trusts her qua epistemic agent.

2 One trusts others in the dynamics of knowledge-generation when one delegates to them, or shares with them, the collection of data, conducting of experiments and so forth.

3 The broad idea of evidentialism admits of different specific formulations. It is most often formulated concerning justifiedness of belief. Plausibly a belief is justified just if it is rational. Conee and Feldman (1985) formulate evidentialism as the thesis that justification supervenes on one’s evidence: if two people A and B have the same evidence, then for any proposition P, A has propositional justification to believe P just if B also does so. This supervenience claim is stronger than evidentialism as formulated here.

4 In my view R-Evidentialism is a necessary truth; but it is not an obvious one. R-Evidentialism is consistent with the thesis that pragmatic considerations, such as how much is at stake for one regarding whether P is true or not, can affect thresholds: how strong the evidence for P must be, before one forms outright belief in P.

5 Note that the claim here is that realizing how one’s belief was formed gives one reason to reject it, not that one will in fact do so. This is especially relevant in the last case, of a person wholly under the sway of another.

6 For an excellent survey see Kelly (2016).

7 See Fricker (2006a) for an argument rejecting this view as incoherent.

8 The topic of this chapter is whether it is possible for trust to be evidence-based.
There are views of the epistemology of testimony on which there is a default entitlement to trust the speaker, without needing evidence of their trustworthiness. But even these views will allow that one’s belief in what is told rests on one’s belief that one has witnessed a telling of it.
9 I am here concerned with the epistemic relation between knowing that a speech act of telling that P has been addressed to one, or to another in one’s presence, and coming rationally to believe that P on the speaker’s say-so. If we were concerned instead with how one knows what speech act has been made, then the non-doxastic conscious states, quasi-perceptions of utterance meaning, that constitute understanding of the utterance, would feature in the epistemology. See Fricker (2003).

10 In a formal sense, refraining from doing a certain action is itself an action. So I omit the qualification “or refraining” henceforth.

11 O may be an extended occasion, over which T must perform a series of sub-actions. For example, if I trust you to water my garden while I am away on holiday, this may involve you coming round every evening and watering my plants for a period of two weeks.

12 Throughout this discussion I assume that gross, barely intelligible failures of coherence in one’s beliefs and other attitudes are not possible. If you think they are possible, then take my definition to hold only for subjects who have broadly coherent attitudes.

13 Thanks to Tim Williamson for this point.

14 My argument here assumes that if one allows that T may fail to φ, then either one will have put in place a Plan B, or one thinks it would not be so bad if E were not to come about. However, one may be in such bad circumstances that though one regards not-E as intolerable, there is just no Plan B available to one. In such a circumstance one is still, in one sense, relying on T – say, to save one from the assassin. But in my sense, one is merely hoping that T will save one, not relying on her to do so. Perhaps one still relies on T to do everything in her power to save one, though not on her success.

15 The idea of an event that would not easily happen, and conversely of one that might easily happen, is one that we have an ordinary-language grasp of.
I will not offer an explicit semantics for it. However, it is important that the fact that a type of event would not easily happen does not entail that an event of that type does not, in fact, happen – a combination of very unlikely circumstances could bring it about.

16 Remember that our intended semantics for “not easily would T fail to φ on O” allows that she may have this property and yet, if an unforeseeable catastrophe occurs, fail to φ on O.

17 For this account of assertion see Fricker (2006) and Moran (2018). Fricker (2021a) gives a fuller account, including an analysis of the notion of presenting oneself as F.

18 In saying that for the audience to acquire knowledge, the speaker must speak from her knowledge, I invoke my own conception of how testimonial transmission works (see Fricker, 2015). This is not accepted by all; for instance Lackey (2008) holds it is enough that the speaker reliably speaks truth, and this is consistent with lack of knowledge. Those of Lackey’s persuasion can accept my main claim here, that accepting a speaker’s testimony is an instance of trust-based reliance, while substituting their own alternative conception of what precise property of the speaker’s action is relied on.

19 Let me forestall possible confusion here: a Plan B in this case, viz. when you tell me that P, is seeking further independent evidence as regards P, which would show P to be false, were it so. Seeking evidence regarding your trustworthiness, and believing you only when this reveals you to be trustworthy, is not having a Plan B. That is to say, it would not protect me from the harm of acquiring a false belief in the event that you fail my trust – fail to tell me the truth. Compare: if Plan A is driving by car to my destination, filling the tank
and checking the oil and tires before I set off is making sure Plan A works, not having a Plan B.

20 For overhearers, the property will be: not easily would T tell R that P unless she knew that P, due to her honesty and competence regarding P; where R is the person addressed.

21 This premise fits exactly into our general format when expressed: “not easily would T fail to speak from knowledge in a telling by her that P on O.” φ-ing on O in this case is: speaking from knowledge in her telling that P.

22 Objection: “there are entirely unreflective recipients of testimony, for instance young children, who form belief in what they are told, but do not trust the speaker in your sense, since they form belief in the stated proposition immediately and unreflectively, and lack any doxastic attitude to the proposition that the speaker is trustworthy in your sense.” Agreed; my aim is to characterize the mechanism of taking the speaker’s word for what she states. This describes language participants with an understanding of the nature of the transaction, that the speaker is offering her word; there is no aspiration to cover all cases where a person forms belief in response to an instance of testimony. I do insist this is the core mechanism that describes all cases of mature language participation. See Fricker (2015).

23 Here it has been shown that there is no incompatibility between the ideas of trust of a speaker, and basing belief in what she tells in evidence of her trustworthiness. This would be cold comfort if such evidence was very hard to come by. See Fricker (2002), (2015), and Sperber et al. (2010) for discussion of how evidence of trustworthiness is often available.

24 The addressee will, however, non-accidentally tend to be better placed to comprehend the utterance, since she is the one whose comprehension it has been crafted to suit. But there are cases where an onlooker has more grounds to hold the speaker trustworthy than the addressee.
25 This amounts to the thesis that norms of reciprocal trust justify the addressee in trusting the speaker on the basis of an optimistic attitude to her trustworthiness, without needing evidenced belief in this. This has epistemological consequences, since trust in testimony peculiarly has the feature that its immediate upshot is acquiring a new belief. (See Fricker (2021b).)

26 Even if not analytically impossible, is it contrary to norms of trust to trust someone only when one has evidence they are trustworthy on the relevant matter? Perhaps sometimes; but generally I think one is entitled to feel offended when someone does not trust one precisely when and because they know one well, and have had lots of past evidence of one’s trustworthiness that should convince them one is to be trusted.
References

Buss, S., & Westlund, A. (2018). Personal autonomy. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. Stanford, CA: Stanford University.

Conee, E., & Feldman, R. (1985). Evidentialism. Philosophical Studies, 48(1), 15–34.

Conee, E., & Feldman, R. (2008). Evidence. In Q. Smith (Ed.), Epistemology: New essays (pp. 83–104). Oxford: Oxford University Press.

Darwall, S. (2006). The second person standpoint: Morality, respect and accountability. Cambridge, MA: Harvard University Press.

Dormandy, K. (2020). Introduction: An overview of trust. In K. Dormandy (Ed.), Trust in epistemology. New York and Abingdon: Routledge.
Faulkner, P. (2011). Knowledge on trust. Oxford: Oxford University Press.

Fricker, E. (2002). Trusting others in the sciences: A priori or empirical warrant? Studies in History and Philosophy of Science, 33(2), 373–383.

Fricker, E. (2003). Understanding and knowledge of what is said. In A. Barber (Ed.), Epistemology of language (pp. 325–366). Oxford: Oxford University Press.

Fricker, E. (2006a). Second-hand knowledge. Philosophy and Phenomenological Research, 73(3), 592–681.

Fricker, E. (2006b). Testimony and epistemic autonomy. In J. Lackey & E. Sosa (Eds.), The epistemology of testimony (pp. 225–250). Oxford: Oxford University Press.

Fricker, E. (2015). How to make invidious distinctions amongst reliable testifiers. Episteme, 12, 173–202.

Fricker, E. (2017). Inference to the best explanation and the receipt of testimony: Testimonial reductionism vindicated. In K. McCain & T. Poston (Eds.), Best explanations: New essays on inference to the best explanation (pp. 262–294). Oxford: Oxford University Press.

Fricker, E. (2019). ‘Believing the speaker’ versus believing on evidence: A critique of Moran. European Journal of Philosophy, 27, 767–776.

Fricker, E. (2020). Review of Moran, R., The exchange of words. Mind.

Fricker, E. (2021a). An Austinian account of assertion. Analytic Philosophy.

Fricker, E. (2021b). Can trust work epistemic magic? Philosophical Topics.

Hinchman, E. (2005). Telling as inviting to trust. Philosophy and Phenomenological Research, 70(3), 562–587.

Hume, D. (1975). An enquiry concerning human understanding. In L. A. Selby-Bigge & P. H. Nidditch (Eds.), Enquiries concerning human understanding and concerning the principles of morals. Oxford: Clarendon Press.

Kant, I. (1996). The metaphysics of morals (M. Gregor, Trans.). Cambridge: Cambridge University Press.

Kelly, T. (2016). Evidence. In E. N. Zalta (Ed.), Stanford encyclopedia of philosophy. Stanford, CA: Stanford University.

Korsgaard, C. (2009).
Self-constitution: Agency, identity and integrity. Oxford: Oxford University Press. Lackey, J. (2008). Learning from words: Testimony as a source of knowledge. Oxford: Oxford University Press. McMyler, B. (2011). Testimony, trust, and authority. Oxford and New York: Oxford University Press. Moran, R. (2018). The exchange of words. New York: Oxford University Press. Moran, R. (2019). The exchange of words: Replies to critics. European Journal of Philosophy, 27, 786–795. Sperber, D., Ement, F. C., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind and Language, 25(4), 359–393.
Index
Page numbers followed by 'n' indicate notes.

Abramson, K. 262–265, 302n7
acceptance(s) 4, 62–63, 68, 69n4, 177, 331
Adler, J. 258
Ahlstrom-Vij, K. 95, 97, 99, 104–105, 108, 109n4, 111n16, 135, 139, 147n1, 148n13, 148n17, 153, 156, 162, 168, 191n14, 199
Alvarez, M. 251
Anderson, E. 113, 128n1, 128n2, 320n11
Anscombe, G.E.M. 276
Aristotle 157, 221, 227n5; Aristotelian 76, 154, 164, 220–221, 227n5
Arpaly, N. 277, 283–284, 285n7, 286n12
Ayer, A.J. 292
Baehr, J. 157, 170n6, 171n24, 174, 184, 191n2
Barnett, Z. 320n13
Baron, R. 303
Barrett, J. 191n12
Battaly, H. 170n6, 170n11, 189, 191n6, 192n36, 224
Beckman, A. 118–119
Berenstain, N. 242, 302n7
Bird, A. 310
Bjørdal, F.A. 303
Bourget, D. 311
Boyd, K. 319n4
Bradford, G. 213n5
Brandom, R. 303
Brighouse, H. 191n1
Broncano-Berrocal, F. 212n1, 253, 258
Broncano-Rodríguez, F. 201, 213n6
Bullock, E.C. 134–135, 139–142
Burge, T. 303
Carnap, R. 217
Carruthers, P. 247n5
Carter, J.A. 119, 128n7, 153–154, 156, 162, 170n21, 213n4, 299
Cassam, Q. 125, 128n5, 222–223
Cavender, N. 307
Cholbi, M. 212
Christensen, D. 255, 266n3, 303
Christman, J. 28–30, 88n1, 295
Church, I. 191n12
Clayton, M. 42
Clifford, W. 292
close-mindedness 223–224
Coady, C.A.J. 15n6, 15n11, 15n20, 178–179, 258, 285n2
Code, L. 9, 191n5, 191n8, 191n11, 247n4, 248n18
Cohen, J. 62, 69n4, 177
Conee, E. 326, 328, 339n3
Connolly, P. 320n13
Constantin, J. 308
Copernicus, N. 220–221
Corbí, J. 213n8
Craig, E. 237
Dancy, J. 251
Darwall, S. 337
De Cruz, H. 312–314
defer 210
Dellsén, F. 4, 191n13, 191n27
de Regt, H.W. 311
Descartes, R. 14n2, 55–56, 221, 272, 306; Cartesian 56
De Smedt, J. 312–314
Dewey, J. 43–44
Dokic, J. 233
Dormandy, K. 329
Dreier, J. 285n1
Driver, J. 278–279, 285n6, 286n14
Dual Process Hypothesis of Reflection/reasoning 41–42, 45–46, 48
Dworkin, G. 38n14, 43, 49, 132, 141
Ebels-Duggan, K. 181, 191n1
Elga, A. 255, 257
Elgin, C. 191n13, 192n30, 310, 320n6
Elson, L. 320n13
Enoch, D. 275, 279
epistemic harm 84, 187, 264
epistemic injustice 8, 12–13, 120, 122, 124, 126, 145–146, 187, 241; hermeneutical 13; testimonial 13, 159, 240–241
epistemic value 205, 231, 299
epistemic vice(s) 44, 124–126, 246
epistemic virtue 7–9
evidence 328; higher-order 256, 329
evidentialism 326
expert/experts 15n13, 75, 88n4, 96, 110n8, 110n12, 146, 159–160, 163, 186–188, 192n31, 220, 254–256, 258, 286n15, 299, 307–308, 312, 315–317
externalist(s) 22, 25, 28–29, 32, 37, 265, 319n3
family resemblance 72, 154
Faulkner, P. 247n11, 337
Feldman, R. 238, 303, 326, 339n3
Fisch, M. 215–220, 226n2, 227n3
Fogelin, R. 228n11
Foley, R. 285n2, 308, 312
Frankfurt, H. 38n14, 43, 59, 88n1, 216–217, 219–220, 223
Freire, P. 103
Fricker, E. 1, 3, 110n11, 153, 156, 161, 168, 191n7, 258, 302n4
Fricker, M. 170n13, 191n26, 232, 234, 240–241, 247n12
Friedman, J. 100–101, 109n6, 111n15, 148n18
Friedman, M. 218
gaslighting 251, 262–266
Gersema, E. 225
Gettier, E. 273
Gibbons, J. 303
Goldberg, S. 2, 191n8
Goldman, A. 5–6, 15n20, 28, 95, 97, 99, 103, 133, 138–140, 142, 144–145, 147n1, 148n19, 320n11
González de Prado, J. 248n19
Gordon, E.C. 213n6
Govier, T. 244, 248n19
Graham, P. 259, 302n1
grasping 205–206, 310
Grasswick, H. 175, 191n11, 248n18
Greco, J. 23, 38n7, 191n4
Grimm, S. 206, 310
Grundmann, T. 308
Guerrero, A. 320n11
Hannon, M. 213n6
Hardwig, J. 3, 175
Hare, R.M. 276
Harman, E. 283, 285n1
Harris, S. 225
Hazlett, A. 191n25
Hegel, G. 215
Helmholtz, H. 315–316
Hering, E. 315–316
Hills, A. 285n5, 285n10, 310–311
Hinchman, E. 191n24, 337
Hookway, C. 233
Hopkins, R. 285n9
Horowitz, S. 257
Howell, R.J. 275, 285n6
Huemer, M. 308, 319n2
Hume, D. 78, 258, 326
humility, intellectual 190, 250–251, 253–259
Hurka, T. 199
interference(s) 15n6, 86, 98, 103, 106, 108, 119, 134–138, 140–141, 143, 178–179, 181, 196, 232
internalist 22, 25, 27, 37, 38n14, 236–237
Jones, K. 242–244
Kahan, D. 123
Kahane, H. 307
Kant, I. 3, 14n2, 58, 65–69, 69n1, 78–79, 195–196, 215–217, 227n3, 276, 306, 326
Kaplan, J. 225
Kearns, S. 286n14
Kelly, T. 339n6
Kelp, C. 259
Khalifa, K. 310, 320n5
Kahneman, D. 45–46, 48–49
Kiesewetter, B. 257
King, N. 8, 154, 162, 164, 167, 191n3, 191n20, 191n21, 192n33
Kitcher, P. 316
Kornblith, H. 210
Korsgaard, C. 217, 326
Kuhn, T. 217, 227n5
Kukla, R. 239
Kvanvig, J. 310
Lackey, J. 340n18
Lasonen-Aarnio, M. 254, 256, 261
Lawless, J. 320n13
Layman, D. 320n13
Lehrer, K. 22, 37n5, 38n15
Levinson, M. 51n5
Leydon-Hardy, L. 302n7
Littlejohn, C. 303
Locke, J. 306
Lord, E. 253, 257
Lougheed, K. 8, 107, 133, 192n36, 226n1, 228n17, 285n1, 304n22, 320n6, 320n9
Lugones, M. 246
Lynch, M. 205–207, 228n11
Lyons, J. 258
Malfatti, F. 206, 320n4
Markovits, J. 286n12, 286n14, 286n15
Matheson, J. 107, 226n1, 228n15, 228n17, 247n3, 255, 257, 285n1, 303n17, 304n22, 320n6
McDowell, J. 217
McKenzie, C. 248n19
McKinnon, R. 302n7
McMyler, B. 191n7, 337
Meehan, D. 124–126
Mele, A.R. 27, 32, 37n1, 37n2, 38n12, 88n1
Mill, J.S. 132, 141, 197, 291–292, 312
Mogensen, A. 286n11
Montmarquet, J. 191n5
Moran, R. 191n24, 237–239, 247n11, 247n14–248n15, 248n17, 338, 340n17
Navarro, J. 213n8
Nedelsky, J. 191n10
Nelson, Q. 303
Nguyen, T. 192n32, 320n11
Nickel, P. 192n32, 285n9
Noggle, R. 51n11
Nozick, R. 273
nudging 102, 109n3, 114, 120–126
Nussbaum, M. 191n1, 307
Nyhan, B. 224
open-mindedness 235
Oshana, M. 180, 191n9
Parfit, D. 251
Peacock, G. 215, 220
Plantinga, A. 273, 285n2
Popper, K. 216–217
Potochnik, A. 310
Pritchard, D. 23, 50n1, 140, 142, 147n1, 148n13, 155, 170n18, 201, 203–205, 213n5, 227n6, 228n16, 258, 310
Proust, J. 233
Ranalli, C. 191n15
Raz, J. 2, 175
Reifler, J. 224
reliabilist(s)/reliabilism 28, 51n8; process 28
Riggs, W. 202
Riley, E. 120–124, 126, 128n3
Roberts, R. 3, 8–9, 119, 128n7, 154, 161, 170n21, 171n26, 179–180, 191n2, 191n3, 191n5, 191n22, 212n1
Rorty, R. 227n4
Rosen, G. 283
Scanlon, T. 8, 234
Schroeder, M. 251, 257
Schulz-Hardt, S. 320n9
Siegel, H. 191n1
self-deception 327
self-government 195–196, 199, 202, 215, 325
self-trust 243
Sellars, W. 303
Sharadin, N. 320n13
Shoemaker, D. 233
Simion, M. 259
Simmel, G. 73
Singer, D. 303
skepticism/skeptical/skeptic/skeptics 56, 88n1, 114–115, 117–118, 125, 134, 142, 156, 271, 273, 296
Sosa, E. 191n4, 200–201, 207–210
Spear, A. 262–264
Spelman, E. 246
Sperber, D. 341
Star, D. 286n14
Steel, R. 257
Strawson, P. 285n2
Strevens, M. 310
Sunstein, C. 120–121, 126, 128n3, 143
Swaine, L. 50n3
Sylvan, K. 251, 253, 257
Tanesini, A. 237, 244, 246, 248n15, 248n20, 302n7
Taylor, C. 217
testimonial injustice 240–241
Thaler, R. 120–121, 126, 143
Titelbaum, M. 255
Turner, S. 315, 320n10
understanding 188, 203–207, 309
van Fraassen, B. 222
Vega-Encabo, J. 8, 201, 212n1, 258
Waller, J. 228n10
Walzer, M. 219
Weatherson, B. 255
Webb, M. 285n2
Whitcomb, D. 190, 253, 266n1
Whiting, D. 206
Wilkenfeld, D.A. 311
Williams, M. 303
Williamson, T. 273, 340n13
Wittgenstein, L. 57, 227n5
Wolff, R.P. 276–277
Wood, W.J. 3, 8–9, 119, 128n7, 154, 161, 170n21, 171n26, 179–180, 191n2, 191n3, 191n5, 191n22, 212n1
Wright, C. 303
Wright, S. 192n36
Young, I. 242
Young, R. 197
Zagzebski, L. 1, 3, 50n1, 88n3, 119, 128n4, 128n7, 156, 162, 170n1, 171n27, 181–182, 188, 191n2, 191n5, 191n7, 191n17, 232, 234, 236, 247n9, 258–259, 308–310